Regenerate integrations.js (#16062)

Co-authored-by: ilyam8 <ilyam8@users.noreply.github.com>
Co-authored-by: Fotis Voutsas <fotis@netdata.cloud>
Netdata bot committed 2023-10-02 16:09:55 +03:00 (via GitHub)
parent 06da204652
commit 8be14d7cdd
290 changed files with 32146 additions and 7139 deletions


@@ -1125,6 +1125,8 @@ If you don't see the app/service you'd like to monitor in this list:
- [Network UPS Tools](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/network_ups_tools.md)
- [UPS (NUT)](https://github.com/netdata/go.d.plugin/blob/master/modules/upsd/integrations/ups_nut.md)
### VPNs
- [Fastd](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/fastd.md)


@@ -0,0 +1,114 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/apps.plugin/integrations/applications.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/apps.plugin/metadata.yaml"
sidebar_label: "Applications"
learn_status: "Published"
learn_rel_path: "Data Collection/Processes and System Services"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# Applications
Plugin: apps.plugin
Module: apps
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Monitor Applications for optimal software performance and resource usage.
This collector is supported on all platforms.
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
### Default Behavior
#### Auto-Detection
This integration doesn't support auto-detection.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
The default configuration for this integration is not expected to impose a significant performance impact on the system.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per Applications instance
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| apps.cpu | a dimension per app group | percentage |
| apps.cpu_user | a dimension per app group | percentage |
| apps.cpu_system | a dimension per app group | percentage |
| apps.cpu_guest | a dimension per app group | percentage |
| apps.mem | a dimension per app group | MiB |
| apps.rss | a dimension per app group | MiB |
| apps.vmem | a dimension per app group | MiB |
| apps.swap | a dimension per app group | MiB |
| apps.major_faults | a dimension per app group | page faults/s |
| apps.minor_faults | a dimension per app group | page faults/s |
| apps.preads | a dimension per app group | KiB/s |
| apps.pwrites | a dimension per app group | KiB/s |
| apps.lreads | a dimension per app group | KiB/s |
| apps.lwrites | a dimension per app group | KiB/s |
| apps.threads | a dimension per app group | threads |
| apps.processes | a dimension per app group | processes |
| apps.voluntary_ctxt_switches | a dimension per app group | switches/s |
| apps.involuntary_ctxt_switches | a dimension per app group | switches/s |
| apps.uptime | a dimension per app group | seconds |
| apps.uptime_min | a dimension per app group | seconds |
| apps.uptime_avg | a dimension per app group | seconds |
| apps.uptime_max | a dimension per app group | seconds |
| apps.files | a dimension per app group | open files |
| apps.sockets | a dimension per app group | open sockets |
| apps.pipes | a dimension per app group | open pipes |
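The "a dimension per app group" wording means apps.plugin aggregates per-process values into named groups (defined in `apps_groups.conf`). As a rough, hedged illustration of the idea behind a chart like `apps.rss` (not the plugin's actual implementation), the sketch below sums `VmRSS` from `/proc`, grouping by process name:

```bash
# Illustration only: group processes by name and sum their resident memory,
# mimicking what a per-app-group dimension of apps.rss represents.
# apps.plugin does this natively, with groups defined in apps_groups.conf.
awk '
  FNR == 1       { name = "" }          # reset at the start of each status file
  $1 == "Name:"  { name = $2 }          # process name
  $1 == "VmRSS:" { rss[name] += $2 }    # resident set size, in kB
  END { for (n in rss) printf "%-20s %10.1f MiB\n", n, rss[n] / 1024 }
' /proc/[0-9]*/status 2>/dev/null | sort -k2 -rn | head
```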
## Alerts
There are no alerts configured by default for this integration.
## Setup
### Prerequisites
No action required.
### Configuration
#### File
There is no configuration file.
#### Options
There are no configuration options.
#### Examples
There are no configuration examples.


@@ -0,0 +1,114 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/apps.plugin/integrations/user_groups.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/apps.plugin/metadata.yaml"
sidebar_label: "User Groups"
learn_status: "Published"
learn_rel_path: "Data Collection/Processes and System Services"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# User Groups
Plugin: apps.plugin
Module: groups
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
This integration monitors resource utilization in the context of user groups.
This collector is supported on all platforms.
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
### Default Behavior
#### Auto-Detection
This integration doesn't support auto-detection.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
The default configuration for this integration is not expected to impose a significant performance impact on the system.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per User Groups instance
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| groups.cpu | a dimension per user group | percentage |
| groups.cpu_user | a dimension per user group | percentage |
| groups.cpu_system | a dimension per user group | percentage |
| groups.cpu_guest | a dimension per user group | percentage |
| groups.mem | a dimension per user group | MiB |
| groups.rss | a dimension per user group | MiB |
| groups.vmem | a dimension per user group | MiB |
| groups.swap | a dimension per user group | MiB |
| groups.major_faults | a dimension per user group | page faults/s |
| groups.minor_faults | a dimension per user group | page faults/s |
| groups.preads | a dimension per user group | KiB/s |
| groups.pwrites | a dimension per user group | KiB/s |
| groups.lreads | a dimension per user group | KiB/s |
| groups.lwrites | a dimension per user group | KiB/s |
| groups.threads | a dimension per user group | threads |
| groups.processes | a dimension per user group | processes |
| groups.voluntary_ctxt_switches | a dimension per user group | switches/s |
| groups.involuntary_ctxt_switches | a dimension per user group | switches/s |
| groups.uptime | a dimension per user group | seconds |
| groups.uptime_min | a dimension per user group | seconds |
| groups.uptime_avg | a dimension per user group | seconds |
| groups.uptime_max | a dimension per user group | seconds |
| groups.files | a dimension per user group | open files |
| groups.sockets | a dimension per user group | open sockets |
| groups.pipes | a dimension per user group | open pipes |
## Alerts
There are no alerts configured by default for this integration.
## Setup
### Prerequisites
No action required.
### Configuration
#### File
There is no configuration file.
#### Options
There are no configuration options.
#### Examples
There are no configuration examples.


@@ -0,0 +1,114 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/apps.plugin/integrations/users.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/apps.plugin/metadata.yaml"
sidebar_label: "Users"
learn_status: "Published"
learn_rel_path: "Data Collection/Processes and System Services"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# Users
Plugin: apps.plugin
Module: users
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
This integration monitors resource utilization in the context of individual users.
This collector is supported on all platforms.
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
### Default Behavior
#### Auto-Detection
This integration doesn't support auto-detection.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
The default configuration for this integration is not expected to impose a significant performance impact on the system.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per Users instance
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| users.cpu | a dimension per user | percentage |
| users.cpu_user | a dimension per user | percentage |
| users.cpu_system | a dimension per user | percentage |
| users.cpu_guest | a dimension per user | percentage |
| users.mem | a dimension per user | MiB |
| users.rss | a dimension per user | MiB |
| users.vmem | a dimension per user | MiB |
| users.swap | a dimension per user | MiB |
| users.major_faults | a dimension per user | page faults/s |
| users.minor_faults | a dimension per user | page faults/s |
| users.preads | a dimension per user | KiB/s |
| users.pwrites | a dimension per user | KiB/s |
| users.lreads | a dimension per user | KiB/s |
| users.lwrites | a dimension per user | KiB/s |
| users.threads | a dimension per user | threads |
| users.processes | a dimension per user | processes |
| users.voluntary_ctxt_switches | a dimension per user | switches/s |
| users.involuntary_ctxt_switches | a dimension per user | switches/s |
| users.uptime | a dimension per user | seconds |
| users.uptime_min | a dimension per user | seconds |
| users.uptime_avg | a dimension per user | seconds |
| users.uptime_max | a dimension per user | seconds |
| users.files | a dimension per user | open files |
| users.sockets | a dimension per user | open sockets |
| users.pipes | a dimension per user | open pipes |
## Alerts
There are no alerts configured by default for this integration.
## Setup
### Prerequisites
No action required.
### Configuration
#### File
There is no configuration file.
#### Options
There are no configuration options.
#### Examples
There are no configuration examples.


@@ -0,0 +1,162 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/cgroups.plugin/integrations/containers.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/cgroups.plugin/metadata.yaml"
sidebar_label: "Containers"
learn_status: "Published"
learn_rel_path: "Data Collection/Containers and VMs"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# Containers
Plugin: cgroups.plugin
Module: /sys/fs/cgroup
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Monitor Containers for performance, resource usage, and health status.
This collector is supported on all platforms.
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
### Default Behavior
#### Auto-Detection
This integration doesn't support auto-detection.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
The default configuration for this integration is not expected to impose a significant performance impact on the system.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per cgroup
Labels:
| Label | Description |
|:-----------|:----------------|
| container_name | TBD |
| image | TBD |
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| cgroup.cpu_limit | used | percentage |
| cgroup.cpu | user, system | percentage |
| cgroup.cpu_per_core | a dimension per core | percentage |
| cgroup.throttled | throttled | percentage |
| cgroup.throttled_duration | duration | ms |
| cgroup.cpu_shares | shares | shares |
| cgroup.mem | cache, rss, swap, rss_huge, mapped_file | MiB |
| cgroup.writeback | dirty, writeback | MiB |
| cgroup.mem_activity | in, out | MiB/s |
| cgroup.pgfaults | pgfault, swap | MiB/s |
| cgroup.mem_usage | ram, swap | MiB |
| cgroup.mem_usage_limit | available, used | MiB |
| cgroup.mem_utilization | utilization | percentage |
| cgroup.mem_failcnt | failures | count |
| cgroup.io | read, write | KiB/s |
| cgroup.serviced_ops | read, write | operations/s |
| cgroup.throttle_io | read, write | KiB/s |
| cgroup.throttle_serviced_ops | read, write | operations/s |
| cgroup.queued_ops | read, write | operations |
| cgroup.merged_ops | read, write | operations/s |
| cgroup.cpu_some_pressure | some10, some60, some300 | percentage |
| cgroup.cpu_some_pressure_stall_time | time | ms |
| cgroup.cpu_full_pressure | some10, some60, some300 | percentage |
| cgroup.cpu_full_pressure_stall_time | time | ms |
| cgroup.memory_some_pressure | some10, some60, some300 | percentage |
| cgroup.memory_some_pressure_stall_time | time | ms |
| cgroup.memory_full_pressure | some10, some60, some300 | percentage |
| cgroup.memory_full_pressure_stall_time | time | ms |
| cgroup.io_some_pressure | some10, some60, some300 | percentage |
| cgroup.io_some_pressure_stall_time | time | ms |
| cgroup.io_full_pressure | some10, some60, some300 | percentage |
| cgroup.io_full_pressure_stall_time | time | ms |
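The pressure charts expose the kernel's PSI (Pressure Stall Information) accounting for the cgroup: the `some10`, `some60` and `some300` dimensions correspond to the `avg10`, `avg60` and `avg300` fields of the cgroup's pressure files. On a cgroup v2 system you can inspect the raw values yourself (the cgroup path below is only an example):

```bash
# CPU pressure for one container's cgroup (example path; adjust to your system)
cat /sys/fs/cgroup/system.slice/docker-4f1a2b.scope/cpu.pressure
# some avg10=0.00 avg60=0.12 avg300=0.08 total=4156231
# full avg10=0.00 avg60=0.00 avg300=0.00 total=102937
```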
### Per cgroup network device
Labels:
| Label | Description |
|:-----------|:----------------|
| container_name | TBD |
| image | TBD |
| device | TBD |
| interface_type | TBD |
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| cgroup.net_net | received, sent | kilobits/s |
| cgroup.net_packets | received, sent, multicast | pps |
| cgroup.net_errors | inbound, outbound | errors/s |
| cgroup.net_drops | inbound, outbound | errors/s |
| cgroup.net_fifo | receive, transmit | errors/s |
| cgroup.net_compressed | receive, sent | pps |
| cgroup.net_events | frames, collisions, carrier | events/s |
| cgroup.net_operstate | up, down, notpresent, lowerlayerdown, testing, dormant, unknown | state |
| cgroup.net_carrier | up, down | state |
| cgroup.net_mtu | mtu | octets |
## Alerts
The following alerts are available:
| Alert name | On metric | Description |
|:------------|:----------|:------------|
| [ cgroup_10min_cpu_usage ](https://github.com/netdata/netdata/blob/master/health/health.d/cgroups.conf) | cgroup.cpu_limit | average cgroup CPU utilization over the last 10 minutes |
| [ cgroup_ram_in_use ](https://github.com/netdata/netdata/blob/master/health/health.d/cgroups.conf) | cgroup.mem_usage | cgroup memory utilization |
| [ cgroup_1m_received_packets_rate ](https://github.com/netdata/netdata/blob/master/health/health.d/cgroups.conf) | cgroup.net_packets | average number of packets received by the network interface ${label:device} over the last minute |
| [ cgroup_10s_received_packets_storm ](https://github.com/netdata/netdata/blob/master/health/health.d/cgroups.conf) | cgroup.net_packets | ratio of average number of received packets for the network interface ${label:device} over the last 10 seconds, compared to the rate over the last minute |
## Setup
### Prerequisites
No action required.
### Configuration
#### File
There is no configuration file.
#### Options
There are no configuration options.
#### Examples
There are no configuration examples.


@@ -0,0 +1,180 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/cgroups.plugin/integrations/kubernetes_containers.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/cgroups.plugin/metadata.yaml"
sidebar_label: "Kubernetes Containers"
learn_status: "Published"
learn_rel_path: "Data Collection/Containers and VMs"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# Kubernetes Containers
Plugin: cgroups.plugin
Module: /sys/fs/cgroup
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Monitor Containers for performance, resource usage, and health status.
This collector is supported on all platforms.
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
### Default Behavior
#### Auto-Detection
This integration doesn't support auto-detection.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
The default configuration for this integration is not expected to impose a significant performance impact on the system.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per k8s cgroup
Labels:
| Label | Description |
|:-----------|:----------------|
| k8s_namespace | TBD |
| k8s_pod_name | TBD |
| k8s_pod_uid | TBD |
| k8s_controller_kind | TBD |
| k8s_controller_name | TBD |
| k8s_node_name | TBD |
| k8s_container_name | TBD |
| k8s_container_id | TBD |
| k8s_kind | TBD |
| k8s_qos_class | TBD |
| k8s_cluster_id | TBD |
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| k8s.cgroup.cpu_limit | used | percentage |
| k8s.cgroup.cpu | user, system | percentage |
| k8s.cgroup.cpu_per_core | a dimension per core | percentage |
| k8s.cgroup.throttled | throttled | percentage |
| k8s.cgroup.throttled_duration | duration | ms |
| k8s.cgroup.cpu_shares | shares | shares |
| k8s.cgroup.mem | cache, rss, swap, rss_huge, mapped_file | MiB |
| k8s.cgroup.writeback | dirty, writeback | MiB |
| k8s.cgroup.mem_activity | in, out | MiB/s |
| k8s.cgroup.pgfaults | pgfault, swap | MiB/s |
| k8s.cgroup.mem_usage | ram, swap | MiB |
| k8s.cgroup.mem_usage_limit | available, used | MiB |
| k8s.cgroup.mem_utilization | utilization | percentage |
| k8s.cgroup.mem_failcnt | failures | count |
| k8s.cgroup.io | read, write | KiB/s |
| k8s.cgroup.serviced_ops | read, write | operations/s |
| k8s.cgroup.throttle_io | read, write | KiB/s |
| k8s.cgroup.throttle_serviced_ops | read, write | operations/s |
| k8s.cgroup.queued_ops | read, write | operations |
| k8s.cgroup.merged_ops | read, write | operations/s |
| k8s.cgroup.cpu_some_pressure | some10, some60, some300 | percentage |
| k8s.cgroup.cpu_some_pressure_stall_time | time | ms |
| k8s.cgroup.cpu_full_pressure | some10, some60, some300 | percentage |
| k8s.cgroup.cpu_full_pressure_stall_time | time | ms |
| k8s.cgroup.memory_some_pressure | some10, some60, some300 | percentage |
| k8s.cgroup.memory_some_pressure_stall_time | time | ms |
| k8s.cgroup.memory_full_pressure | some10, some60, some300 | percentage |
| k8s.cgroup.memory_full_pressure_stall_time | time | ms |
| k8s.cgroup.io_some_pressure | some10, some60, some300 | percentage |
| k8s.cgroup.io_some_pressure_stall_time | time | ms |
| k8s.cgroup.io_full_pressure | some10, some60, some300 | percentage |
| k8s.cgroup.io_full_pressure_stall_time | time | ms |
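These are the same cgroup charts as for plain containers, prefixed with `k8s.` and enriched with pod metadata labels. If you want to see the node-level cgroups behind them, the layout below is typical for a kubelet using the systemd cgroup driver (layouts vary between setups, so treat the paths as assumptions):

```bash
# Pod cgroups are commonly grouped by QoS class under kubepods.slice
ls /sys/fs/cgroup/kubepods.slice/
ls /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/
```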
### Per k8s cgroup network device
Labels:
| Label | Description |
|:-----------|:----------------|
| device | TBD |
| interface_type | TBD |
| k8s_namespace | TBD |
| k8s_pod_name | TBD |
| k8s_pod_uid | TBD |
| k8s_controller_kind | TBD |
| k8s_controller_name | TBD |
| k8s_node_name | TBD |
| k8s_container_name | TBD |
| k8s_container_id | TBD |
| k8s_kind | TBD |
| k8s_qos_class | TBD |
| k8s_cluster_id | TBD |
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| k8s.cgroup.net_net | received, sent | kilobits/s |
| k8s.cgroup.net_packets | received, sent, multicast | pps |
| k8s.cgroup.net_errors | inbound, outbound | errors/s |
| k8s.cgroup.net_drops | inbound, outbound | errors/s |
| k8s.cgroup.net_fifo | receive, transmit | errors/s |
| k8s.cgroup.net_compressed | receive, sent | pps |
| k8s.cgroup.net_events | frames, collisions, carrier | events/s |
| k8s.cgroup.net_operstate | up, down, notpresent, lowerlayerdown, testing, dormant, unknown | state |
| k8s.cgroup.net_carrier | up, down | state |
| k8s.cgroup.net_mtu | mtu | octets |
## Alerts
The following alerts are available:
| Alert name | On metric | Description |
|:------------|:----------|:------------|
| [ k8s_cgroup_10min_cpu_usage ](https://github.com/netdata/netdata/blob/master/health/health.d/cgroups.conf) | k8s.cgroup.cpu_limit | average cgroup CPU utilization over the last 10 minutes |
| [ k8s_cgroup_ram_in_use ](https://github.com/netdata/netdata/blob/master/health/health.d/cgroups.conf) | k8s.cgroup.mem_usage | cgroup memory utilization |
| [ k8s_cgroup_1m_received_packets_rate ](https://github.com/netdata/netdata/blob/master/health/health.d/cgroups.conf) | k8s.cgroup.net_packets | average number of packets received by the network interface ${label:device} over the last minute |
| [ k8s_cgroup_10s_received_packets_storm ](https://github.com/netdata/netdata/blob/master/health/health.d/cgroups.conf) | k8s.cgroup.net_packets | ratio of average number of received packets for the network interface ${label:device} over the last 10 seconds, compared to the rate over the last minute |
## Setup
### Prerequisites
No action required.
### Configuration
#### File
There is no configuration file.
#### Options
There are no configuration options.
#### Examples
There are no configuration examples.


@@ -0,0 +1,162 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/cgroups.plugin/integrations/libvirt_containers.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/cgroups.plugin/metadata.yaml"
sidebar_label: "Libvirt Containers"
learn_status: "Published"
learn_rel_path: "Data Collection/Containers and VMs"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# Libvirt Containers
Plugin: cgroups.plugin
Module: /sys/fs/cgroup
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Monitor Libvirt for performance, resource usage, and health status.
This collector is supported on all platforms.
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
### Default Behavior
#### Auto-Detection
This integration doesn't support auto-detection.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
The default configuration for this integration is not expected to impose a significant performance impact on the system.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per cgroup
Labels:
| Label | Description |
|:-----------|:----------------|
| container_name | TBD |
| image | TBD |
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| cgroup.cpu_limit | used | percentage |
| cgroup.cpu | user, system | percentage |
| cgroup.cpu_per_core | a dimension per core | percentage |
| cgroup.throttled | throttled | percentage |
| cgroup.throttled_duration | duration | ms |
| cgroup.cpu_shares | shares | shares |
| cgroup.mem | cache, rss, swap, rss_huge, mapped_file | MiB |
| cgroup.writeback | dirty, writeback | MiB |
| cgroup.mem_activity | in, out | MiB/s |
| cgroup.pgfaults | pgfault, swap | MiB/s |
| cgroup.mem_usage | ram, swap | MiB |
| cgroup.mem_usage_limit | available, used | MiB |
| cgroup.mem_utilization | utilization | percentage |
| cgroup.mem_failcnt | failures | count |
| cgroup.io | read, write | KiB/s |
| cgroup.serviced_ops | read, write | operations/s |
| cgroup.throttle_io | read, write | KiB/s |
| cgroup.throttle_serviced_ops | read, write | operations/s |
| cgroup.queued_ops | read, write | operations |
| cgroup.merged_ops | read, write | operations/s |
| cgroup.cpu_some_pressure | some10, some60, some300 | percentage |
| cgroup.cpu_some_pressure_stall_time | time | ms |
| cgroup.cpu_full_pressure | some10, some60, some300 | percentage |
| cgroup.cpu_full_pressure_stall_time | time | ms |
| cgroup.memory_some_pressure | some10, some60, some300 | percentage |
| cgroup.memory_some_pressure_stall_time | time | ms |
| cgroup.memory_full_pressure | some10, some60, some300 | percentage |
| cgroup.memory_full_pressure_stall_time | time | ms |
| cgroup.io_some_pressure | some10, some60, some300 | percentage |
| cgroup.io_some_pressure_stall_time | time | ms |
| cgroup.io_full_pressure | some10, some60, some300 | percentage |
| cgroup.io_full_pressure_stall_time | time | ms |
### Per cgroup network device
Labels:
| Label | Description |
|:-----------|:----------------|
| container_name | TBD |
| image | TBD |
| device | TBD |
| interface_type | TBD |
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| cgroup.net_net | received, sent | kilobits/s |
| cgroup.net_packets | received, sent, multicast | pps |
| cgroup.net_errors | inbound, outbound | errors/s |
| cgroup.net_drops | inbound, outbound | errors/s |
| cgroup.net_fifo | receive, transmit | errors/s |
| cgroup.net_compressed | receive, sent | pps |
| cgroup.net_events | frames, collisions, carrier | events/s |
| cgroup.net_operstate | up, down, notpresent, lowerlayerdown, testing, dormant, unknown | state |
| cgroup.net_carrier | up, down | state |
| cgroup.net_mtu | mtu | octets |
## Alerts
The following alerts are available:
| Alert name | On metric | Description |
|:------------|:----------|:------------|
| [ cgroup_10min_cpu_usage ](https://github.com/netdata/netdata/blob/master/health/health.d/cgroups.conf) | cgroup.cpu_limit | average cgroup CPU utilization over the last 10 minutes |
| [ cgroup_ram_in_use ](https://github.com/netdata/netdata/blob/master/health/health.d/cgroups.conf) | cgroup.mem_usage | cgroup memory utilization |
| [ cgroup_1m_received_packets_rate ](https://github.com/netdata/netdata/blob/master/health/health.d/cgroups.conf) | cgroup.net_packets | average number of packets received by the network interface ${label:device} over the last minute |
| [ cgroup_10s_received_packets_storm ](https://github.com/netdata/netdata/blob/master/health/health.d/cgroups.conf) | cgroup.net_packets | ratio of average number of received packets for the network interface ${label:device} over the last 10 seconds, compared to the rate over the last minute |
## Setup
### Prerequisites
No action required.
### Configuration
#### File
There is no configuration file.
#### Options
There are no configuration options.
#### Examples
There are no configuration examples.


@@ -0,0 +1,162 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/cgroups.plugin/integrations/lxc_containers.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/cgroups.plugin/metadata.yaml"
sidebar_label: "LXC Containers"
learn_status: "Published"
learn_rel_path: "Data Collection/Containers and VMs"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# LXC Containers
Plugin: cgroups.plugin
Module: /sys/fs/cgroup
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Monitor LXC Containers for performance, resource usage, and health status.
This collector is supported on all platforms.
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
### Default Behavior
#### Auto-Detection
This integration doesn't support auto-detection.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
The default configuration for this integration is not expected to impose a significant performance impact on the system.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per cgroup
Labels:
| Label | Description |
|:-----------|:----------------|
| container_name | TBD |
| image | TBD |
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| cgroup.cpu_limit | used | percentage |
| cgroup.cpu | user, system | percentage |
| cgroup.cpu_per_core | a dimension per core | percentage |
| cgroup.throttled | throttled | percentage |
| cgroup.throttled_duration | duration | ms |
| cgroup.cpu_shares | shares | shares |
| cgroup.mem | cache, rss, swap, rss_huge, mapped_file | MiB |
| cgroup.writeback | dirty, writeback | MiB |
| cgroup.mem_activity | in, out | MiB/s |
| cgroup.pgfaults | pgfault, swap | MiB/s |
| cgroup.mem_usage | ram, swap | MiB |
| cgroup.mem_usage_limit | available, used | MiB |
| cgroup.mem_utilization | utilization | percentage |
| cgroup.mem_failcnt | failures | count |
| cgroup.io | read, write | KiB/s |
| cgroup.serviced_ops | read, write | operations/s |
| cgroup.throttle_io | read, write | KiB/s |
| cgroup.throttle_serviced_ops | read, write | operations/s |
| cgroup.queued_ops | read, write | operations |
| cgroup.merged_ops | read, write | operations/s |
| cgroup.cpu_some_pressure | some10, some60, some300 | percentage |
| cgroup.cpu_some_pressure_stall_time | time | ms |
| cgroup.cpu_full_pressure | some10, some60, some300 | percentage |
| cgroup.cpu_full_pressure_stall_time | time | ms |
| cgroup.memory_some_pressure | some10, some60, some300 | percentage |
| cgroup.memory_some_pressure_stall_time | time | ms |
| cgroup.memory_full_pressure | some10, some60, some300 | percentage |
| cgroup.memory_full_pressure_stall_time | time | ms |
| cgroup.io_some_pressure | some10, some60, some300 | percentage |
| cgroup.io_some_pressure_stall_time | time | ms |
| cgroup.io_full_pressure | some10, some60, some300 | percentage |
| cgroup.io_full_pressure_stall_time | time | ms |
### Per cgroup network device
Labels:
| Label | Description |
|:-----------|:----------------|
| container_name | TBD |
| image | TBD |
| device | TBD |
| interface_type | TBD |
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| cgroup.net_net | received, sent | kilobits/s |
| cgroup.net_packets | received, sent, multicast | pps |
| cgroup.net_errors | inbound, outbound | errors/s |
| cgroup.net_drops | inbound, outbound | errors/s |
| cgroup.net_fifo | receive, transmit | errors/s |
| cgroup.net_compressed | receive, sent | pps |
| cgroup.net_events | frames, collisions, carrier | events/s |
| cgroup.net_operstate | up, down, notpresent, lowerlayerdown, testing, dormant, unknown | state |
| cgroup.net_carrier | up, down | state |
| cgroup.net_mtu | mtu | octets |
## Alerts
The following alerts are available:
| Alert name | On metric | Description |
|:------------|:----------|:------------|
| [ cgroup_10min_cpu_usage ](https://github.com/netdata/netdata/blob/master/health/health.d/cgroups.conf) | cgroup.cpu_limit | average cgroup CPU utilization over the last 10 minutes |
| [ cgroup_ram_in_use ](https://github.com/netdata/netdata/blob/master/health/health.d/cgroups.conf) | cgroup.mem_usage | cgroup memory utilization |
| [ cgroup_1m_received_packets_rate ](https://github.com/netdata/netdata/blob/master/health/health.d/cgroups.conf) | cgroup.net_packets | average number of packets received by the network interface ${label:device} over the last minute |
| [ cgroup_10s_received_packets_storm ](https://github.com/netdata/netdata/blob/master/health/health.d/cgroups.conf) | cgroup.net_packets | ratio of average number of received packets for the network interface ${label:device} over the last 10 seconds, compared to the rate over the last minute |
## Setup
### Prerequisites
No action required.
### Configuration
#### File
There is no configuration file.
#### Options
There are no configuration options.
#### Examples
There are no configuration examples.


@@ -0,0 +1,162 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/cgroups.plugin/integrations/ovirt_containers.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/cgroups.plugin/metadata.yaml"
sidebar_label: "oVirt Containers"
learn_status: "Published"
learn_rel_path: "Data Collection/Containers and VMs"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# oVirt Containers
Plugin: cgroups.plugin
Module: /sys/fs/cgroup
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Monitor oVirt for performance, resource usage, and health status.
This collector is supported on all platforms.
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
### Default Behavior
#### Auto-Detection
This integration doesn't support auto-detection.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
The default configuration for this integration is not expected to impose a significant performance impact on the system.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per cgroup
Labels:
| Label | Description |
|:-----------|:----------------|
| container_name | TBD |
| image | TBD |
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| cgroup.cpu_limit | used | percentage |
| cgroup.cpu | user, system | percentage |
| cgroup.cpu_per_core | a dimension per core | percentage |
| cgroup.throttled | throttled | percentage |
| cgroup.throttled_duration | duration | ms |
| cgroup.cpu_shares | shares | shares |
| cgroup.mem | cache, rss, swap, rss_huge, mapped_file | MiB |
| cgroup.writeback | dirty, writeback | MiB |
| cgroup.mem_activity | in, out | MiB/s |
| cgroup.pgfaults | pgfault, swap | MiB/s |
| cgroup.mem_usage | ram, swap | MiB |
| cgroup.mem_usage_limit | available, used | MiB |
| cgroup.mem_utilization | utilization | percentage |
| cgroup.mem_failcnt | failures | count |
| cgroup.io | read, write | KiB/s |
| cgroup.serviced_ops | read, write | operations/s |
| cgroup.throttle_io | read, write | KiB/s |
| cgroup.throttle_serviced_ops | read, write | operations/s |
| cgroup.queued_ops | read, write | operations |
| cgroup.merged_ops | read, write | operations/s |
| cgroup.cpu_some_pressure | some10, some60, some300 | percentage |
| cgroup.cpu_some_pressure_stall_time | time | ms |
| cgroup.cpu_full_pressure | some10, some60, some300 | percentage |
| cgroup.cpu_full_pressure_stall_time | time | ms |
| cgroup.memory_some_pressure | some10, some60, some300 | percentage |
| cgroup.memory_some_pressure_stall_time | time | ms |
| cgroup.memory_full_pressure | some10, some60, some300 | percentage |
| cgroup.memory_full_pressure_stall_time | time | ms |
| cgroup.io_some_pressure | some10, some60, some300 | percentage |
| cgroup.io_some_pressure_stall_time | time | ms |
| cgroup.io_full_pressure | some10, some60, some300 | percentage |
| cgroup.io_full_pressure_stall_time | time | ms |
### Per cgroup network device
Labels:
| Label | Description |
|:-----------|:----------------|
| container_name | TBD |
| image | TBD |
| device | TBD |
| interface_type | TBD |
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| cgroup.net_net | received, sent | kilobits/s |
| cgroup.net_packets | received, sent, multicast | pps |
| cgroup.net_errors | inbound, outbound | errors/s |
| cgroup.net_drops | inbound, outbound | errors/s |
| cgroup.net_fifo | receive, transmit | errors/s |
| cgroup.net_compressed | receive, sent | pps |
| cgroup.net_events | frames, collisions, carrier | events/s |
| cgroup.net_operstate | up, down, notpresent, lowerlayerdown, testing, dormant, unknown | state |
| cgroup.net_carrier | up, down | state |
| cgroup.net_mtu | mtu | octets |
## Alerts
The following alerts are available:
| Alert name | On metric | Description |
|:------------|:----------|:------------|
| [ cgroup_10min_cpu_usage ](https://github.com/netdata/netdata/blob/master/health/health.d/cgroups.conf) | cgroup.cpu_limit | average cgroup CPU utilization over the last 10 minutes |
| [ cgroup_ram_in_use ](https://github.com/netdata/netdata/blob/master/health/health.d/cgroups.conf) | cgroup.mem_usage | cgroup memory utilization |
| [ cgroup_1m_received_packets_rate ](https://github.com/netdata/netdata/blob/master/health/health.d/cgroups.conf) | cgroup.net_packets | average number of packets received by the network interface ${label:device} over the last minute |
| [ cgroup_10s_received_packets_storm ](https://github.com/netdata/netdata/blob/master/health/health.d/cgroups.conf) | cgroup.net_packets | ratio of average number of received packets for the network interface ${label:device} over the last 10 seconds, compared to the rate over the last minute |
## Setup
### Prerequisites
No action required.
### Configuration
#### File
There is no configuration file.
#### Options
There are no configuration options.
#### Examples
There are no configuration examples.


@@ -0,0 +1,162 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/cgroups.plugin/integrations/proxmox_containers.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/cgroups.plugin/metadata.yaml"
sidebar_label: "Proxmox Containers"
learn_status: "Published"
learn_rel_path: "Data Collection/Containers and VMs"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# Proxmox Containers
Plugin: cgroups.plugin
Module: /sys/fs/cgroup
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Monitor Proxmox for performance, resource usage, and health status.
This collector is supported on all platforms.
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
### Default Behavior
#### Auto-Detection
This integration doesn't support auto-detection.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
The default configuration for this integration is not expected to impose a significant performance impact on the system.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per cgroup
Labels:
| Label | Description |
|:-----------|:----------------|
| container_name | TBD |
| image | TBD |
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| cgroup.cpu_limit | used | percentage |
| cgroup.cpu | user, system | percentage |
| cgroup.cpu_per_core | a dimension per core | percentage |
| cgroup.throttled | throttled | percentage |
| cgroup.throttled_duration | duration | ms |
| cgroup.cpu_shares | shares | shares |
| cgroup.mem | cache, rss, swap, rss_huge, mapped_file | MiB |
| cgroup.writeback | dirty, writeback | MiB |
| cgroup.mem_activity | in, out | MiB/s |
| cgroup.pgfaults | pgfault, swap | MiB/s |
| cgroup.mem_usage | ram, swap | MiB |
| cgroup.mem_usage_limit | available, used | MiB |
| cgroup.mem_utilization | utilization | percentage |
| cgroup.mem_failcnt | failures | count |
| cgroup.io | read, write | KiB/s |
| cgroup.serviced_ops | read, write | operations/s |
| cgroup.throttle_io | read, write | KiB/s |
| cgroup.throttle_serviced_ops | read, write | operations/s |
| cgroup.queued_ops | read, write | operations |
| cgroup.merged_ops | read, write | operations/s |
| cgroup.cpu_some_pressure | some10, some60, some300 | percentage |
| cgroup.cpu_some_pressure_stall_time | time | ms |
| cgroup.cpu_full_pressure | some10, some60, some300 | percentage |
| cgroup.cpu_full_pressure_stall_time | time | ms |
| cgroup.memory_some_pressure | some10, some60, some300 | percentage |
| cgroup.memory_some_pressure_stall_time | time | ms |
| cgroup.memory_full_pressure | some10, some60, some300 | percentage |
| cgroup.memory_full_pressure_stall_time | time | ms |
| cgroup.io_some_pressure | some10, some60, some300 | percentage |
| cgroup.io_some_pressure_stall_time | time | ms |
| cgroup.io_full_pressure | some10, some60, some300 | percentage |
| cgroup.io_full_pressure_stall_time | time | ms |
### Per cgroup network device
Labels:
| Label | Description |
|:-----------|:----------------|
| container_name | TBD |
| image | TBD |
| device | TBD |
| interface_type | TBD |
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| cgroup.net_net | received, sent | kilobits/s |
| cgroup.net_packets | received, sent, multicast | pps |
| cgroup.net_errors | inbound, outbound | errors/s |
| cgroup.net_drops | inbound, outbound | errors/s |
| cgroup.net_fifo | receive, transmit | errors/s |
| cgroup.net_compressed | receive, sent | pps |
| cgroup.net_events | frames, collisions, carrier | events/s |
| cgroup.net_operstate | up, down, notpresent, lowerlayerdown, testing, dormant, unknown | state |
| cgroup.net_carrier | up, down | state |
| cgroup.net_mtu | mtu | octets |
## Alerts
The following alerts are available:
| Alert name | On metric | Description |
|:------------|:----------|:------------|
| [ cgroup_10min_cpu_usage ](https://github.com/netdata/netdata/blob/master/health/health.d/cgroups.conf) | cgroup.cpu_limit | average cgroup CPU utilization over the last 10 minutes |
| [ cgroup_ram_in_use ](https://github.com/netdata/netdata/blob/master/health/health.d/cgroups.conf) | cgroup.mem_usage | cgroup memory utilization |
| [ cgroup_1m_received_packets_rate ](https://github.com/netdata/netdata/blob/master/health/health.d/cgroups.conf) | cgroup.net_packets | average number of packets received by the network interface ${label:device} over the last minute |
| [ cgroup_10s_received_packets_storm ](https://github.com/netdata/netdata/blob/master/health/health.d/cgroups.conf) | cgroup.net_packets | ratio of average number of received packets for the network interface ${label:device} over the last 10 seconds, compared to the rate over the last minute |
## Setup
### Prerequisites
No action required.
### Configuration
#### File
There is no configuration file.
#### Options
There are no configuration options.
#### Examples
There are no configuration examples.


@@ -0,0 +1,106 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/cgroups.plugin/integrations/systemd_services.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/cgroups.plugin/metadata.yaml"
sidebar_label: "Systemd Services"
learn_status: "Published"
learn_rel_path: "Data Collection/Systemd"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# Systemd Services
Plugin: cgroups.plugin
Module: /sys/fs/cgroup
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Monitor Systemd services for performance, resource usage, and health status.
This collector is supported on all platforms.
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
### Default Behavior
#### Auto-Detection
This integration doesn't support auto-detection.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
The default configuration for this integration is not expected to impose a significant performance impact on the system.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per systemd service
Labels:
| Label | Description |
|:-----------|:----------------|
| service_name | Service name |
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| systemd.service.cpu.utilization | user, system | percentage |
| systemd.service.memory.usage | ram, swap | MiB |
| systemd.service.memory.failcnt | fail | failures/s |
| systemd.service.memory.ram.usage | rss, cache, mapped_file, rss_huge | MiB |
| systemd.service.memory.writeback | writeback, dirty | MiB |
| systemd.service.memory.paging.faults | minor, major | MiB/s |
| systemd.service.memory.paging.io | in, out | MiB/s |
| systemd.service.disk.io | read, write | KiB/s |
| systemd.service.disk.iops | read, write | operations/s |
| systemd.service.disk.throttle.io | read, write | KiB/s |
| systemd.service.disk.throttle.iops | read, write | operations/s |
| systemd.service.disk.queued_iops | read, write | operations/s |
| systemd.service.disk.merged_iops | read, write | operations/s |
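Each systemd service runs in its own cgroup, which is where these charts come from. For a quick manual cross-check of the same accounting (a sketch assuming cgroup v2 and `sshd.service` as an example unit):

```bash
# Live per-unit resource view, ordered by memory usage
systemd-cgtop --order=memory system.slice
# Raw cgroup v2 accounting file for one service (value in bytes)
cat /sys/fs/cgroup/system.slice/sshd.service/memory.current
```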
## Alerts
There are no alerts configured by default for this integration.
## Setup
### Prerequisites
No action required.
### Configuration
#### File
There is no configuration file.
#### Options
There are no configuration options.
#### Examples
There are no configuration examples.


@@ -0,0 +1,162 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/cgroups.plugin/integrations/virtual_machines.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/cgroups.plugin/metadata.yaml"
sidebar_label: "Virtual Machines"
learn_status: "Published"
learn_rel_path: "Data Collection/Containers and VMs"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# Virtual Machines
Plugin: cgroups.plugin
Module: /sys/fs/cgroup
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Monitor Virtual Machines for performance, resource usage, and health status.
This collector is supported on all platforms.
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
### Default Behavior
#### Auto-Detection
This integration doesn't support auto-detection.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
The default configuration for this integration is not expected to impose a significant performance impact on the system.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per cgroup
Labels:
| Label | Description |
|:-----------|:----------------|
| container_name | TBD |
| image | TBD |
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| cgroup.cpu_limit | used | percentage |
| cgroup.cpu | user, system | percentage |
| cgroup.cpu_per_core | a dimension per core | percentage |
| cgroup.throttled | throttled | percentage |
| cgroup.throttled_duration | duration | ms |
| cgroup.cpu_shares | shares | shares |
| cgroup.mem | cache, rss, swap, rss_huge, mapped_file | MiB |
| cgroup.writeback | dirty, writeback | MiB |
| cgroup.mem_activity | in, out | MiB/s |
| cgroup.pgfaults | pgfault, swap | MiB/s |
| cgroup.mem_usage | ram, swap | MiB |
| cgroup.mem_usage_limit | available, used | MiB |
| cgroup.mem_utilization | utilization | percentage |
| cgroup.mem_failcnt | failures | count |
| cgroup.io | read, write | KiB/s |
| cgroup.serviced_ops | read, write | operations/s |
| cgroup.throttle_io | read, write | KiB/s |
| cgroup.throttle_serviced_ops | read, write | operations/s |
| cgroup.queued_ops | read, write | operations |
| cgroup.merged_ops | read, write | operations/s |
| cgroup.cpu_some_pressure | some10, some60, some300 | percentage |
| cgroup.cpu_some_pressure_stall_time | time | ms |
| cgroup.cpu_full_pressure | some10, some60, some300 | percentage |
| cgroup.cpu_full_pressure_stall_time | time | ms |
| cgroup.memory_some_pressure | some10, some60, some300 | percentage |
| cgroup.memory_some_pressure_stall_time | time | ms |
| cgroup.memory_full_pressure | some10, some60, some300 | percentage |
| cgroup.memory_full_pressure_stall_time | time | ms |
| cgroup.io_some_pressure | some10, some60, some300 | percentage |
| cgroup.io_some_pressure_stall_time | time | ms |
| cgroup.io_full_pressure | some10, some60, some300 | percentage |
| cgroup.io_full_pressure_stall_time | time | ms |
### Per cgroup network device
Labels:
| Label | Description |
|:-----------|:----------------|
| container_name | TBD |
| image | TBD |
| device | TBD |
| interface_type | TBD |
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| cgroup.net_net | received, sent | kilobits/s |
| cgroup.net_packets | received, sent, multicast | pps |
| cgroup.net_errors | inbound, outbound | errors/s |
| cgroup.net_drops | inbound, outbound | errors/s |
| cgroup.net_fifo | receive, transmit | errors/s |
| cgroup.net_compressed | receive, sent | pps |
| cgroup.net_events | frames, collisions, carrier | events/s |
| cgroup.net_operstate | up, down, notpresent, lowerlayerdown, testing, dormant, unknown | state |
| cgroup.net_carrier | up, down | state |
| cgroup.net_mtu | mtu | octets |
## Alerts
The following alerts are available:
| Alert name | On metric | Description |
|:------------|:----------|:------------|
| [ cgroup_10min_cpu_usage ](https://github.com/netdata/netdata/blob/master/health/health.d/cgroups.conf) | cgroup.cpu_limit | average cgroup CPU utilization over the last 10 minutes |
| [ cgroup_ram_in_use ](https://github.com/netdata/netdata/blob/master/health/health.d/cgroups.conf) | cgroup.mem_usage | cgroup memory utilization |
| [ cgroup_1m_received_packets_rate ](https://github.com/netdata/netdata/blob/master/health/health.d/cgroups.conf) | cgroup.net_packets | average number of packets received by the network interface ${label:device} over the last minute |
| [ cgroup_10s_received_packets_storm ](https://github.com/netdata/netdata/blob/master/health/health.d/cgroups.conf) | cgroup.net_packets | ratio of average number of received packets for the network interface ${label:device} over the last 10 seconds, compared to the rate over the last minute |
## Setup
### Prerequisites
No action required.
### Configuration
#### File
There is no configuration file.
#### Options
There are no configuration options.
#### Examples
There are no configuration examples.


@@ -1,104 +0,0 @@
<!--
title: "Access point monitoring with Netdata"
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/charts.d.plugin/ap/README.md"
sidebar_label: "Access points"
learn_status: "Published"
learn_topic_type: "References"
learn_rel_path: "Integrations/Monitor/Remotes/Devices"
-->
# Access point collector
The `ap` collector visualizes data related to access points.
## Example Netdata charts
![image](https://cloud.githubusercontent.com/assets/2662304/12377654/9f566e88-bd2d-11e5-855a-e0ba96b8fd98.png)
## How it works
It does the following:
1. Runs `iw dev` searching for interfaces that have `type AP`.
From the same output it collects the SSIDs each AP supports by looking for lines `ssid NAME`.
Example:
```sh
# iw dev
phy#0
Interface wlan0
ifindex 3
wdev 0x1
addr 7c:dd:90:77:34:2a
ssid TSAOUSIS
type AP
channel 7 (2442 MHz), width: 20 MHz, center1: 2442 MHz
```
2. For each interface found, it runs `iw INTERFACE station dump`.
From the output it collects:
- rx/tx bytes
- rx/tx packets
- tx retries
- tx failed
- signal strength
- rx/tx bitrate
- expected throughput
Example:
```sh
# iw wlan0 station dump
Station 40:b8:37:5a:ed:5e (on wlan0)
inactive time: 910 ms
rx bytes: 15588897
rx packets: 127772
tx bytes: 52257763
tx packets: 95802
tx retries: 2162
tx failed: 28
signal: -43 dBm
signal avg: -43 dBm
tx bitrate: 65.0 MBit/s MCS 7
rx bitrate: 1.0 MBit/s
expected throughput: 32.125Mbps
authorized: yes
authenticated: yes
preamble: long
WMM/WME: yes
MFP: no
TDLS peer: no
```
3. For each interface found, it creates 6 charts:
- Number of Connected clients
- Bandwidth for all clients
- Packets for all clients
- Transmit Issues for all clients
- Average Signal among all clients
- Average Bitrate (including average expected throughput) among all clients
## Configuration
If using [our official native DEB/RPM packages](https://github.com/netdata/netdata/blob/master/packaging/installer/methods/packages.md), make sure `netdata-plugin-chartsd` is installed.
Edit the `charts.d/ap.conf` configuration file using `edit-config` from the Netdata [config
directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is typically at `/etc/netdata`.
```bash
cd /etc/netdata # Replace this path with your Netdata config directory, if different
sudo ./edit-config charts.d/ap.conf
```
You can only set `ap_update_every=NUMBER` to change the data collection frequency.
## Auto-detection
The plugin auto-detects whether you are running access points on your Linux box.


@@ -0,0 +1 @@
integrations/access_points.md


@@ -0,0 +1,169 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/charts.d.plugin/ap/README.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/charts.d.plugin/ap/metadata.yaml"
sidebar_label: "Access Points"
learn_status: "Published"
learn_rel_path: "Data Collection/Linux Systems/Network"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# Access Points
Plugin: charts.d.plugin
Module: ap
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
The ap collector visualizes data related to wireless access points.
It uses the `iw` command line utility to detect access points. For each interface that is of `type AP`, it then runs `iw INTERFACE station dump` and collects statistics.
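You can run the same commands by hand to see the raw data the collector parses (`wlan0` below is an example interface name):

```bash
# List wireless interfaces and find those operating as access points
iw dev | grep -B 5 'type AP'
# Dump per-station counters (bytes, packets, retries, signal, bitrate)
iw wlan0 station dump
```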
This collector is only supported on the following platforms:
- Linux
This collector only supports collecting metrics from a single instance of this integration.
### Default Behavior
#### Auto-Detection
The plugin auto-detects whether you are running access points on your Linux box.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
The default configuration for this integration is not expected to impose a significant performance impact on the system.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per wireless device
These metrics refer to the monitored wireless device.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| ap.clients | clients | clients |
| ap.net | received, sent | kilobits/s |
| ap.packets | received, sent | packets/s |
| ap.issues | retries, failures | issues/s |
| ap.signal | average signal | dBm |
| ap.bitrate | receive, transmit, expected | Mbps |
## Alerts
There are no alerts configured by default for this integration.
## Setup
### Prerequisites
#### Install charts.d plugin
If [using our official native DEB/RPM packages](https://github.com/netdata/netdata/blob/master/packaging/installer/UPDATE.md#determine-which-installation-method-you-used), make sure `netdata-plugin-chartsd` is installed.
#### `iw` utility
Make sure the `iw` utility is installed.
### Configuration
#### File
The configuration file name for this integration is `charts.d/ap.conf`.
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config charts.d/ap.conf
```
#### Options
The config file is sourced by the charts.d plugin. It's a standard bash file.
The following collapsed table contains all the options that can be configured for the ap collector.
<details><summary>Config options</summary>
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| ap_update_every | The data collection frequency. If unset, will inherit the netdata update frequency. | | False |
| ap_priority | Controls the order of charts at the netdata dashboard. | | False |
| ap_retries | The number of retries to do in case of failure before disabling the collector. | | False |
</details>
#### Examples
##### Change the collection frequency
Specify a custom collection frequency (`update_every`) for this collector.
```yaml
# the data collection frequency
# if unset, will inherit the netdata update frequency
ap_update_every=10
# the charts priority on the dashboard
#ap_priority=6900
# the number of retries to do in case of failure
# before disabling the module
#ap_retries=10
```
## Troubleshooting
### Debug Mode
To troubleshoot issues with the `ap` collector, run the `charts.d.plugin` with the debug option enabled. The output
should give you clues as to why the collector isn't working.
- Navigate to the `plugins.d` directory, usually at `/usr/libexec/netdata/plugins.d/`. If that's not the case on
your system, open `netdata.conf` and look for the `plugins` setting under `[directories]`.
```bash
cd /usr/libexec/netdata/plugins.d/
```
- Switch to the `netdata` user.
```bash
sudo -u netdata -s
```
- Run the `charts.d.plugin` to debug the collector:
```bash
./charts.d.plugin debug 1 ap
```

View File

@ -1,26 +0,0 @@
<!--
title: "APC UPS monitoring with Netdata"
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/charts.d.plugin/apcupsd/README.md"
sidebar_label: "APC UPS"
learn_status: "Published"
learn_topic_type: "References"
learn_rel_path: "Integrations/Monitor/Remotes/Devices"
-->
# APC UPS collector
Monitors different APC UPS models and retrieves status information using `apcaccess` tool.
## Configuration
If using [our official native DEB/RPM packages](https://github.com/netdata/netdata/blob/master/packaging/installer/methods/packages.md), make sure `netdata-plugin-chartsd` is installed.
Edit the `charts.d/apcupsd.conf` configuration file using `edit-config` from the Netdata [config
directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is typically at `/etc/netdata`.
```bash
cd /etc/netdata # Replace this path with your Netdata config directory, if different
sudo ./edit-config charts.d/apcupsd.conf
```

View File

@ -0,0 +1 @@
integrations/apc_ups.md

View File

@ -0,0 +1,189 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/charts.d.plugin/apcupsd/README.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/charts.d.plugin/apcupsd/metadata.yaml"
sidebar_label: "APC UPS"
learn_status: "Published"
learn_rel_path: "Data Collection/UPS"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# APC UPS
Plugin: charts.d.plugin
Module: apcupsd
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Monitor APC UPS performance with Netdata for optimal uninterruptible power supply operations. Enhance your power supply reliability with real-time APC UPS metrics.
The collector uses the `apcaccess` tool to contact the `apcupsd` daemon and get the APC UPS statistics.
This collector is supported on all platforms.
This collector only supports collecting metrics from a single instance of this integration.
### Default Behavior
#### Auto-Detection
By default, with no configuration provided, the collector will try to contact 127.0.0.1:3551 using the `apcaccess` utility.
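You can verify that the daemon answers on that address with the same utility, assuming a reasonably recent apcupsd:
```bash
# Query apcupsd the way the collector does; prints STATUS, BCHARGE,
# LOADPCT and the other variables the charts are built from.
apcaccess -h 127.0.0.1:3551 status
```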
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
The default configuration for this integration is not expected to impose a significant performance impact on the system.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per ups
Metrics related to UPS. Each UPS provides its own set of the following metrics.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| apcupsd.charge | charge | percentage |
| apcupsd.battery.voltage | voltage, nominal | Volts |
| apcupsd.input.voltage | voltage, min, max | Volts |
| apcupsd.output.voltage | absolute, nominal | Volts |
| apcupsd.input.frequency | frequency | Hz |
| apcupsd.load | load | percentage |
| apcupsd.load_usage | load | Watts |
| apcupsd.temperature | temp | Celsius |
| apcupsd.time | time | Minutes |
| apcupsd.online | online | boolean |
## Alerts
The following alerts are available:
| Alert name | On metric | Description |
|:------------|:----------|:------------|
| [ apcupsd_ups_charge ](https://github.com/netdata/netdata/blob/master/health/health.d/apcupsd.conf) | apcupsd.charge | average UPS charge over the last minute |
| [ apcupsd_10min_ups_load ](https://github.com/netdata/netdata/blob/master/health/health.d/apcupsd.conf) | apcupsd.load | average UPS load over the last 10 minutes |
| [ apcupsd_last_collected_secs ](https://github.com/netdata/netdata/blob/master/health/health.d/apcupsd.conf) | apcupsd.load | number of seconds since the last successful data collection |
## Setup
### Prerequisites
#### Install charts.d plugin
If [using our official native DEB/RPM packages](https://github.com/netdata/netdata/blob/master/packaging/installer/UPDATE.md#determine-which-installation-method-you-used), make sure `netdata-plugin-chartsd` is installed.
#### Required software
Make sure `apcaccess` and `apcupsd` are installed and that the `apcupsd` daemon is running.
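On most distributions both are provided by the `apcupsd` package; for example, on Debian or Ubuntu (package and service names may vary):
```bash
sudo apt-get install apcupsd
sudo systemctl enable --now apcupsd   # make sure the daemon is running
```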
### Configuration
#### File
The configuration file name for this integration is `charts.d/apcupsd.conf`.
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config charts.d/apcupsd.conf
```
#### Options
The config file is sourced by the charts.d plugin. It's a standard bash file.
The following collapsed table contains all the options that can be configured for the apcupsd collector.
<details><summary>Config options</summary>
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| apcupsd_sources | This is an array of apcupsd sources. You can have multiple entries there. Please refer to the example below on how to set it. | | False |
| apcupsd_timeout | How long to wait for apcupsd to respond. | | False |
| apcupsd_update_every | The data collection frequency. If unset, will inherit the netdata update frequency. | | False |
| apcupsd_priority | The charts priority on the dashboard. | | False |
| apcupsd_retries | The number of retries to do in case of failure before disabling the collector. | | False |
</details>
#### Examples
##### Multiple apcupsd sources
Specify multiple apcupsd sources along with a custom update interval.
```bash
# add all your APC UPSes in this array - uncomment it too
declare -A apcupsd_sources=(
    ["local"]="127.0.0.1:3551"
    ["remote"]="1.2.3.4:3551"
)
# how long to wait for apcupsd to respond
#apcupsd_timeout=3
# the data collection frequency
# if unset, will inherit the netdata update frequency
apcupsd_update_every=5
# the charts priority on the dashboard
#apcupsd_priority=90000
# the number of retries to do in case of failure
# before disabling the module
#apcupsd_retries=10
```
## Troubleshooting
### Debug Mode
To troubleshoot issues with the `apcupsd` collector, run the `charts.d.plugin` with the debug option enabled. The output
should give you clues as to why the collector isn't working.
- Navigate to the `plugins.d` directory, usually at `/usr/libexec/netdata/plugins.d/`. If that's not the case on
your system, open `netdata.conf` and look for the `plugins` setting under `[directories]`.
```bash
cd /usr/libexec/netdata/plugins.d/
```
- Switch to the `netdata` user.
```bash
sudo -u netdata -s
```
- Run the `charts.d.plugin` to debug the collector:
```bash
./charts.d.plugin debug 1 apcupsd
```

View File

@ -1,61 +0,0 @@
<!--
title: "Libreswan IPSec tunnel monitoring with Netdata"
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/charts.d.plugin/libreswan/README.md"
sidebar_label: "Libreswan IPSec tunnels"
learn_status: "Published"
learn_topic_type: "References"
learn_rel_path: "Integrations/Monitor/Networking"
-->
# Libreswan IPSec tunnel collector
Collects bytes-in, bytes-out and uptime for all established libreswan IPSEC tunnels.
The following charts are created, **per tunnel**:
1. **Uptime**
- the uptime of the tunnel
2. **Traffic**
- bytes in
- bytes out
## Configuration
If using [our official native DEB/RPM packages](https://github.com/netdata/netdata/blob/master/packaging/installer/methods/packages.md), make sure `netdata-plugin-chartsd` is installed.
Edit the `charts.d/libreswan.conf` configuration file using `edit-config` from the Netdata [config
directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is typically at `/etc/netdata`.
```bash
cd /etc/netdata # Replace this path with your Netdata config directory, if different
sudo ./edit-config charts.d/libreswan.conf
```
The plugin executes 2 commands to collect all the information it needs:
```sh
ipsec whack --status
ipsec whack --trafficstatus
```
The first command is used to extract the currently established tunnels, their IDs and their names.
The second command is used to extract the current uptime and traffic.
Most probably user `netdata` will not be able to query libreswan, so the `ipsec` commands will be denied.
The plugin attempts to run `ipsec` as `sudo ipsec ...`, to get access to libreswan statistics.
To allow user `netdata` execute `sudo ipsec ...`, create the file `/etc/sudoers.d/netdata` with this content:
```
netdata ALL = (root) NOPASSWD: /sbin/ipsec whack --status
netdata ALL = (root) NOPASSWD: /sbin/ipsec whack --trafficstatus
```
Make sure the path `/sbin/ipsec` matches your setup (execute `which ipsec` to find the right path).
---

View File

@ -0,0 +1 @@
integrations/libreswan.md

View File

@ -0,0 +1,189 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/charts.d.plugin/libreswan/README.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/charts.d.plugin/libreswan/metadata.yaml"
sidebar_label: "Libreswan"
learn_status: "Published"
learn_rel_path: "Data Collection/VPNs"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# Libreswan
Plugin: charts.d.plugin
Module: libreswan
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Monitor Libreswan performance for optimal IPsec VPN operations. Improve your VPN operations with Netdata's real-time metrics and built-in alerts.
The collector uses the `ipsec` command to collect the information it needs.
This collector is supported on all platforms.
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
### Default Behavior
#### Auto-Detection
This integration doesn't support auto-detection.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
The default configuration for this integration is not expected to impose a significant performance impact on the system.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per IPSEC tunnel
Metrics related to IPSEC tunnels. Each tunnel provides its own set of the following metrics.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| libreswan.net | in, out | kilobits/s |
| libreswan.uptime | uptime | seconds |
## Alerts
There are no alerts configured by default for this integration.
## Setup
### Prerequisites
#### Install charts.d plugin
If [using our official native DEB/RPM packages](https://github.com/netdata/netdata/blob/master/packaging/installer/UPDATE.md#determine-which-installation-method-you-used), make sure `netdata-plugin-chartsd` is installed.
#### Permissions to execute `ipsec`
The plugin executes 2 commands to collect all the information it needs:
```sh
ipsec whack --status
ipsec whack --trafficstatus
```
The first command is used to extract the currently established tunnels, their IDs and their names.
The second command is used to extract the current uptime and traffic.
Most probably the `netdata` user will not be able to query Libreswan, so the `ipsec` commands will be denied.
The plugin attempts to run `ipsec` as `sudo ipsec ...` to get access to the Libreswan statistics.
To allow the `netdata` user to execute `sudo ipsec ...`, create the file `/etc/sudoers.d/netdata` with this content:
```
netdata ALL = (root) NOPASSWD: /sbin/ipsec whack --status
netdata ALL = (root) NOPASSWD: /sbin/ipsec whack --trafficstatus
```
Make sure the path `/sbin/ipsec` matches your setup (execute `which ipsec` to find the right path).
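As a quick check, you can then run the two collection commands as the `netdata` user; both should print status output instead of a sudo permission error:
```bash
sudo -u netdata sudo /sbin/ipsec whack --status
sudo -u netdata sudo /sbin/ipsec whack --trafficstatus
```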
### Configuration
#### File
The configuration file name for this integration is `charts.d/libreswan.conf`.
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config charts.d/libreswan.conf
```
#### Options
The config file is sourced by the charts.d plugin. It's a standard bash file.
The following collapsed table contains all the options that can be configured for the libreswan collector.
<details><summary>Config options</summary>
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| libreswan_update_every | The data collection frequency. If unset, will inherit the netdata update frequency. | | False |
| libreswan_priority | The charts priority on the dashboard | | False |
| libreswan_retries | The number of retries to do in case of failure before disabling the collector. | | False |
| libreswan_sudo | Whether to run `ipsec` with `sudo` or not. | | False |
</details>
#### Examples
##### Run `ipsec` without sudo
Run the `ipsec` utility without sudo
```bash
# the data collection frequency
# if unset, will inherit the netdata update frequency
#libreswan_update_every=1
# the charts priority on the dashboard
#libreswan_priority=90000
# the number of retries to do in case of failure
# before disabling the module
#libreswan_retries=10
# set to 1, to run ipsec with sudo (the default)
# set to 0, to run ipsec without sudo
libreswan_sudo=0
```
## Troubleshooting
### Debug Mode
To troubleshoot issues with the `libreswan` collector, run the `charts.d.plugin` with the debug option enabled. The output
should give you clues as to why the collector isn't working.
- Navigate to the `plugins.d` directory, usually at `/usr/libexec/netdata/plugins.d/`. If that's not the case on
your system, open `netdata.conf` and look for the `plugins` setting under `[directories]`.
```bash
cd /usr/libexec/netdata/plugins.d/
```
- Switch to the `netdata` user.
```bash
sudo -u netdata -s
```
- Run the `charts.d.plugin` to debug the collector:
```bash
./charts.d.plugin debug 1 libreswan
```

View File

@ -1,79 +0,0 @@
<!--
title: "UPS/PDU monitoring with Netdata"
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/charts.d.plugin/nut/README.md"
sidebar_label: "UPS/PDU"
learn_status: "Published"
learn_topic_type: "References"
learn_rel_path: "Integrations/Monitor/Remotes/Devices"
-->
# UPS/PDU collector
Collects UPS data for all power devices configured in the system.
The following charts will be created:
1. **UPS Charge**
- percentage changed
2. **UPS Battery Voltage**
- current voltage
- high voltage
- low voltage
- nominal voltage
3. **UPS Input Voltage**
- current voltage
- fault voltage
- nominal voltage
4. **UPS Input Current**
- nominal current
5. **UPS Input Frequency**
- current frequency
- nominal frequency
6. **UPS Output Voltage**
- current voltage
7. **UPS Load**
- current load
8. **UPS Temperature**
- current temperature
## Configuration
If using [our official native DEB/RPM packages](https://github.com/netdata/netdata/blob/master/packaging/installer/methods/packages.md), make sure `netdata-plugin-chartsd` is installed.
Edit the `charts.d/nut.conf` configuration file using `edit-config` from the Netdata [config
directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is typically at `/etc/netdata`.
```bash
cd /etc/netdata # Replace this path with your Netdata config directory, if different
sudo ./edit-config charts.d/nut.conf
```
This is the internal default for `charts.d/nut.conf`
```sh
# a space separated list of UPS names
# if empty, the list returned by 'upsc -l' will be used
nut_ups=
# how frequently to collect UPS data
nut_update_every=2
```
---

View File

@ -0,0 +1 @@
integrations/network_ups_tools_nut.md

View File

@ -0,0 +1,203 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/charts.d.plugin/nut/README.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/charts.d.plugin/nut/metadata.yaml"
sidebar_label: "Network UPS Tools (NUT)"
learn_status: "Published"
learn_rel_path: "Data Collection/UPS"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# Network UPS Tools (NUT)
Plugin: charts.d.plugin
Module: nut
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Examine UPS/PDU metrics with Netdata for insights into power device performance. Improve your power device performance with comprehensive dashboards and anomaly detection.
This collector uses `nut` (Network UPS Tools) to query statistics for multiple UPS devices.
This collector is supported on all platforms.
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
### Default Behavior
#### Auto-Detection
This integration doesn't support auto-detection.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
The default configuration for this integration is not expected to impose a significant performance impact on the system.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per ups
Metrics related to UPS. Each UPS provides its own set of the following metrics.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| nut.charge | charge | percentage |
| nut.runtime | runtime | seconds |
| nut.battery.voltage | voltage, high, low, nominal | Volts |
| nut.input.voltage | voltage, fault, nominal | Volts |
| nut.input.current | nominal | Ampere |
| nut.input.frequency | frequency, nominal | Hz |
| nut.output.voltage | voltage | Volts |
| nut.load | load | percentage |
| nut.load_usage | load_usage | Watts |
| nut.temperature | temp | temperature |
| nut.clients | clients | clients |
## Alerts
The following alerts are available:
| Alert name | On metric | Description |
|:------------|:----------|:------------|
| [ nut_ups_charge ](https://github.com/netdata/netdata/blob/master/health/health.d/nut.conf) | nut.charge | average UPS charge over the last minute |
| [ nut_10min_ups_load ](https://github.com/netdata/netdata/blob/master/health/health.d/nut.conf) | nut.load | average UPS load over the last 10 minutes |
| [ nut_last_collected_secs ](https://github.com/netdata/netdata/blob/master/health/health.d/nut.conf) | nut.load | number of seconds since the last successful data collection |
## Setup
### Prerequisites
#### Install charts.d plugin
If [using our official native DEB/RPM packages](https://github.com/netdata/netdata/blob/master/packaging/installer/UPDATE.md#determine-which-installation-method-you-used), make sure `netdata-plugin-chartsd` is installed.
#### Required software
Make sure Network UPS Tools (`nut`) is installed and can detect your UPS devices.
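You can confirm this with `upsc`, the same interface the collector queries (the UPS name below is an example):
```bash
upsc -l                 # list the UPS devices known to the local upsd daemon
upsc myups@localhost    # dump all variables (battery.charge, ups.load, ...)
```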
### Configuration
#### File
The configuration file name for this integration is `charts.d/nut.conf`.
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config charts.d/nut.conf
```
#### Options
The config file is sourced by the charts.d plugin. It's a standard bash file.
The following collapsed table contains all the options that can be configured for the nut collector.
<details><summary>Config options</summary>
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| nut_ups | A space separated list of UPS names. If empty, the list returned by `upsc -l` will be used. | | False |
| nut_names | Each line represents an alias for one UPS. If empty, the FQDN will be used. | | False |
| nut_timeout | How long to wait for nut to respond. | | False |
| nut_clients_chart | Set this to 1 to enable another chart showing the number of UPS clients connected to `upsd`. | | False |
| nut_update_every | The data collection frequency. If unset, will inherit the netdata update frequency. | | False |
| nut_priority | The charts priority on the dashboard | | False |
| nut_retries | The number of retries to do in case of failure before disabling the collector. | | False |
</details>
#### Examples
##### Provide names to UPS devices
Map aliases to UPS devices
<details><summary>Config</summary>
```bash
# a space separated list of UPS names
# if empty, the list returned by 'upsc -l' will be used
#nut_ups=
# each line represents an alias for one UPS
# if empty, the FQDN will be used
nut_names["XXXXXX"]="UPS-office"
nut_names["YYYYYY"]="UPS-rack"
# how much time in seconds, to wait for nut to respond
#nut_timeout=2
# set this to 1, to enable another chart showing the number
# of UPS clients connected to upsd
#nut_clients_chart=1
# the data collection frequency
# if unset, will inherit the netdata update frequency
#nut_update_every=2
# the charts priority on the dashboard
#nut_priority=90000
# the number of retries to do in case of failure
# before disabling the module
#nut_retries=10
```
</details>
## Troubleshooting
### Debug Mode
To troubleshoot issues with the `nut` collector, run the `charts.d.plugin` with the debug option enabled. The output
should give you clues as to why the collector isn't working.
- Navigate to the `plugins.d` directory, usually at `/usr/libexec/netdata/plugins.d/`. If that's not the case on
your system, open `netdata.conf` and look for the `plugins` setting under `[directories]`.
```bash
cd /usr/libexec/netdata/plugins.d/
```
- Switch to the `netdata` user.
```bash
sudo -u netdata -s
```
- Run the `charts.d.plugin` to debug the collector:
```bash
./charts.d.plugin debug 1 nut
```

View File

@ -1,24 +0,0 @@
<!--
title: "OpenSIPS monitoring with Netdata"
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/charts.d.plugin/opensips/README.md"
sidebar_label: "OpenSIPS"
learn_status: "Published"
learn_topic_type: "References"
learn_rel_path: "Integrations/Monitor/Networking"
-->
# OpenSIPS collector
## Configuration
If using [our official native DEB/RPM packages](https://github.com/netdata/netdata/blob/master/packaging/installer/methods/packages.md), make sure `netdata-plugin-chartsd` is installed.
Edit the `charts.d/opensips.conf` configuration file using `edit-config` from the Netdata [config
directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is typically at `/etc/netdata`.
```bash
cd /etc/netdata # Replace this path with your Netdata config directory, if different
sudo ./edit-config charts.d/opensips.conf
```

View File

@ -0,0 +1 @@
integrations/opensips.md

View File

@ -0,0 +1,187 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/charts.d.plugin/opensips/README.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/charts.d.plugin/opensips/metadata.yaml"
sidebar_label: "OpenSIPS"
learn_status: "Published"
learn_rel_path: "Data Collection/Telephony Servers"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# OpenSIPS
Plugin: charts.d.plugin
Module: opensips
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Examine OpenSIPS metrics for insights into SIP server operations. Study call rates, error rates, and response times for reliable voice over IP services.
The collector uses the `opensipsctl` command line utility to gather OpenSIPS metrics.
This collector is supported on all platforms.
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
### Default Behavior
#### Auto-Detection
The collector will attempt to call `opensipsctl` with a default set of parameters, even without any configuration.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
The default configuration for this integration is not expected to impose a significant performance impact on the system.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per OpenSIPS instance
These metrics refer to the entire monitored application.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| opensips.dialogs_active | active, early | dialogs |
| opensips.users | registered, location, contacts, expires | users |
| opensips.registrar | accepted, rejected | registrations/s |
| opensips.transactions | UAS, UAC | transactions/s |
| opensips.core_rcv | requests, replies | queries/s |
| opensips.core_fwd | requests, replies | queries/s |
| opensips.core_drop | requests, replies | queries/s |
| opensips.core_err | requests, replies | queries/s |
| opensips.core_bad | bad_URIs_rcvd, unsupported_methods, bad_msg_hdr | queries/s |
| opensips.tm_replies | received, relayed, local | replies/s |
| opensips.transactions_status | 2xx, 3xx, 4xx, 5xx, 6xx | transactions/s |
| opensips.transactions_inuse | inuse | transactions |
| opensips.sl_replies | 1xx, 2xx, 3xx, 4xx, 5xx, 6xx, sent, error, ACKed | replies/s |
| opensips.dialogs | processed, expire, failed | dialogs/s |
| opensips.net_waiting | UDP, TCP | kilobytes |
| opensips.uri_checks | positive, negative | checks/s |
| opensips.traces | requests, replies | traces/s |
| opensips.shmem | total, used, real_used, max_used, free | kilobytes |
| opensips.shmem_fragment | fragments | fragments |
## Alerts
There are no alerts configured by default for this integration.
## Setup
### Prerequisites
#### Install charts.d plugin
If [using our official native DEB/RPM packages](https://github.com/netdata/netdata/blob/master/packaging/installer/UPDATE.md#determine-which-installation-method-you-used), make sure `netdata-plugin-chartsd` is installed.
#### Required software
The collector requires `opensipsctl` to be installed.
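You can check that it works by requesting the statistics the collector reads; this matches the collector's default `opensips_opts`:
```bash
# Should print the full list of OpenSIPS statistics counters
opensipsctl fifo get_statistics all
```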
### Configuration
#### File
The configuration file name for this integration is `charts.d/opensips.conf`.
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config charts.d/opensips.conf
```
#### Options
The config file is sourced by the charts.d plugin. It's a standard bash file.
The following collapsed table contains all the options that can be configured for the opensips collector.
<details><summary>Config options</summary>
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| opensips_opts | Specify parameters to the `opensipsctl` command. If the default value fails to get global status, set here whatever options are needed to connect to the opensips server. | | False |
| opensips_cmd | If `opensipsctl` is not in `$PATH`, specify its full path here. | | False |
| opensips_timeout | How long to wait for `opensipsctl` to respond. | | False |
| opensips_update_every | The data collection frequency. If unset, will inherit the netdata update frequency. | | False |
| opensips_priority | The charts priority on the dashboard. | | False |
| opensips_retries | The number of retries to do in case of failure before disabling the collector. | | False |
</details>
#### Examples
##### Custom `opensipsctl` command
Set a custom path to the `opensipsctl` command
```bash
#opensips_opts="fifo get_statistics all"
opensips_cmd=/opt/opensips/bin/opensipsctl
#opensips_timeout=2
# the data collection frequency
# if unset, will inherit the netdata update frequency
#opensips_update_every=5
# the charts priority on the dashboard
#opensips_priority=80000
# the number of retries to do in case of failure
# before disabling the module
#opensips_retries=10
```
## Troubleshooting
### Debug Mode
To troubleshoot issues with the `opensips` collector, run the `charts.d.plugin` with the debug option enabled. The output
should give you clues as to why the collector isn't working.
- Navigate to the `plugins.d` directory, usually at `/usr/libexec/netdata/plugins.d/`. If that's not the case on
your system, open `netdata.conf` and look for the `plugins` setting under `[directories]`.
```bash
cd /usr/libexec/netdata/plugins.d/
```
- Switch to the `netdata` user.
```bash
sudo -u netdata -s
```
- Run the `charts.d.plugin` to debug the collector:
```bash
./charts.d.plugin debug 1 opensips
```

View File

@ -1,81 +0,0 @@
# Linux machine sensors collector
Use this collector when `lm-sensors` doesn't work on your device (e.g. for RPi temperatures).
For all other cases use the [Python collector](https://github.com/netdata/netdata/blob/master/collectors/python.d.plugin/sensors), which supports multiple
jobs, is more efficient and performs calculations on top of the kernel provided values.
This plugin will provide charts for all configured system sensors, by reading sensors directly from the kernel.
The values graphed are the raw hardware values of the sensors.
The plugin will create Netdata charts for:
1. **Temperature**
2. **Voltage**
3. **Current**
4. **Power**
5. **Fans Speed**
6. **Energy**
7. **Humidity**
One chart for every sensor chip found and each of the above will be created.
## Enable the collector
If using [our official native DEB/RPM packages](https://github.com/netdata/netdata/blob/master/packaging/installer/methods/packages.md), make sure `netdata-plugin-chartsd` is installed.
The `sensors` collector is disabled by default.
To enable the collector, you need to edit the configuration file of `charts.d/sensors.conf`. You can do so by using the `edit config` script.
> ### Info
>
> To edit configuration files in a safe way, we provide the [`edit config` script](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#use-edit-config-to-edit-configuration-files) located in your [Netdata config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory) (typically is `/etc/netdata`) that creates the proper file and opens it in an editor automatically.
> It is recommended to use this way for configuring Netdata.
>
> Please also note that after most configuration changes you will need to [restart the Agent](https://github.com/netdata/netdata/blob/master/docs/configure/start-stop-restart.md) for the changes to take effect.
```bash
cd /etc/netdata # Replace this path with your Netdata config directory, if different
sudo ./edit-config charts.d.conf
```
You need to uncomment the regarding `sensors`, and set the value to `force`.
```shell
# example=force
sensors=force
```
## Configuration
Edit the `charts.d/sensors.conf` configuration file using `edit-config`:
```bash
cd /etc/netdata # Replace this path with your Netdata config directory, if different
sudo ./edit-config charts.d/sensors.conf
```
This is the internal default for `charts.d/sensors.conf`
```sh
# the directory the kernel keeps sensor data
sensors_sys_dir="${NETDATA_HOST_PREFIX}/sys/devices"
# how deep in the tree to check for sensor data
sensors_sys_depth=10
# if set to 1, the script will overwrite internal
# script functions with code generated ones
# leave to 1, is faster
sensors_source_update=1
# how frequently to collect sensor data
# the default is to collect it at every iteration of charts.d
sensors_update_every=
# array of sensors which are excluded
# the default is to include all
sensors_excluded=()
```
---

View File

@ -0,0 +1 @@
integrations/linux_sensors_sysfs.md

View File

@ -0,0 +1,196 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/charts.d.plugin/sensors/README.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/charts.d.plugin/sensors/metadata.yaml"
sidebar_label: "Linux Sensors (sysfs)"
learn_status: "Published"
learn_rel_path: "Data Collection/Hardware Devices and Sensors"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# Linux Sensors (sysfs)
Plugin: charts.d.plugin
Module: sensors
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Use this collector when `lm-sensors` doesn't work on your device (e.g. for RPi temperatures).
For all other cases use the [Python collector](https://github.com/netdata/netdata/blob/master/collectors/python.d.plugin/sensors), which supports multiple jobs, is more efficient, and performs calculations on top of the kernel-provided values.
It provides charts for all configured system sensors by reading them directly from the kernel.
The values graphed are the raw hardware values of the sensors.
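The kernel exposes these values as plain files under sysfs. A sketch of what the collector's directory scan finds (paths vary per machine):
```bash
find /sys/devices -name 'temp*_input' 2>/dev/null | head   # temperature inputs
cat /sys/class/hwmon/hwmon0/temp1_input                    # example path; raw value in millidegrees Celsius
```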
This collector is only supported on the following platforms:
- Linux
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
### Default Behavior
#### Auto-Detection
By default, the collector will try to read entries under `/sys/devices`.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
The default configuration for this integration is not expected to impose a significant performance impact on the system.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per sensor chip
Metrics related to sensor chips. Each chip provides its own set of the following metrics.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| sensors.temp | {filename} | Celsius |
| sensors.volt | {filename} | Volts |
| sensors.curr | {filename} | Ampere |
| sensors.power | {filename} | Watt |
| sensors.fans | {filename} | Rotations / Minute |
| sensors.energy | {filename} | Joule |
| sensors.humidity | {filename} | Percent |
## Alerts
There are no alerts configured by default for this integration.
## Setup
### Prerequisites
#### Install charts.d plugin
If [using our official native DEB/RPM packages](https://github.com/netdata/netdata/blob/master/packaging/installer/UPDATE.md#determine-which-installation-method-you-used), make sure `netdata-plugin-chartsd` is installed.
#### Enable the sensors collector
The `sensors` collector is disabled by default. To enable it, use `edit-config` from the Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is typically at `/etc/netdata`, to edit the `charts.d.conf` file.
```bash
cd /etc/netdata # Replace this path with your Netdata config directory, if different
sudo ./edit-config charts.d.conf
```
Change the value of the `sensors` setting to `force` and uncomment the line. Save the file and restart the Netdata Agent with `sudo systemctl restart netdata`, or the [appropriate method](https://github.com/netdata/netdata/blob/master/docs/configure/start-stop-restart.md) for your system.
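In `charts.d.conf` the relevant line looks like this; uncomment it and set the value:
```bash
# example=force
sensors=force
```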
### Configuration
#### File
The configuration file name for this integration is `charts.d/sensors.conf`.
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config charts.d/sensors.conf
```
#### Options
The config file is sourced by the charts.d plugin. It's a standard bash file.
The following collapsed table contains all the options that can be configured for the sensors collector.
<details><summary>Config options</summary>
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| sensors_sys_dir | The directory the kernel exposes sensor data. | | False |
| sensors_sys_depth | How deep in the tree to check for sensor data. | | False |
| sensors_source_update | If set to 1, the script will overwrite internal script functions with code generated ones. | | False |
| sensors_update_every | The data collection frequency. If unset, will inherit the netdata update frequency. | | False |
| sensors_priority | The charts priority on the dashboard. | | False |
| sensors_retries | The number of retries to do in case of failure before disabling the collector. | | False |
</details>
#### Examples
##### Set sensors path depth
Set a different sensors path depth
```bash
# the directory the kernel keeps sensor data
#sensors_sys_dir="/sys/devices"
# how deep in the tree to check for sensor data
sensors_sys_depth=5
# if set to 1, the script will overwrite internal
# script functions with code generated ones
# leave to 1, is faster
#sensors_source_update=1
# the data collection frequency
# if unset, will inherit the netdata update frequency
#sensors_update_every=
# the charts priority on the dashboard
#sensors_priority=90000
# the number of retries to do in case of failure
# before disabling the module
#sensors_retries=10
```
## Troubleshooting
### Debug Mode
To troubleshoot issues with the `sensors` collector, run the `charts.d.plugin` with the debug option enabled. The output
should give you clues as to why the collector isn't working.
- Navigate to the `plugins.d` directory, usually at `/usr/libexec/netdata/plugins.d/`. If that's not the case on
your system, open `netdata.conf` and look for the `plugins` setting under `[directories]`.
```bash
cd /usr/libexec/netdata/plugins.d/
```
- Switch to the `netdata` user.
```bash
sudo -u netdata -s
```
- Run the `charts.d.plugin` to debug the collector:
```bash
./charts.d.plugin debug 1 sensors
```

View File

@ -1,68 +0,0 @@
<!--
title: "Printers (cups.plugin)"
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/cups.plugin/README.md"
sidebar_label: "cups.plugin"
learn_status: "Published"
learn_topic_type: "References"
learn_rel_path: "Integrations/Monitor/Remotes/Devices"
-->
# Printers (cups.plugin)
`cups.plugin` collects Common Unix Printing System (CUPS) metrics.
## Prerequisites
This plugin needs a running local CUPS daemon (`cupsd`). This plugin does not need any configuration. Supports cups since version 1.7.
If you installed Netdata using our native packages, you will have to additionally install `netdata-plugin-cups` to use this plugin for data collection. It is not installed by default due to the large number of dependencies it requires.
## Charts
`cups.plugin` provides one common section `destinations` and one section per destination.
> Destinations in CUPS represent individual printers or classes (collections or pools) of printers (<https://www.cups.org/doc/cupspm.html#working-with-destinations>)
The section `server` provides these charts:
1. **destinations by state**
- idle
- printing
- stopped
2. **destinations by options**
- total
- accepting jobs
- shared
3. **total job number by status**
- pending
- processing
- held
4. **total job size by status**
- pending
- processing
- held
For each destination the plugin provides these charts:
1. **job number by status**
- pending
- held
- processing
2. **job size by status**
- pending
- held
- processing
At the moment only job status pending, processing, and held are reported because we do not have a method to collect stopped, canceled, aborted and completed jobs which scales.

View File

@ -0,0 +1 @@
integrations/cups.md

View File

@ -0,0 +1,136 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/cups.plugin/README.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/cups.plugin/metadata.yaml"
sidebar_label: "CUPS"
learn_status: "Published"
learn_rel_path: "Data Collection/Hardware Devices and Sensors"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# CUPS
Plugin: cups.plugin
Module: cups.plugin
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Monitor CUPS performance for achieving optimal printing system operations. Monitor job statuses, queue lengths, and error rates to ensure smooth printing tasks.
The plugin uses the CUPS shared library to connect to and monitor the server.
This collector is supported on all platforms.
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
The plugin needs access to the server. Netdata sets the required permissions at installation time so the plugin can reach the server through its library.
### Default Behavior
#### Auto-Detection
The plugin detects when CUPS server is running and tries to connect to it.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
The default configuration for this integration is not expected to impose a significant performance impact on the system.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per CUPS instance
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| cups.dests_state | idle, printing, stopped | dests |
| cups.dests_option | total, acceptingjobs, shared | dests |
| cups.job_num | pending, held, processing | jobs |
| cups.job_size | pending, held, processing | KB |
### Per destination
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| cups.destination_job_num | pending, held, processing | jobs |
| cups.destination_job_size | pending, held, processing | KB |
## Alerts
There are no alerts configured by default for this integration.
## Setup
### Prerequisites
#### Minimum setup
The CUPS server must be installed and running. If you installed `netdata` using a package manager, it is also necessary to install the package `netdata-plugin-cups`.
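A quick way to check the scheduler before enabling the plugin (the install command is a DEB example; adjust for your package manager):
```bash
lpstat -r                                  # prints "scheduler is running" when cupsd is up
sudo apt-get install netdata-plugin-cups   # native DEB packages only
```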
### Configuration
#### File
The configuration file name for this integration is `netdata.conf`.
Configuration for this specific integration is located in the `[plugin:cups]` section within that file.
The file format is a modified INI syntax. The general structure is:
```toml
[section1]
option 1 = some value
option 2 = some other value
[section2]
option 3 = some third value
```
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config netdata.conf
```
#### Options
<details><summary>Config options</summary>
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| update every | Data collection frequency. | | False |
| command options | Additional parameters for the collector. | | False |
</details>
#### Examples
There are no configuration examples.

View File

@ -0,0 +1,133 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/debugfs.plugin/integrations/linux_zswap.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/debugfs.plugin/metadata.yaml"
sidebar_label: "Linux ZSwap"
learn_status: "Published"
learn_rel_path: "Data Collection/Linux Systems/Memory"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# Linux ZSwap
Plugin: debugfs.plugin
Module: /sys/kernel/debug/zswap
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Collects zswap performance metrics on Linux systems.
It parses data from the `debugfs` filesystem.
This collector is only supported on the following platforms:
- Linux
This collector only supports collecting metrics from a single instance of this integration.
This integration requires read access to files under `/sys/kernel/debug/zswap`, which are accessible only to the root user by default. Netdata uses Linux Capabilities to give the plugin access to debugfs. `CAP_DAC_READ_SEARCH` is added automatically during installation. This capability allows bypassing file read permission checks and directory read and execute permission checks. If file capabilities are not usable, then the plugin is instead installed with the SUID bit set in permissions so that it runs as root.
### Default Behavior
#### Auto-Detection
Assuming that debugfs is mounted and the required permissions are available, this integration will automatically detect whether or not the system is using zswap.
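You can check both conditions by hand; a quick sketch:
```bash
cat /sys/module/zswap/parameters/enabled    # Y when zswap is enabled
sudo grep -r . /sys/kernel/debug/zswap/     # the statistics the collector reads
```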
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
The default configuration for this integration is not expected to impose a significant performance impact on the system.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
Monitor the performance statistics of zswap.
### Per Linux ZSwap instance
Global zswap performance metrics.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| system.zswap_pool_compression_ratio | compression_ratio | ratio |
| system.zswap_pool_compressed_size | compressed_size | bytes |
| system.zswap_pool_raw_size | uncompressed_size | bytes |
| system.zswap_rejections | compress_poor, kmemcache_fail, alloc_fail, reclaim_fail | rejections/s |
| system.zswap_pool_limit_hit | limit | events/s |
| system.zswap_written_back_raw_bytes | written_back | bytes/s |
| system.zswap_same_filled_raw_size | same_filled | bytes |
| system.zswap_duplicate_entry | duplicate | entries/s |
## Alerts
There are no alerts configured by default for this integration.
## Setup
### Prerequisites
#### filesystem
The debugfs filesystem must be mounted on your host for the plugin to collect data. You can mount it manually with `sudo mount -t debugfs none /sys/kernel/debug/`. It is also recommended to add an entry to fstab(5) so the filesystem is mounted automatically before Netdata starts.
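For example, an fstab(5) entry such as the following mounts debugfs at boot (a common form; adapt it to your system):
```bash
# /etc/fstab
debugfs  /sys/kernel/debug  debugfs  defaults  0  0
```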
### Configuration
#### File
The configuration file name for this integration is `netdata.conf`.
Configuration for this specific integration is located in the `[plugin:debugfs]` section within that file.
The file format is a modified INI syntax. The general structure is:
```toml
[section1]
option 1 = some value
option 2 = some other value
[section2]
option 3 = some third value
```
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config netdata.conf
```
#### Options
<details><summary>Config options</summary>
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| update every | Data collection frequency. | | False |
| command options | Additional parameters for the collector. | | False |
</details>
#### Examples
There are no configuration examples.

View File

@ -0,0 +1,127 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/debugfs.plugin/integrations/power_capping.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/debugfs.plugin/metadata.yaml"
sidebar_label: "Power Capping"
learn_status: "Published"
learn_rel_path: "Data Collection/Linux Systems/Kernel"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# Power Capping
Plugin: debugfs.plugin
Module: intel_rapl
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Collects power capping performance metrics on Linux systems.
It parses data from the `debugfs` filesystem.
This collector is only supported on the following platforms:
- Linux
This collector only supports collecting metrics from a single instance of this integration.
This integration requires read access to files under `/sys/devices/virtual/powercap`, which are accessible only to the root user by default. Netdata uses Linux Capabilities to give the plugin access to debugfs. `CAP_DAC_READ_SEARCH` is added automatically during installation. This capability allows bypassing file read permission checks and directory read and execute permission checks. If file capabilities are not usable, then the plugin is instead installed with the SUID bit set in permissions so that it runs as root.
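On systems with the `intel_rapl` driver loaded, you can inspect the zone layout directly (zone indices are examples and differ per CPU):
```bash
cat /sys/devices/virtual/powercap/intel-rapl/intel-rapl:0/name            # e.g. package-0
sudo cat /sys/devices/virtual/powercap/intel-rapl/intel-rapl:0/energy_uj  # cumulative energy in microjoules
```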
### Default Behavior
#### Auto-Detection
Assuming that debugfs is mounted and the required permissions are available, this integration will automatically detect whether the Intel RAPL power capping interface is available.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
The default configuration for this integration is not expected to impose a significant performance impact on the system.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
Monitor the power consumption of the Intel RAPL zones.
### Per Power Capping instance
Global Intel RAPL zones.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| cpu.powercap_intel_rapl_zone | Power | Watts |
| cpu.powercap_intel_rapl_subzones | dram, core, uncore | Watts |
## Alerts
There are no alerts configured by default for this integration.
## Setup
### Prerequisites
#### filesystem
The debugfs filesystem must be mounted on your host for the plugin to collect data. You can mount it manually with `sudo mount -t debugfs none /sys/kernel/debug/`. It is also recommended to add an entry to fstab(5) so the filesystem is mounted automatically before Netdata starts.
### Configuration
#### File
The configuration file name for this integration is `netdata.conf`.
Configuration for this specific integration is located in the `[plugin:debugfs]` section within that file.
The file format is a modified INI syntax. The general structure is:
```toml
[section1]
option 1 = some value
option 2 = some other value
[section2]
option 3 = some third value
```
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config netdata.conf
```
#### Options
<details><summary>Config options</summary>
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| update every | Data collection frequency. | | False |
| command options | Additional parameters for the collector. | | False |
</details>
#### Examples
There are no configuration examples.

View File

@ -0,0 +1,131 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/debugfs.plugin/integrations/system_memory_fragmentation.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/debugfs.plugin/metadata.yaml"
sidebar_label: "System Memory Fragmentation"
learn_status: "Published"
learn_rel_path: "Data Collection/Linux Systems/Memory"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# System Memory Fragmentation
Plugin: debugfs.plugin
Module: /sys/kernel/debug/extfrag
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Collects memory fragmentation statistics from the Linux kernel
It parses data from a `debugfs` file.
This collector is only supported on the following platforms:
- Linux
This collector only supports collecting metrics from a single instance of this integration.
This integration requires read access to files under `/sys/kernel/debug/extfrag`, which are accessible only to the root user by default. Netdata uses Linux Capabilities to give the plugin access to debugfs. `CAP_DAC_READ_SEARCH` is added automatically during installation. This capability allows bypassing file read permission checks and directory read and execute permission checks. If file capabilities are not usable, then the plugin is instead installed with the SUID bit set in permissions so that it runs as root.
### Default Behavior
#### Auto-Detection
Assuming that debugfs is mounted and the required permissions are available, this integration will automatically run by default.
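The raw data lives in a single debugfs file, readable only by root:
```bash
# One line per NUMA node and zone, with one fragmentation index per allocation order
sudo cat /sys/kernel/debug/extfrag/extfrag_index
```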
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
The default configuration for this integration is not expected to impose a significant performance impact on the system.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
Monitor the overall memory fragmentation of the system.
### Per node
Memory fragmentation statistics for each NUMA node in the system.
Labels:
| Label | Description |
|:-----------|:----------------|
| numa_node | The NUMA node the metrics are associated with. |
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| mem.fragmentation_index_dma | order0, order1, order2, order3, order4, order5, order6, order7, order8, order9, order10 | index |
| mem.fragmentation_index_dma32 | order0, order1, order2, order3, order4, order5, order6, order7, order8, order9, order10 | index |
| mem.fragmentation_index_normal | order0, order1, order2, order3, order4, order5, order6, order7, order8, order9, order10 | index |
## Alerts
There are no alerts configured by default for this integration.
## Setup
### Prerequisites
#### filesystem
The debugfs filesystem must be mounted on your host for the plugin to collect data. You can mount it manually with `sudo mount -t debugfs none /sys/kernel/debug/`. It is also recommended to add an entry to fstab(5) so the filesystem is mounted automatically before Netdata starts.
### Configuration
#### File
The configuration file name for this integration is `netdata.conf`.
Configuration for this specific integration is located in the `[plugin:debugfs]` section within that file.
The file format is a modified INI syntax. The general structure is:
```toml
[section1]
option 1 = some value
option 2 = some other value
[section2]
option 3 = some third value
```
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config netdata.conf
```
#### Options
<details><summary>Config options</summary>
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| update every | Data collection frequency. | | False |
| command options | Additional parameters for the collector. | | False |
</details>
#### Examples
There are no configuration examples.

View File

@ -1,55 +0,0 @@
# Monitor disk (diskspace.plugin)
This plugin monitors the disk space usage of mounted disks, under Linux. The plugin requires Netdata to have execute/search permissions on the mount point itself, as well as each component of the absolute path to the mount point.
Two charts are available for every mount:
- Disk Space Usage
- Disk Files (inodes) Usage
## configuration
Simple patterns can be used to exclude mounts from showed statistics based on path or filesystem. By default read-only mounts are not displayed. To display them `yes` should be set for a chart instead of `auto`.
By default, Netdata will enable monitoring metrics only when they are not zero. If they are constantly zero they are ignored. Metrics that will start having values, after Netdata is started, will be detected and charts will be automatically added to the dashboard (a refresh of the dashboard is needed for them to appear though).
Netdata will try to detect mounts that are duplicates (i.e. from the same device), or binds, and will not display charts for them, as the device is usually already monitored.
To configure this plugin, you need to edit the configuration file `netdata.conf`. You can do so by using the `edit config` script.
> ### Info
>
> To edit configuration files in a safe way, we provide the [`edit-config` script](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#use-edit-config-to-edit-configuration-files) located in your [Netdata config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory) (typically `/etc/netdata`) that creates the proper file and opens it in an editor automatically.
> It is recommended to use this way for configuring Netdata.
>
> Please also note that after most configuration changes you will need to [restart the Agent](https://github.com/netdata/netdata/blob/master/docs/configure/start-stop-restart.md) for the changes to take effect.
```bash
cd /etc/netdata # Replace this path with your Netdata config directory, if different
sudo ./edit-config netdata.conf
```
You can enable the effect of each line by uncommenting it.
You can set `yes` for a chart instead of `auto` to enable it permanently. You can also set the `enable zero metrics` option to `yes` in the `[global]` section which enables charts with zero metrics for all internal Netdata plugins.
```conf
[plugin:proc:diskspace]
# remove charts of unmounted disks = yes
# update every = 1
# check for new mount points every = 15
# exclude space metrics on paths = /proc/* /sys/* /var/run/user/* /run/user/* /snap/* /var/lib/docker/*
# exclude space metrics on filesystems = *gvfs *gluster* *s3fs *ipfs *davfs2 *httpfs *sshfs *gdfs *moosefs fusectl autofs
# space usage for all disks = auto
# inodes usage for all disks = auto
```
Charts can be enabled/disabled for every mount separately; just look for the name of the mount after `[plugin:proc:diskspace:`.
```conf
[plugin:proc:diskspace:/]
# space usage = auto
# inodes usage = auto
```
> For disk performance monitoring, see the `proc` plugin, [here](https://github.com/netdata/netdata/blob/master/collectors/proc.plugin/README.md#monitoring-disks)

View File

@ -0,0 +1 @@
integrations/disk_space.md

View File

@ -0,0 +1,135 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/diskspace.plugin/README.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/diskspace.plugin/metadata.yaml"
sidebar_label: "Disk space"
learn_status: "Published"
learn_rel_path: "Data Collection/Linux Systems"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# Disk space
Plugin: diskspace.plugin
Module: diskspace.plugin
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Monitor Disk space metrics for proficient storage management. Keep track of usage, free space, and error rates to prevent disk space issues.
This collector is supported on all platforms.
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
### Default Behavior
#### Auto-Detection
The plugin reads data from `/proc/self/mountinfo` and `/proc/diskstats`.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
The default configuration for this integration is not expected to impose a significant performance impact on the system.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per mount point
Labels:
| Label | Description |
|:-----------|:----------------|
| mount_point | Path used to mount a filesystem |
| filesystem | The filesystem used to format a partition. |
| mount_root | Root directory where mount points are present. |
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| disk.space | avail, used, reserved_for_root | GiB |
| disk.inodes | avail, used, reserved_for_root | inodes |
## Alerts
The following alerts are available:
| Alert name | On metric | Description |
|:------------|:----------|:------------|
| [ disk_space_usage ](https://github.com/netdata/netdata/blob/master/health/health.d/disks.conf) | disk.space | disk ${label:mount_point} space utilization |
| [ disk_inode_usage ](https://github.com/netdata/netdata/blob/master/health/health.d/disks.conf) | disk.inodes | disk ${label:mount_point} inode utilization |
## Setup
### Prerequisites
No action required.
### Configuration
#### File
The configuration file name for this integration is `netdata.conf`.
Configuration for this specific integration is located in the `[plugin:proc:diskspace]` section within that file.
The file format is a modified INI syntax. The general structure is:
```toml
[section1]
option 1 = some value
option 2 = some other value
[section2]
option 3 = some third value
```
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config netdata.conf
```
#### Options
You can also specify per-mount-point options using a `[plugin:proc:diskspace:mountpoint]` section.
<details><summary>Config options</summary>
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| update every | Data collection frequency. | | False |
| remove charts of unmounted disks | Remove charts when a device is unmounted from the host. | | False |
| check for new mount points every | How often to parse the proc files and check for new mount points. | | False |
| exclude space metrics on paths | Do not show metrics (charts) for the listed paths. This option accepts a Netdata simple pattern. | | False |
| exclude space metrics on filesystems | Do not show metrics (charts) for the listed filesystems. This option accepts a Netdata simple pattern. | | False |
| exclude inode metrics on filesystems | Do not show inode metrics (charts) for the listed filesystems. This option accepts a Netdata simple pattern. | | False |
| space usage for all disks | Define if the plugin will show metrics for space usage. When set to `auto`, the plugin shows the metrics if the filesystem or path was not discarded by the previous options. | | False |
| inodes usage for all disks | Define if the plugin will show metrics for inode usage. When set to `auto`, the plugin shows the metrics if the filesystem or path was not discarded by the previous options. | | False |
</details>
#### Examples
There are no configuration examples.
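As a sketch only (the values mirror the defaults shown in the legacy README above; adjust them to your needs), a global section combined with a per-mount-point section might look like:
```conf
[plugin:proc:diskspace]
    update every = 1
    check for new mount points every = 15
    space usage for all disks = auto
    inodes usage for all disks = auto

[plugin:proc:diskspace:/]
    space usage = auto
    inodes usage = auto
```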

View File

@ -0,0 +1,170 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/ebpf.plugin/integrations/ebpf_cachestat.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/ebpf.plugin/metadata.yaml"
sidebar_label: "eBPF Cachestat"
learn_status: "Published"
learn_rel_path: "Data Collection/eBPF"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# eBPF Cachestat
Plugin: ebpf.plugin
Module: cachestat
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Monitor Linux page cache events, giving users a general view of how the kernel manipulates files.
Attach tracing (kprobe, trampoline) to internal kernel functions according to the options used to compile the kernel.
This collector is only supported on the following platforms:
- Linux
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
The plugin needs setuid because it loads data inside the kernel. Netdata sets the necessary permissions during installation.
### Default Behavior
#### Auto-Detection
The plugin checks kernel compilation flags (CONFIG_KPROBES, CONFIG_BPF, CONFIG_BPF_SYSCALL, CONFIG_BPF_JIT) and presence of BTF files to decide which eBPF program will be attached.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
This thread adds overhead every time an internal kernel function monitored by it is called. The estimated additional time is between 90 and 200 ms per call on kernels that do not have BTF support.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per eBPF Cachestat instance
These metrics show the total number of calls to functions inside the kernel.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| mem.cachestat_ratio | ratio | % |
| mem.cachestat_dirties | dirty | page/s |
| mem.cachestat_hits | hit | hits/s |
| mem.cachestat_misses | miss | misses/s |
### Per apps
These metrics show grouped information per apps group.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| apps.cachestat_ratio | a dimension per app group | % |
| apps.cachestat_dirties | a dimension per app group | page/s |
| apps.cachestat_hits | a dimension per app group | hits/s |
| apps.cachestat_misses | a dimension per app group | misses/s |
### Per cgroup
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| cgroup.cachestat_ratio | ratio | % |
| cgroup.cachestat_dirties | dirty | page/s |
| cgroup.cachestat_hits | hit | hits/s |
| cgroup.cachestat_misses | miss | misses/s |
| services.cachestat_ratio | a dimension per systemd service | % |
| services.cachestat_dirties | a dimension per systemd service | page/s |
| services.cachestat_hits | a dimension per systemd service | hits/s |
| services.cachestat_misses | a dimension per systemd service | misses/s |
## Alerts
There are no alerts configured by default for this integration.
## Setup
### Prerequisites
#### Compile kernel
Check if your kernel was compiled with the necessary options (CONFIG_KPROBES, CONFIG_BPF, CONFIG_BPF_SYSCALL, CONFIG_BPF_JIT) in `/proc/config.gz` or inside the /boot/config file. Some of the cited names can differ according to the preferences of Linux distributions.
When these options are not set, you need to get the kernel source code from https://kernel.org or a kernel package from your distribution (the latter is preferred). Kernel compilation follows a well-defined pattern, but distributions can deliver their configuration files with different names.
Now follow these steps (an illustrative command sequence is sketched after the list):
1. Copy the configuration file to `/usr/src/linux/.config`.
2. Select the necessary options: `make oldconfig`
3. Compile your kernel image: `make bzImage`
4. Compile your modules: `make modules`
5. Copy your new kernel image to the boot loader directory.
6. Install the new modules: `make modules_install`
7. Generate an initial ramdisk image (`initrd`) if necessary.
8. Update your boot loader.
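As a hedged illustration only (the source path, image destination, and the initramfs/boot-loader commands are assumptions that vary per distribution), the steps above map to a command sequence like:
```bash
# Start from the distribution's config for the running kernel (path is an assumption).
cp "/boot/config-$(uname -r)" /usr/src/linux/.config
cd /usr/src/linux
make oldconfig          # select the necessary options
make bzImage            # compile the kernel image
make modules            # compile the modules
sudo make modules_install
sudo cp arch/x86/boot/bzImage /boot/vmlinuz-custom   # copy the image to the boot loader directory
sudo update-initramfs -c -k custom                   # or dracut/mkinitcpio, per distribution
sudo update-grub                                     # or your boot loader's equivalent
```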
### Configuration
#### File
The configuration file name for this integration is `ebpf.d/cachestat.conf`.
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config ebpf.d/cachestat.conf
```
#### Options
All options are defined inside section `[global]`.
<details><summary>Config options</summary>
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| update every | Data collection frequency. | | False |
| ebpf load mode | Define whether the plugin monitors only the function call (`entry`) or also monitors the return (`return`). | | False |
| apps | Enable or disable integration with apps.plugin. | | False |
| cgroups | Enable or disable integration with cgroup.plugin. | | False |
| pid table size | Number of elements stored inside hash tables used to monitor calls per PID. | | False |
| ebpf type format | Define the file type to load an eBPF program. Three options are available: `legacy` (attach only `kprobe`), `co-re` (the plugin tries to use `trampoline` when available), and `auto` (the plugin checks the OS configuration before loading). | | False |
| ebpf co-re tracing | Select the attach method used by the plugin when `co-re` is defined in the previous option. Two options are available: `trampoline` (the option with the lowest overhead) and `probe` (the same as the legacy code). | | False |
| maps per core | Define how the plugin loads its hash maps. When enabled (`yes`), the plugin loads one hash table per core instead of keeping a single centralized table. | | False |
| lifetime | Set default lifetime for thread when enabled by cloud. | | False |
</details>
#### Examples
There are no configuration examples.
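While no examples ship with the integration, a hedged sketch of `ebpf.d/cachestat.conf` (option names come from the table above; the values are illustrative assumptions, not documented defaults) could look like:
```conf
[global]
    update every = 5
    ebpf load mode = entry
    apps = yes
    cgroups = no
    pid table size = 32768
    ebpf type format = auto
    ebpf co-re tracing = trampoline
    maps per core = yes
    lifetime = 300
```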

View File

@ -0,0 +1,168 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/ebpf.plugin/integrations/ebpf_dcstat.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/ebpf.plugin/metadata.yaml"
sidebar_label: "eBPF DCstat"
learn_status: "Published"
learn_rel_path: "Data Collection/eBPF"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# eBPF DCstat
Plugin: ebpf.plugin
Module: dcstat
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Monitor directory cache events per application, giving an overall view of files served from memory or from the storage device.
Attach tracing (kprobe, trampoline) to internal kernel functions according to the options used to compile the kernel.
This collector is only supported on the following platforms:
- Linux
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
The plugin needs setuid because it loads data inside the kernel. Netdata sets the necessary permissions during installation.
### Default Behavior
#### Auto-Detection
The plugin checks kernel compilation flags (CONFIG_KPROBES, CONFIG_BPF, CONFIG_BPF_SYSCALL, CONFIG_BPF_JIT) and presence of BTF files to decide which eBPF program will be attached.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
This thread adds overhead every time an internal kernel function monitored by it is called. The estimated additional time is between 90 and 200 ms per call on kernels that do not have BTF support.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per apps
These metrics show grouped information per apps group.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| apps.dc_ratio | a dimension per app group | % |
| apps.dc_reference | a dimension per app group | files |
| apps.dc_not_cache | a dimension per app group | files |
| apps.dc_not_found | a dimension per app group | files |
### Per filesystem
These metrics show the total number of calls to functions inside the kernel.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| filesystem.dc_reference | reference, slow, miss | files |
| filesystem.dc_hit_ratio | ratio | % |
### Per cgroup
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| cgroup.dc_ratio | ratio | % |
| cgroup.dc_reference | reference | files |
| cgroup.dc_not_cache | slow | files |
| cgroup.dc_not_found | miss | files |
| services.dc_ratio | a dimension per systemd service | % |
| services.dc_reference | a dimension per systemd service | files |
| services.dc_not_cache | a dimension per systemd service | files |
| services.dc_not_found | a dimension per systemd service | files |
## Alerts
There are no alerts configured by default for this integration.
## Setup
### Prerequisites
#### Compile kernel
Check if your kernel was compiled with the necessary options (CONFIG_KPROBES, CONFIG_BPF, CONFIG_BPF_SYSCALL, CONFIG_BPF_JIT) in `/proc/config.gz` or inside the /boot/config file. Some of the cited names can differ according to the preferences of Linux distributions.
When these options are not set, you need to get the kernel source code from https://kernel.org or a kernel package from your distribution (the latter is preferred). Kernel compilation follows a well-defined pattern, but distributions can deliver their configuration files with different names.
Now follow these steps:
1. Copy the configuration file to `/usr/src/linux/.config`.
2. Select the necessary options: `make oldconfig`
3. Compile your kernel image: `make bzImage`
4. Compile your modules: `make modules`
5. Copy your new kernel image to the boot loader directory.
6. Install the new modules: `make modules_install`
7. Generate an initial ramdisk image (`initrd`) if necessary.
8. Update your boot loader.
### Configuration
#### File
The configuration file name for this integration is `ebpf.d/dcstat.conf`.
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config ebpf.d/dcstat.conf
```
#### Options
All options are defined inside section `[global]`.
<details><summary>Config options</summary>
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| update every | Data collection frequency. | | False |
| ebpf load mode | Define whether the plugin monitors only the function call (`entry`) or also monitors the return (`return`). | | False |
| apps | Enable or disable integration with apps.plugin. | | False |
| cgroups | Enable or disable integration with cgroup.plugin. | | False |
| pid table size | Number of elements stored inside hash tables used to monitor calls per PID. | | False |
| ebpf type format | Define the file type to load an eBPF program. Three options are available: `legacy` (attach only `kprobe`), `co-re` (the plugin tries to use `trampoline` when available), and `auto` (the plugin checks the OS configuration before loading). | | False |
| ebpf co-re tracing | Select the attach method used by the plugin when `co-re` is defined in the previous option. Two options are available: `trampoline` (the option with the lowest overhead) and `probe` (the same as the legacy code). | | False |
| maps per core | Define how the plugin loads its hash maps. When enabled (`yes`), the plugin loads one hash table per core instead of keeping a single centralized table. | | False |
| lifetime | Set default lifetime for thread when enabled by cloud. | | False |
</details>
#### Examples
There are no configuration examples.

View File

@ -0,0 +1,132 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/ebpf.plugin/integrations/ebpf_disk.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/ebpf.plugin/metadata.yaml"
sidebar_label: "eBPF Disk"
learn_status: "Published"
learn_rel_path: "Data Collection/eBPF"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# eBPF Disk
Plugin: ebpf.plugin
Module: disk
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Measure latency for I/O events on disk.
Attach tracepoints to internal kernel functions.
This collector is only supported on the following platforms:
- Linux
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
The plugin needs setuid because it loads data inside the kernel. Netdata sets the necessary permissions during installation.
### Default Behavior
#### Auto-Detection
The plugin checks kernel compilation flags (CONFIG_KPROBES, CONFIG_BPF, CONFIG_BPF_SYSCALL, CONFIG_BPF_JIT), files inside debugfs, and presence of BTF files to decide which eBPF program will be attached.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
This thread adds overhead every time an internal kernel function monitored by it is called.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per disk
These metrics measure latency for I/O events on every hard disk present on the host.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| disk.latency_io | latency | calls/s |
## Alerts
There are no alerts configured by default for this integration.
## Setup
### Prerequisites
#### Compile kernel
Check if your kernel was compiled with the necessary options (CONFIG_KPROBES, CONFIG_BPF, CONFIG_BPF_SYSCALL, CONFIG_BPF_JIT) in `/proc/config.gz` or inside the /boot/config file. Some of the cited names can differ according to the preferences of Linux distributions.
When these options are not set, you need to get the kernel source code from https://kernel.org or a kernel package from your distribution (the latter is preferred). Kernel compilation follows a well-defined pattern, but distributions can deliver their configuration files with different names.
Now follow these steps:
1. Copy the configuration file to `/usr/src/linux/.config`.
2. Select the necessary options: `make oldconfig`
3. Compile your kernel image: `make bzImage`
4. Compile your modules: `make modules`
5. Copy your new kernel image to the boot loader directory.
6. Install the new modules: `make modules_install`
7. Generate an initial ramdisk image (`initrd`) if necessary.
8. Update your boot loader.
#### Debug Filesystem
This thread needs to attach a tracepoint to monitor when a process schedules an exit event. To allow this specific feature, it is necessary to mount `debugfs` (`mount -t debugfs none /sys/kernel/debug/`).
### Configuration
#### File
The configuration file name for this integration is `ebpf.d/disk.conf`.
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config ebpf.d/disk.conf
```
#### Options
All options are defined inside section `[global]`.
<details><summary>Config options</summary>
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| update every | Data collection frequency. | | False |
| ebpf load mode | Define whether the plugin monitors only the function call (`entry`) or also monitors the return (`return`). | | False |
| lifetime | Set default lifetime for thread when enabled by cloud. | | False |
</details>
#### Examples
There are no configuration examples.

View File

@ -0,0 +1,168 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/ebpf.plugin/integrations/ebpf_filedescriptor.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/ebpf.plugin/metadata.yaml"
sidebar_label: "eBPF Filedescriptor"
learn_status: "Published"
learn_rel_path: "Data Collection/eBPF"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# eBPF Filedescriptor
Plugin: ebpf.plugin
Module: filedescriptor
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Monitor calls to the functions responsible for opening or closing a file descriptor, and possible errors.
Attach tracing (kprobe and trampoline) to internal kernel functions according to the options used to compile the kernel.
This collector is only supported on the following platforms:
- Linux
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
The plugin needs setuid because it loads data inside the kernel. Netdata sets the necessary permissions during installation.
### Default Behavior
#### Auto-Detection
The plugin checks kernel compilation flags (CONFIG_KPROBES, CONFIG_BPF, CONFIG_BPF_SYSCALL, CONFIG_BPF_JIT) and presence of BTF files to decide which eBPF program will be attached.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
Depending on the kernel version and on how frequently files are opened and closed, this thread adds overhead every time an internal kernel function monitored by it is called. The estimated additional time is between 90 and 200 ms per call on kernels that do not have BTF support.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per cgroup
These metrics show grouped information per cgroup/service.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| cgroup.fd_open | open | calls/s |
| cgroup.fd_open_error | open | calls/s |
| cgroup.fd_closed | close | calls/s |
| cgroup.fd_close_error | close | calls/s |
| services.file_open | a dimension per systemd service | calls/s |
| services.file_open_error | a dimension per systemd service | calls/s |
| services.file_closed | a dimension per systemd service | calls/s |
| services.file_close_error | a dimension per systemd service | calls/s |
### Per eBPF Filedescriptor instance
These metrics show the total number of calls to functions inside the kernel.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| filesystem.file_descriptor | open, close | calls/s |
| filesystem.file_error | open, close | calls/s |
### Per apps
These metrics show grouped information per apps group.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| apps.file_open | a dimension per app group | calls/s |
| apps.file_open_error | a dimension per app group | calls/s |
| apps.file_closed | a dimension per app group | calls/s |
| apps.file_close_error | a dimension per app group | calls/s |
## Alerts
There are no alerts configured by default for this integration.
## Setup
### Prerequisites
#### Compile kernel
Check if your kernel was compiled with the necessary options (CONFIG_KPROBES, CONFIG_BPF, CONFIG_BPF_SYSCALL, CONFIG_BPF_JIT) in `/proc/config.gz` or inside the /boot/config file. Some of the cited names can differ according to the preferences of Linux distributions.
When these options are not set, you need to get the kernel source code from https://kernel.org or a kernel package from your distribution (the latter is preferred). Kernel compilation follows a well-defined pattern, but distributions can deliver their configuration files with different names.
Now follow these steps:
1. Copy the configuration file to `/usr/src/linux/.config`.
2. Select the necessary options: `make oldconfig`
3. Compile your kernel image: `make bzImage`
4. Compile your modules: `make modules`
5. Copy your new kernel image to the boot loader directory.
6. Install the new modules: `make modules_install`
7. Generate an initial ramdisk image (`initrd`) if necessary.
8. Update your boot loader.
### Configuration
#### File
The configuration file name for this integration is `ebpf.d/fd.conf`.
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config ebpf.d/fd.conf
```
#### Options
All options are defined inside section `[global]`.
<details><summary>Config options</summary>
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| update every | Data collection frequency. | | False |
| ebpf load mode | Define whether the plugin monitors only the function call (`entry`) or also monitors the return (`return`). | | False |
| apps | Enable or disable integration with apps.plugin. | | False |
| cgroups | Enable or disable integration with cgroup.plugin. | | False |
| pid table size | Number of elements stored inside hash tables used to monitor calls per PID. | | False |
| ebpf type format | Define the file type to load an eBPF program. Three options are available: `legacy` (attach only `kprobe`), `co-re` (the plugin tries to use `trampoline` when available), and `auto` (the plugin checks the OS configuration before loading). | | False |
| ebpf co-re tracing | Select the attach method used by the plugin when `co-re` is defined in the previous option. Two options are available: `trampoline` (the option with the lowest overhead) and `probe` (the same as the legacy code). | | False |
| maps per core | Define how the plugin loads its hash maps. When enabled (`yes`), the plugin loads one hash table per core instead of keeping a single centralized table. | | False |
| lifetime | Set default lifetime for thread when enabled by cloud. | | False |
</details>
#### Examples
There are no configuration examples.

View File

@ -0,0 +1,158 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/ebpf.plugin/integrations/ebpf_filesystem.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/ebpf.plugin/metadata.yaml"
sidebar_label: "eBPF Filesystem"
learn_status: "Published"
learn_rel_path: "Data Collection/eBPF"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# eBPF Filesystem
Plugin: ebpf.plugin
Module: filesystem
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Monitor latency for the main filesystem actions, like I/O events.
Attach tracing (kprobe, trampoline) to internal kernel functions according to the options used to compile the kernel.
This collector is only supported on the following platforms:
- Linux
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
The plugin needs setuid because it loads data inside the kernel. Netdata sets the necessary permissions during installation.
### Default Behavior
#### Auto-Detection
The plugin checks kernel compilation flags (CONFIG_KPROBES, CONFIG_BPF, CONFIG_BPF_SYSCALL, CONFIG_BPF_JIT), files inside debugfs, and presence of BTF files to decide which eBPF program will be attached.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
The default configuration for this integration is not expected to impose a significant performance impact on the system.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per filesystem
Latency charts associated with filesystem actions.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| filesystem.read_latency | latency period | calls/s |
| filesystem.open_latency | latency period | calls/s |
| filesystem.sync_latency | latency period | calls/s |
### Per filesystem
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| filesystem.write_latency | latency period | calls/s |
### Per eBPF Filesystem instance
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| filesystem.attributte_latency | latency period | calls/s |
## Alerts
There are no alerts configured by default for this integration.
## Setup
### Prerequisites
#### Compile kernel
Check if your kernel was compiled with the necessary options (CONFIG_KPROBES, CONFIG_BPF, CONFIG_BPF_SYSCALL, CONFIG_BPF_JIT) in `/proc/config.gz` or inside the /boot/config file. Some of the cited names can differ according to the preferences of Linux distributions.
When these options are not set, you need to get the kernel source code from https://kernel.org or a kernel package from your distribution (the latter is preferred). Kernel compilation follows a well-defined pattern, but distributions can deliver their configuration files with different names.
Now follow these steps:
1. Copy the configuration file to `/usr/src/linux/.config`.
2. Select the necessary options: `make oldconfig`
3. Compile your kernel image: `make bzImage`
4. Compile your modules: `make modules`
5. Copy your new kernel image to the boot loader directory.
6. Install the new modules: `make modules_install`
7. Generate an initial ramdisk image (`initrd`) if necessary.
8. Update your boot loader.
### Configuration
#### File
The configuration file name for this integration is `ebpf.d/filesystem.conf`.
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config ebpf.d/filesystem.conf
```
#### Options
This configuration file has two different sections. The `[global]` section overwrites the default options, while `[filesystem]` allows the user to select the filesystems to monitor.
<details><summary>Config options</summary>
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| update every | Data collection frequency. | | False |
| ebpf load mode | Define whether the plugin monitors only the function call (`entry`) or also monitors the return (`return`). | | False |
| lifetime | Set default lifetime for thread when enabled by cloud. | | False |
| btrfsdist | Enable or disable latency monitoring for functions associated with btrfs filesystem. | | False |
| ext4dist | Enable or disable latency monitoring for functions associated with ext4 filesystem. | | False |
| nfsdist | Enable or disable latency monitoring for functions associated with nfs filesystem. | | False |
| xfsdist | Enable or disable latency monitoring for functions associated with xfs filesystem. | | False |
| zfsdist | Enable or disable latency monitoring for functions associated with zfs filesystem. | | False |
</details>
#### Examples
There are no configuration examples.
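A hedged sketch showing both sections of `ebpf.d/filesystem.conf` (the filesystem switches come from the table above; the values are illustrative assumptions, not documented defaults):
```conf
[global]
    update every = 5
    ebpf load mode = entry

[filesystem]
    btrfsdist = yes
    ext4dist = yes
    nfsdist = no
    xfsdist = yes
    zfsdist = no
```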

View File

@ -0,0 +1,132 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/ebpf.plugin/integrations/ebpf_hardirq.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/ebpf.plugin/metadata.yaml"
sidebar_label: "eBPF Hardirq"
learn_status: "Published"
learn_rel_path: "Data Collection/eBPF"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# eBPF Hardirq
Plugin: ebpf.plugin
Module: hardirq
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Monitor latency for each HardIRQ available.
Attach tracepoints to internal kernel functions.
This collector is only supported on the following platforms:
- Linux
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
The plugin needs setuid because it loads data inside the kernel. Netdata sets the necessary permissions during installation.
### Default Behavior
#### Auto-Detection
The plugin checks kernel compilation flags (CONFIG_KPROBES, CONFIG_BPF, CONFIG_BPF_SYSCALL, CONFIG_BPF_JIT), files inside debugfs, and presence of BTF files to decide which eBPF program will be attached.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
This thread adds overhead every time an internal kernel function monitored by it is called.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per eBPF Hardirq instance
These metrics show the latest latency observed for each hardIRQ available on the host.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| system.hardirq_latency | hardirq names | milliseconds |
## Alerts
There are no alerts configured by default for this integration.
## Setup
### Prerequisites
#### Compile kernel
Check if your kernel was compiled with the necessary options (CONFIG_KPROBES, CONFIG_BPF, CONFIG_BPF_SYSCALL, CONFIG_BPF_JIT) in `/proc/config.gz` or inside the /boot/config file. Some of the cited names can differ according to the preferences of Linux distributions.
When these options are not set, you need to get the kernel source code from https://kernel.org or a kernel package from your distribution (the latter is preferred). Kernel compilation follows a well-defined pattern, but distributions can deliver their configuration files with different names.
Now follow these steps:
1. Copy the configuration file to `/usr/src/linux/.config`.
2. Select the necessary options: `make oldconfig`
3. Compile your kernel image: `make bzImage`
4. Compile your modules: `make modules`
5. Copy your new kernel image to the boot loader directory.
6. Install the new modules: `make modules_install`
7. Generate an initial ramdisk image (`initrd`) if necessary.
8. Update your boot loader.
#### Debug Filesystem
This thread needs to attach a tracepoint to monitor when a process schedules an exit event. To allow this specific feature, it is necessary to mount `debugfs` (`mount -t debugfs none /sys/kernel/debug/`).
### Configuration
#### File
The configuration file name for this integration is `ebpf.d/hardirq.conf`.
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config ebpf.d/hardirq.conf
```
#### Options
All options are defined inside section `[global]`.
<details><summary>Config options</summary>
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| update every | Data collection frequency. | | False |
| ebpf load mode | Define whether the plugin monitors only the function call (`entry`) or also monitors the return (`return`). | | False |
| lifetime | Set default lifetime for thread when enabled by cloud. | | False |
</details>
#### Examples
There are no configuration examples.

View File

@ -0,0 +1,127 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/ebpf.plugin/integrations/ebpf_mdflush.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/ebpf.plugin/metadata.yaml"
sidebar_label: "eBPF MDflush"
learn_status: "Published"
learn_rel_path: "Data Collection/eBPF"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# eBPF MDflush
Plugin: ebpf.plugin
Module: mdflush
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Monitor flush events on multi-device (md) software RAID arrays.
Attach tracing (kprobe, trampoline) to internal kernel functions according to the options used to compile the kernel.
This collector is only supported on the following platforms:
- Linux
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
The plugin needs setuid because it loads data inside the kernel. Netdata sets the necessary permissions during installation.
### Default Behavior
#### Auto-Detection
The plugin checks kernel compilation flags (CONFIG_KPROBES, CONFIG_BPF, CONFIG_BPF_SYSCALL, CONFIG_BPF_JIT) and presence of BTF files to decide which eBPF program will be attached.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
This thread adds overhead every time `md_flush_request` is called. The estimated additional time is between 90 and 200 ms per call on kernels that do not have BTF support.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per eBPF MDflush instance
Number of times `md_flush_request` was called since the last data collection.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| mdstat.mdstat_flush | disk | flushes |
## Alerts
There are no alerts configured by default for this integration.
## Setup
### Prerequisites
#### Compile kernel
Check if your kernel was compiled with the necessary options (CONFIG_KPROBES, CONFIG_BPF, CONFIG_BPF_SYSCALL, CONFIG_BPF_JIT) in `/proc/config.gz` or inside the /boot/config file. Some of the cited names can differ according to the preferences of Linux distributions.
When these options are not set, you need to get the kernel source code from https://kernel.org or a kernel package from your distribution (the latter is preferred). Kernel compilation follows a well-defined pattern, but distributions can deliver their configuration files with different names.
Now follow these steps:
1. Copy the configuration file to `/usr/src/linux/.config`.
2. Select the necessary options: `make oldconfig`
3. Compile your kernel image: `make bzImage`
4. Compile your modules: `make modules`
5. Copy your new kernel image to the boot loader directory.
6. Install the new modules: `make modules_install`
7. Generate an initial ramdisk image (`initrd`) if necessary.
8. Update your boot loader.
### Configuration
#### File
The configuration file name for this integration is `ebpf.d/mdflush.conf`.
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config ebpf.d/mdflush.conf
```
#### Options
All options are defined inside section `[global]`.
<details><summary>Config options</summary>
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| update every | Data collection frequency. | | False |
| ebpf load mode | Define whether the plugin monitors only the function call (`entry`) or also monitors the return (`return`). | | False |
| lifetime | Set default lifetime for thread when enabled by cloud. | | False |
</details>
#### Examples
There are no configuration examples.

View File

@ -0,0 +1,135 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/ebpf.plugin/integrations/ebpf_mount.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/ebpf.plugin/metadata.yaml"
sidebar_label: "eBPF Mount"
learn_status: "Published"
learn_rel_path: "Data Collection/eBPF"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# eBPF Mount
Plugin: ebpf.plugin
Module: mount
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Monitor calls to the mount and umount syscalls.
Attach tracing (kprobe, trampoline) to internal kernel functions according to the options used to compile the kernel.
This collector is only supported on the following platforms:
- Linux
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
The plugin needs setuid because it loads data inside the kernel. Netdata sets the necessary permissions during installation.
### Default Behavior
#### Auto-Detection
The plugin checks kernel compilation flags (CONFIG_KPROBES, CONFIG_BPF, CONFIG_BPF_SYSCALL, CONFIG_BPF_JIT, CONFIG_HAVE_SYSCALL_TRACEPOINTS), files inside debugfs, and presence of BTF files to decide which eBPF program will be attached.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
This thread adds overhead every time an internal kernel function monitored by it is called. The estimated additional time is between 90 and 200 ms per call on kernels that do not have BTF support.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per eBPF Mount instance
Calls to the mount and umount syscalls.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| mount_points.call | mount, umount | calls/s |
| mount_points.error | mount, umount | calls/s |
## Alerts
There are no alerts configured by default for this integration.
## Setup
### Prerequisites
#### Compile kernel
Check if your kernel was compiled with the necessary options (CONFIG_KPROBES, CONFIG_BPF, CONFIG_BPF_SYSCALL, CONFIG_BPF_JIT) in `/proc/config.gz` or inside the /boot/config file. Some of the cited names can differ according to the preferences of Linux distributions.
When these options are not set, you need to get the kernel source code from https://kernel.org or a kernel package from your distribution (the latter is preferred). Kernel compilation follows a well-defined pattern, but distributions can deliver their configuration files with different names.
Now follow these steps:
1. Copy the configuration file to `/usr/src/linux/.config`.
2. Select the necessary options: `make oldconfig`
3. Compile your kernel image: `make bzImage`
4. Compile your modules: `make modules`
5. Copy your new kernel image to the boot loader directory.
6. Install the new modules: `make modules_install`
7. Generate an initial ramdisk image (`initrd`) if necessary.
8. Update your boot loader.
#### Debug Filesystem
This thread needs to attach a tracepoint to monitor when a process schedules an exit event. To allow this specific feature, it is necessary to mount `debugfs` (`mount -t debugfs none /sys/kernel/debug/`).
### Configuration
#### File
The configuration file name for this integration is `ebpf.d/mount.conf`.
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config ebpf.d/mount.conf
```
#### Options
All options are defined inside section `[global]`.
<details><summary>Config options</summary>
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| update every | Data collection frequency. | | False |
| ebpf load mode | Define whether the plugin monitors only the function call (`entry`) or also monitors the return (`return`). | | False |
| ebpf type format | Define the file type to load an eBPF program. Three options are available: `legacy` (attach only `kprobe`), `co-re` (the plugin tries to use `trampoline` when available), and `auto` (the plugin checks the OS configuration before loading). | | False |
| ebpf co-re tracing | Select the attach method used by the plugin when `co-re` is defined in the previous option. Two options are available: `trampoline` (the option with the lowest overhead) and `probe` (the same as the legacy code). | | False |
| lifetime | Set default lifetime for thread when enabled by cloud. | | False |
</details>
#### Examples
There are no configuration examples.

View File

@ -0,0 +1,151 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/ebpf.plugin/integrations/ebpf_oomkill.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/ebpf.plugin/metadata.yaml"
sidebar_label: "eBPF OOMkill"
learn_status: "Published"
learn_rel_path: "Data Collection/eBPF"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# eBPF OOMkill
Plugin: ebpf.plugin
Module: oomkill
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Monitor applications killed when the system runs out of memory (OOM).
Attach tracepoint to internal kernel functions.
This collector is only supported on the following platforms:
- Linux
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
The plugin needs setuid because it loads data inside the kernel. Netdata sets the necessary permissions during installation.
### Default Behavior
#### Auto-Detection
The plugin checks kernel compilation flags (CONFIG_KPROBES, CONFIG_BPF, CONFIG_BPF_SYSCALL, CONFIG_BPF_JIT), files inside debugfs, and presence of BTF files to decide which eBPF program will be attached.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
This thread adds overhead every time an internal kernel function monitored by it is called.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per cgroup
These metrics show the cgroups/services that reached an OOM condition.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| cgroup.oomkills | cgroup name | kills |
| services.oomkills | a dimension per systemd service | kills |
### Per apps
These metrics show the application groups that reached an OOM condition.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| apps.oomkills | a dimension per app group | kills |
## Alerts
There are no alerts configured by default for this integration.
## Setup
### Prerequisites
#### Compile kernel
Check if your kernel was compiled with the necessary options (CONFIG_KPROBES, CONFIG_BPF, CONFIG_BPF_SYSCALL, CONFIG_BPF_JIT) in `/proc/config.gz` or inside the /boot/config file. Some of the cited names can differ according to the preferences of Linux distributions.
When these options are not set, you need to get the kernel source code from https://kernel.org or a kernel package from your distribution (the latter is preferred). Kernel compilation follows a well-defined pattern, but distributions can deliver their configuration files with different names.
Now follow these steps:
1. Copy the configuration file to `/usr/src/linux/.config`.
2. Select the necessary options: `make oldconfig`
3. Compile your kernel image: `make bzImage`
4. Compile your modules: `make modules`
5. Copy your new kernel image to the boot loader directory.
6. Install the new modules: `make modules_install`
7. Generate an initial ramdisk image (`initrd`) if necessary.
8. Update your boot loader.
#### Debug Filesystem
This thread needs to attach a tracepoint to monitor when a process schedules an exit event. To allow this specific feature, it is necessary to mount `debugfs` (`mount -t debugfs none /sys/kernel/debug/`).
### Configuration
#### File
The configuration file name for this integration is `ebpf.d/oomkill.conf`.
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config ebpf.d/oomkill.conf
```
#### Options
All options are defined inside section `[global]`. Overwriting the default configuration can reduce the number of I/O events.
<details><summary>Config options</summary>
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| update every | Data collection frequency. | | False |
| ebpf load mode | Define whether the plugin monitors only the function call (`entry`) or also monitors the return (`return`). | | False |
| lifetime | Set default lifetime for thread when enabled by cloud. | | False |
</details>
#### Examples
There are no configuration examples.

View File

@ -0,0 +1,106 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/ebpf.plugin/integrations/ebpf_process.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/ebpf.plugin/metadata.yaml"
sidebar_label: "eBPF Process"
learn_status: "Published"
learn_rel_path: "Data Collection/eBPF"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# eBPF Process
Plugin: ebpf.plugin
Module: process
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Monitor the plugin's internal memory usage.
Uses Netdata internal statistics to monitor how the plugin manages memory.
This collector is only supported on the following platforms:
- Linux
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
### Default Behavior
#### Auto-Detection
This integration doesn't support auto-detection.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
The default configuration for this integration is not expected to impose a significant performance impact on the system.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per eBPF Process instance
How the plugin is allocating memory.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| netdata.ebpf_aral_stat_size | memory | bytes |
| netdata.ebpf_aral_stat_alloc | aral | calls |
| netdata.ebpf_threads | total, running | threads |
| netdata.ebpf_load_methods | legacy, co-re | methods |
| netdata.ebpf_kernel_memory | memory_locked | bytes |
| netdata.ebpf_hash_tables_count | hash_table | hash tables |
| netdata.ebpf_hash_tables_insert_pid_elements | thread | rows |
| netdata.ebpf_hash_tables_remove_pid_elements | thread | rows |
## Alerts
There are no alerts configured by default for this integration.
## Setup
### Prerequisites
#### Netdata flags
To have these charts, you need to compile Netdata with the `NETDATA_DEV_MODE` flag (a hedged build sketch follows).
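A hedged sketch of one way to pass that flag when building from source (the `CFLAGS` route is an assumption; adapt it to your build workflow):
```bash
git clone https://github.com/netdata/netdata.git
cd netdata
# Pass the compile-time flag named above; the exact mechanism may differ per build setup.
CFLAGS="-DNETDATA_DEV_MODE" ./netdata-installer.sh --dont-wait
```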
### Configuration
#### File
There is no configuration file.
#### Options
There are no configuration options.
#### Examples
There are no configuration examples.

View File

@ -0,0 +1,178 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/ebpf.plugin/integrations/ebpf_processes.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/ebpf.plugin/metadata.yaml"
sidebar_label: "eBPF Processes"
learn_status: "Published"
learn_rel_path: "Data Collection/eBPF"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# eBPF Processes
Plugin: ebpf.plugin
Module: processes
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Monitor calls to the functions that create tasks (threads and processes) inside the Linux kernel.
Attach tracing (kprobe or tracepoint, and trampoline) to internal kernel functions.
This collector is only supported on the following platforms:
- Linux
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
The plugin needs setuid because it loads data inside the kernel. Netdata sets the necessary permissions during installation.
### Default Behavior
#### Auto-Detection
The plugin checks kernel compilation flags (CONFIG_KPROBES, CONFIG_BPF, CONFIG_BPF_SYSCALL, CONFIG_BPF_JIT), files inside debugfs, and presence of BTF files to decide which eBPF program will be attached.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
This thread adds overhead every time an internal kernel function monitored by it is called.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per eBPF Processes instance
These metrics show the total number of calls to functions inside the kernel.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| system.process_thread | process | calls/s |
| system.process_status | process, zombie | difference |
| system.exit | process | calls/s |
| system.task_error | task | calls/s |
### Per apps
These metrics show grouped information per apps group.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| apps.process_create | a dimension per app group | calls/s |
| apps.thread_create | a dimension per app group | calls/s |
| apps.task_exit | a dimension per app group | calls/s |
| apps.task_close | a dimension per app group | calls/s |
| apps.task_error | a dimension per app group | calls/s |
### Per cgroup
These metrics show grouped information per cgroup/service.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| cgroup.process_create | process | calls/s |
| cgroup.thread_create | thread | calls/s |
| cgroup.task_exit | exit | calls/s |
| cgroup.task_close | process | calls/s |
| cgroup.task_error | process | calls/s |
| services.process_create | a dimension per systemd service | calls/s |
| services.thread_create | a dimension per systemd service | calls/s |
| services.task_close | a dimension per systemd service | calls/s |
| services.task_exit | a dimension per systemd service | calls/s |
| services.task_error | a dimension per systemd service | calls/s |
## Alerts
There are no alerts configured by default for this integration.
## Setup
### Prerequisites
#### Compile kernel
Check if your kernel was compiled with the necessary options (CONFIG_KPROBES, CONFIG_BPF, CONFIG_BPF_SYSCALL, CONFIG_BPF_JIT) in `/proc/config.gz` or inside your `/boot/config` file. Some of the cited names can differ according to the preferences of Linux distributions.
When the options are not set, it is necessary to get the kernel source code from https://kernel.org or a kernel package from your distribution, the latter being preferred. The kernel compilation has a well defined pattern, but distributions can deliver their configuration files
with different names.
Now follow these steps:
1. Copy the configuration file to `/usr/src/linux/.config`.
2. Select the necessary options: `make oldconfig`
3. Compile your kernel image: `make bzImage`
4. Compile your modules: `make modules`
5. Copy your new kernel image to the boot loader directory.
6. Install the new modules: `make modules_install`
7. Generate an initial ramdisk image (`initrd`) if it is necessary.
8. Update your boot loader.
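For instance, a quick way to verify the flags on a running system (assuming your distribution exposes one of these files):

```bash
# Look for the required options in /proc/config.gz or the boot config.
zgrep -E 'CONFIG_(KPROBES|BPF|BPF_SYSCALL|BPF_JIT)=' /proc/config.gz 2>/dev/null \
  || grep -E 'CONFIG_(KPROBES|BPF|BPF_SYSCALL|BPF_JIT)=' "/boot/config-$(uname -r)"
```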
#### Debug Filesystem
This thread needs to attach a tracepoint to monitor when a process schedules an exit event. To allow this specific feature, it is necessary to mount `debugfs` (`mount -t debugfs none /sys/kernel/debug/`).
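For example, you can check whether `debugfs` is already mounted and mount it only when needed:

```bash
# Mount debugfs only if it is not already mounted.
mountpoint -q /sys/kernel/debug || sudo mount -t debugfs none /sys/kernel/debug
```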
### Configuration
#### File
The configuration file name for this integration is `ebpf.d/process.conf`.
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config ebpf.d/process.conf
```
#### Options
All options are defined inside the `[global]` section.
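As a minimal illustrative sketch, a tuned `[global]` section could look like this (the values below are hypothetical, not the shipped defaults):

```toml
[global]
    update every = 10
    apps = yes
    cgroups = no
    pid table size = 32768
```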
<details><summary>Config options</summary>
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| update every | Data collection frequency. | | False |
| ebpf load mode | Define whether the plugin will monitor the call (`entry`) to the functions or also monitor the return (`return`). | | False |
| apps | Enable or disable integration with apps.plugin | | False |
| cgroups | Enable or disable integration with cgroup.plugin | | False |
| pid table size | Number of elements stored inside hash tables used to monitor calls per PID. | | False |
| ebpf type format | Define the file type used to load an eBPF program. Three options are available: `legacy` (attach only `kprobe`), `co-re` (the plugin tries to use `trampoline` when available), and `auto` (the plugin checks the OS configuration before loading). | | False |
| ebpf co-re tracing | Select the attach method used by the plugin when `co-re` is defined in the previous option. Two options are available: `trampoline` (the option with the lowest overhead) and `probe` (the same as the legacy code). This plugin will always try to attach a tracepoint, so the option here only impacts the function used to monitor task (thread and process) creation. | | False |
| maps per core | Define how the plugin will load its hash maps. When enabled (`yes`), the plugin loads one hash table per core instead of keeping the information centralized. | | False |
| lifetime | Set the default lifetime for the thread when enabled by cloud. | | False |
</details>
#### Examples
There are no configuration examples.

View File

@ -0,0 +1,176 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/ebpf.plugin/integrations/ebpf_shm.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/ebpf.plugin/metadata.yaml"
sidebar_label: "eBPF SHM"
learn_status: "Published"
learn_rel_path: "Data Collection/eBPF"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# eBPF SHM
Plugin: ebpf.plugin
Module: shm
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Monitor the syscalls responsible for manipulating shared memory.
Attach tracing (kprobe, trampoline) to internal kernel functions according to the options used to compile the kernel.
This collector is only supported on the following platforms:
- Linux
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
The plugin needs setuid because it loads data inside the kernel. Netdata sets the necessary permissions during installation.
### Default Behavior
#### Auto-Detection
The plugin checks the kernel compilation flags (CONFIG_KPROBES, CONFIG_BPF, CONFIG_BPF_SYSCALL, CONFIG_BPF_JIT) and the presence of BTF files to decide which eBPF program will be attached.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
This thread adds overhead every time an internal kernel function monitored by this thread is called. The estimated additional time is between 90 and 200 ms per call on kernels that do not have BTF support.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per cgroup
These metrics show grouped information per cgroup/service.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| cgroup.shmget | get | calls/s |
| cgroup.shmat | at | calls/s |
| cgroup.shmdt | dt | calls/s |
| cgroup.shmctl | ctl | calls/s |
| services.shmget | a dimension per systemd service | calls/s |
| services.shmat | a dimension per systemd service | calls/s |
| services.shmdt | a dimension per systemd service | calls/s |
| services.shmctl | a dimension per systemd service | calls/s |
### Per apps
These metrics show grouped information per apps group.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| apps.shmget_call | a dimension per app group | calls/s |
| apps.shmat_call | a dimension per app group | calls/s |
| apps.shmdt_call | a dimension per app group | calls/s |
| apps.shmctl_call | a dimension per app group | calls/s |
### Per eBPF SHM instance
These metrics show the number of calls for the specified syscalls.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| system.shared_memory_calls | get, at, dt, ctl | calls/s |
## Alerts
There are no alerts configured by default for this integration.
## Setup
### Prerequisites
#### Compile kernel
Check if your kernel was compiled with the necessary options (CONFIG_KPROBES, CONFIG_BPF, CONFIG_BPF_SYSCALL, CONFIG_BPF_JIT) in `/proc/config.gz` or inside your `/boot/config` file. Some of the cited names can differ according to the preferences of Linux distributions.
When the options are not set, it is necessary to get the kernel source code from https://kernel.org or a kernel package from your distribution, the latter being preferred. The kernel compilation has a well defined pattern, but distributions can deliver their configuration files
with different names.
Now follow these steps:
1. Copy the configuration file to `/usr/src/linux/.config`.
2. Select the necessary options: `make oldconfig`
3. Compile your kernel image: `make bzImage`
4. Compile your modules: `make modules`
5. Copy your new kernel image to the boot loader directory.
6. Install the new modules: `make modules_install`
7. Generate an initial ramdisk image (`initrd`) if it is necessary.
8. Update your boot loader.
#### Debug Filesystem
This thread needs to attach a tracepoint to monitor when a process schedules an exit event. To allow this specific feature, it is necessary to mount `debugfs` (`mount -t debugfs none /sys/kernel/debug/`).
### Configuration
#### File
The configuration file name for this integration is `ebpf.d/shm.conf`.
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config ebpf.d/shm.conf
```
#### Options
This configuration file has two different sections. The `[global]` section overwrites the default options, while `[syscalls]` allows the user to select the syscalls to monitor.
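For example, a hypothetical configuration that collects every 10 seconds and disables `shmctl` monitoring might look like this (illustrative values only):

```toml
[global]
    update every = 10

[syscalls]
    shmget = yes
    shmat = yes
    shmdt = yes
    shmctl = no
```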
<details><summary>Config options</summary>
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| update every | Data collection frequency. | | False |
| ebpf load mode | Define whether the plugin will monitor the call (`entry`) to the functions or also monitor the return (`return`). | | False |
| apps | Enable or disable integration with apps.plugin | | False |
| cgroups | Enable or disable integration with cgroup.plugin | | False |
| pid table size | Number of elements stored inside hash tables used to monitor calls per PID. | | False |
| ebpf type format | Define the file type used to load an eBPF program. Three options are available: `legacy` (attach only `kprobe`), `co-re` (the plugin tries to use `trampoline` when available), and `auto` (the plugin checks the OS configuration before loading). | | False |
| ebpf co-re tracing | Select the attach method used by the plugin when `co-re` is defined in the previous option. Two options are available: `trampoline` (the option with the lowest overhead) and `probe` (the same as the legacy code). | | False |
| maps per core | Define how the plugin will load its hash maps. When enabled (`yes`), the plugin loads one hash table per core instead of keeping the information centralized. | | False |
| lifetime | Set the default lifetime for the thread when enabled by cloud. | | False |
| shmget | Enable or disable monitoring for syscall `shmget` | | False |
| shmat | Enable or disable monitoring for syscall `shmat` | | False |
| shmdt | Enable or disable monitoring for syscall `shmdt` | | False |
| shmctl | Enable or disable monitoring for syscall `shmctl` | | False |
</details>
#### Examples
There are no configuration examples.

View File

@ -0,0 +1,193 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/ebpf.plugin/integrations/ebpf_socket.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/ebpf.plugin/metadata.yaml"
sidebar_label: "eBPF Socket"
learn_status: "Published"
learn_rel_path: "Data Collection/eBPF"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# eBPF Socket
Plugin: ebpf.plugin
Module: socket
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Monitor bandwidth consumption per application for the TCP and UDP protocols.
Attach tracing (kprobe, trampoline) to internal kernel functions according to the options used to compile the kernel.
This collector is only supported on the following platforms:
- Linux
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
The plugin needs setuid because it loads data inside the kernel. Netdata sets the necessary permissions during installation.
### Default Behavior
#### Auto-Detection
The plugin checks the kernel compilation flags (CONFIG_KPROBES, CONFIG_BPF, CONFIG_BPF_SYSCALL, CONFIG_BPF_JIT) and the presence of BTF files to decide which eBPF program will be attached.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
This thread adds overhead every time an internal kernel function monitored by this thread is called. The estimated additional time is between 90 and 200 ms per call on kernels that do not have BTF support.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per eBPF Socket instance
These metrics show the total number of calls to functions inside the kernel.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| ip.inbound_conn | connection_tcp | connections/s |
| ip.tcp_outbound_conn | received | connections/s |
| ip.tcp_functions | received, send, closed | calls/s |
| ip.total_tcp_bandwidth | received, send | kilobits/s |
| ip.tcp_error | received, send | calls/s |
| ip.tcp_retransmit | retransmited | calls/s |
| ip.udp_functions | received, send | calls/s |
| ip.total_udp_bandwidth | received, send | kilobits/s |
| ip.udp_error | received, send | calls/s |
### Per apps
These metrics show grouped information per apps group.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| apps.outbound_conn_v4 | a dimension per app group | connections/s |
| apps.outbound_conn_v6 | a dimension per app group | connections/s |
| apps.total_bandwidth_sent | a dimension per app group | kilobits/s |
| apps.total_bandwidth_recv | a dimension per app group | kilobits/s |
| apps.bandwidth_tcp_send | a dimension per app group | calls/s |
| apps.bandwidth_tcp_recv | a dimension per app group | calls/s |
| apps.bandwidth_tcp_retransmit | a dimension per app group | calls/s |
| apps.bandwidth_udp_send | a dimension per app group | calls/s |
| apps.bandwidth_udp_recv | a dimension per app group | calls/s |
| services.net_conn_ipv4 | a dimension per systemd service | connections/s |
### Per cgroup
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| cgroup.net_conn_ipv4 | connected_v4 | connections/s |
| cgroup.net_conn_ipv6 | connected_v6 | connections/s |
| cgroup.net_bytes_recv | received | calls/s |
| cgroup.net_bytes_sent | sent | calls/s |
| cgroup.net_tcp_recv | received | calls/s |
| cgroup.net_tcp_send | sent | calls/s |
| cgroup.net_retransmit | retransmitted | calls/s |
| cgroup.net_udp_send | sent | calls/s |
| cgroup.net_udp_recv | received | calls/s |
| services.net_conn_ipv6 | a dimension per systemd service | connections/s |
| services.net_bytes_recv | a dimension per systemd service | kilobits/s |
| services.net_bytes_sent | a dimension per systemd service | kilobits/s |
| services.net_tcp_recv | a dimension per systemd service | calls/s |
| services.net_tcp_send | a dimension per systemd service | calls/s |
| services.net_tcp_retransmit | a dimension per systemd service | calls/s |
| services.net_udp_send | a dimension per systemd service | calls/s |
| services.net_udp_recv | a dimension per systemd service | calls/s |
## Alerts
There are no alerts configured by default for this integration.
## Setup
### Prerequisites
#### Compile kernel
Check if your kernel was compiled with the necessary options (CONFIG_KPROBES, CONFIG_BPF, CONFIG_BPF_SYSCALL, CONFIG_BPF_JIT) in `/proc/config.gz` or inside your `/boot/config` file. Some of the cited names can differ according to the preferences of Linux distributions.
When the options are not set, it is necessary to get the kernel source code from https://kernel.org or a kernel package from your distribution, the latter being preferred. The kernel compilation has a well defined pattern, but distributions can deliver their configuration files
with different names.
Now follow these steps:
1. Copy the configuration file to `/usr/src/linux/.config`.
2. Select the necessary options: `make oldconfig`
3. Compile your kernel image: `make bzImage`
4. Compile your modules: `make modules`
5. Copy your new kernel image to the boot loader directory.
6. Install the new modules: `make modules_install`
7. Generate an initial ramdisk image (`initrd`) if it is necessary.
8. Update your boot loader.
### Configuration
#### File
The configuration file name for this integration is `ebpf.d/network.conf`.
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config ebpf.d/network.conf
```
#### Options
All options are defined inside the `[global]` section. Options inside the `network connections` section are currently ignored.
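As an illustrative sketch, tuning the hash table sizes could look like the following (the values are hypothetical, not the shipped defaults):

```toml
[global]
    bandwidth table size = 16384
    ipv4 connection table size = 16384
    ipv6 connection table size = 16384
    udp connection table size = 4096
```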
<details><summary>Config options</summary>
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| update every | Data collection frequency. | | False |
| ebpf load mode | Define whether the plugin will monitor the call (`entry`) to the functions or also monitor the return (`return`). | | False |
| apps | Enable or disable integration with apps.plugin | | False |
| cgroups | Enable or disable integration with cgroup.plugin | | False |
| bandwidth table size | Number of elements stored inside hash tables used to monitor calls per PID. | | False |
| ipv4 connection table size | Number of elements stored inside hash tables used to monitor calls per IPv4 connection. | | False |
| ipv6 connection table size | Number of elements stored inside hash tables used to monitor calls per IPv6 connection. | | False |
| udp connection table size | Number of temporary elements stored inside hash tables used to monitor UDP connections. | | False |
| ebpf type format | Define the file type used to load an eBPF program. Three options are available: `legacy` (attach only `kprobe`), `co-re` (the plugin tries to use `trampoline` when available), and `auto` (the plugin checks the OS configuration before loading). | | False |
| ebpf co-re tracing | Select the attach method used by the plugin when `co-re` is defined in the previous option. Two options are available: `trampoline` (the option with the lowest overhead) and `probe` (the same as the legacy code). | | False |
| maps per core | Define how the plugin will load its hash maps. When enabled (`yes`), the plugin loads one hash table per core instead of keeping the information centralized. | | False |
| lifetime | Set the default lifetime for the thread when enabled by cloud. | | False |
</details>
#### Examples
There are no configuration examples.

View File

@ -0,0 +1,132 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/ebpf.plugin/integrations/ebpf_softirq.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/ebpf.plugin/metadata.yaml"
sidebar_label: "eBPF SoftIRQ"
learn_status: "Published"
learn_rel_path: "Data Collection/eBPF"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# eBPF SoftIRQ
Plugin: ebpf.plugin
Module: softirq
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Monitor latency for each SoftIRQ available.
Attach kprobe to internal kernel functions.
This collector is only supported on the following platforms:
- Linux
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
The plugin needs setuid because it loads data inside the kernel. Netdata sets the necessary permissions during installation.
### Default Behavior
#### Auto-Detection
The plugin checks the kernel compilation flags (CONFIG_KPROBES, CONFIG_BPF, CONFIG_BPF_SYSCALL, CONFIG_BPF_JIT), files inside debugfs, and the presence of BTF files to decide which eBPF program will be attached.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
This thread adds overhead every time an internal kernel function monitored by this thread is called.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per eBPF SoftIRQ instance
These metrics show the latest observed latency for each softIRQ available on the host.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| system.softirq_latency | soft IRQs | milliseconds |
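For instance, assuming a default agent listening on port 19999, you can read this chart through the Netdata API:

```bash
# Fetch the last 5 points of the softirq latency chart as JSON.
curl -s "http://localhost:19999/api/v1/data?chart=system.softirq_latency&points=5&format=json"
```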
## Alerts
There are no alerts configured by default for this integration.
## Setup
### Prerequisites
#### Compile kernel
Check if your kernel was compiled with the necessary options (CONFIG_KPROBES, CONFIG_BPF, CONFIG_BPF_SYSCALL, CONFIG_BPF_JIT) in `/proc/config.gz` or inside your `/boot/config` file. Some of the cited names can differ according to the preferences of Linux distributions.
When the options are not set, it is necessary to get the kernel source code from https://kernel.org or a kernel package from your distribution, the latter being preferred. The kernel compilation has a well defined pattern, but distributions can deliver their configuration files
with different names.
Now follow these steps:
1. Copy the configuration file to `/usr/src/linux/.config`.
2. Select the necessary options: `make oldconfig`
3. Compile your kernel image: `make bzImage`
4. Compile your modules: `make modules`
5. Copy your new kernel image to the boot loader directory.
6. Install the new modules: `make modules_install`
7. Generate an initial ramdisk image (`initrd`) if it is necessary.
8. Update your boot loader.
#### Debug Filesystem
This thread needs to attach a tracepoint to monitor when a process schedules an exit event. To allow this specific feature, it is necessary to mount `debugfs` (`mount -t debugfs none /sys/kernel/debug/`).
### Configuration
#### File
The configuration file name for this integration is `ebpf.d/softirq.conf`.
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config ebpf.d/softirq.conf
```
#### Options
All options are defined inside the `[global]` section.
<details><summary>Config options</summary>
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| update every | Data collection frequency. | | False |
| ebpf load mode | Define whether the plugin will monitor the call (`entry`) to the functions or also monitor the return (`return`). | | False |
| lifetime | Set the default lifetime for the thread when enabled by cloud. | | False |
</details>
#### Examples
There are no configuration examples.

View File

@ -0,0 +1,161 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/ebpf.plugin/integrations/ebpf_swap.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/ebpf.plugin/metadata.yaml"
sidebar_label: "eBPF SWAP"
learn_status: "Published"
learn_rel_path: "Data Collection/eBPF"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# eBPF SWAP
Plugin: ebpf.plugin
Module: swap
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Monitor swap I/O events and the applications executing them.
Attach tracing (kprobe, trampoline) to internal kernel functions according to the options used to compile the kernel.
This collector is only supported on the following platforms:
- Linux
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
The plugin needs setuid because it loads data inside the kernel. Netdata sets the necessary permissions during installation.
### Default Behavior
#### Auto-Detection
The plugin checks the kernel compilation flags (CONFIG_KPROBES, CONFIG_BPF, CONFIG_BPF_SYSCALL, CONFIG_BPF_JIT) and the presence of BTF files to decide which eBPF program will be attached.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
This thread adds overhead every time an internal kernel function monitored by this thread is called. The estimated additional time is between 90 and 200 ms per call on kernels that do not have BTF support.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per cgroup
These metrics show grouped information per cgroup/service.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| cgroup.swap_read | read | calls/s |
| cgroup.swap_write | write | calls/s |
| services.swap_read | a dimension per systemd service | calls/s |
| services.swap_write | a dimension per systemd service | calls/s |
### Per apps
These metrics show grouped information per apps group.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| apps.swap_read_call | a dimension per app group | calls/s |
| apps.swap_write_call | a dimension per app group | calls/s |
### Per eBPF SWAP instance
These metrics show the total number of calls to functions inside the kernel.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| mem.swapcalls | write, read | calls/s |
## Alerts
There are no alerts configured by default for this integration.
## Setup
### Prerequisites
#### Compile kernel
Check if your kernel was compiled with the necessary options (CONFIG_KPROBES, CONFIG_BPF, CONFIG_BPF_SYSCALL, CONFIG_BPF_JIT) in `/proc/config.gz` or inside your `/boot/config` file. Some of the cited names can differ according to the preferences of Linux distributions.
When the options are not set, it is necessary to get the kernel source code from https://kernel.org or a kernel package from your distribution, the latter being preferred. The kernel compilation has a well defined pattern, but distributions can deliver their configuration files
with different names.
Now follow these steps:
1. Copy the configuration file to `/usr/src/linux/.config`.
2. Select the necessary options: `make oldconfig`
3. Compile your kernel image: `make bzImage`
4. Compile your modules: `make modules`
5. Copy your new kernel image to the boot loader directory.
6. Install the new modules: `make modules_install`
7. Generate an initial ramdisk image (`initrd`) if it is necessary.
8. Update your boot loader.
### Configuration
#### File
The configuration file name for this integration is `ebpf.d/swap.conf`.
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config ebpf.d/swap.conf
```
#### Options
All options are defined inside the `[global]` section.
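A hypothetical sketch enabling per-core maps with a smaller PID table (illustrative values only):

```toml
[global]
    pid table size = 16384
    maps per core = yes
```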
<details><summary>Config options</summary>
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| update every | Data collection frequency. | | False |
| ebpf load mode | Define whether the plugin will monitor the call (`entry`) to the functions or also monitor the return (`return`). | | False |
| apps | Enable or disable integration with apps.plugin | | False |
| cgroups | Enable or disable integration with cgroup.plugin | | False |
| pid table size | Number of elements stored inside hash tables used to monitor calls per PID. | | False |
| ebpf type format | Define the file type used to load an eBPF program. Three options are available: `legacy` (attach only `kprobe`), `co-re` (the plugin tries to use `trampoline` when available), and `auto` (the plugin checks the OS configuration before loading). | | False |
| ebpf co-re tracing | Select the attach method used by the plugin when `co-re` is defined in the previous option. Two options are available: `trampoline` (the option with the lowest overhead) and `probe` (the same as the legacy code). | | False |
| maps per core | Define how the plugin will load its hash maps. When enabled (`yes`), the plugin loads one hash table per core instead of keeping the information centralized. | | False |
| lifetime | Set the default lifetime for the thread when enabled by cloud. | | False |
</details>
#### Examples
There are no configuration examples.

View File

@ -0,0 +1,152 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/ebpf.plugin/integrations/ebpf_sync.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/ebpf.plugin/metadata.yaml"
sidebar_label: "eBPF Sync"
learn_status: "Published"
learn_rel_path: "Data Collection/eBPF"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# eBPF Sync
Plugin: ebpf.plugin
Module: sync
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Monitor the syscalls responsible for moving data from memory to the storage device.
Attach tracing (kprobe, trampoline) to internal kernel functions according to the options used to compile the kernel.
This collector is only supported on the following platforms:
- Linux
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
The plugin needs setuid because it loads data inside the kernel. Netdata sets the necessary permissions during installation.
### Default Behavior
#### Auto-Detection
The plugin checks the kernel compilation flags (CONFIG_KPROBES, CONFIG_BPF, CONFIG_BPF_SYSCALL, CONFIG_BPF_JIT, CONFIG_HAVE_SYSCALL_TRACEPOINTS), files inside debugfs, and the presence of BTF files to decide which eBPF program will be attached.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
This thread adds overhead every time an internal kernel function monitored by this thread is called. The estimated additional time is between 90 and 200 ms per call on kernels that do not have BTF support.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per eBPF Sync instance
These metrics show the total number of calls to functions inside the kernel.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| mem.file_sync | fsync, fdatasync | calls/s |
| mem.meory_map | msync | calls/s |
| mem.sync | sync, syncfs | calls/s |
| mem.file_segment | sync_file_range | calls/s |
## Alerts
The following alerts are available:
| Alert name | On metric | Description |
|:------------|:----------|:------------|
| [ sync_freq ](https://github.com/netdata/netdata/blob/master/health/health.d/synchronization.conf) | mem.sync | number of sync() system calls. Every call causes all pending modifications to filesystem metadata and cached file data to be written to the underlying filesystems. |
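If you want to tune this alert, its definition lives in `health.d/synchronization.conf`, which you can open with the same `edit-config` helper used for collector files below:

```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config health.d/synchronization.conf
```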
## Setup
### Prerequisites
#### Compile kernel
Check if your kernel was compiled with the necessary options (CONFIG_KPROBES, CONFIG_BPF, CONFIG_BPF_SYSCALL, CONFIG_BPF_JIT) in `/proc/config.gz` or inside your `/boot/config` file. Some of the cited names can differ according to the preferences of Linux distributions.
When the options are not set, it is necessary to get the kernel source code from https://kernel.org or a kernel package from your distribution, the latter being preferred. The kernel compilation has a well defined pattern, but distributions can deliver their configuration files
with different names.
Now follow these steps:
1. Copy the configuration file to `/usr/src/linux/.config`.
2. Select the necessary options: `make oldconfig`
3. Compile your kernel image: `make bzImage`
4. Compile your modules: `make modules`
5. Copy your new kernel image to the boot loader directory.
6. Install the new modules: `make modules_install`
7. Generate an initial ramdisk image (`initrd`) if it is necessary.
8. Update your boot loader.
#### Debug Filesystem
This thread needs to attach a tracepoint to monitor when a process schedules an exit event. To allow this specific feature, it is necessary to mount `debugfs` (`mount -t debugfs none /sys/kernel/debug`).
### Configuration
#### File
The configuration file name for this integration is `ebpf.d/sync.conf`.
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config ebpf.d/sync.conf
```
#### Options
This configuration file has two different sections. The `[global]` section overwrites the default options, while `[syscalls]` allows the user to select the syscalls to monitor.
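For example, a hypothetical `[syscalls]` section that keeps `sync` monitoring but disables some of the other calls (illustrative values only):

```toml
[syscalls]
    sync = yes
    msync = no
    sync_file_range = no
```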
<details><summary>Config options</summary>
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| update every | Data collection frequency. | | False |
| ebpf load mode | Define whether the plugin will monitor the call (`entry`) to the functions or also monitor the return (`return`). | | False |
| apps | Enable or disable integration with apps.plugin | | False |
| cgroups | Enable or disable integration with cgroup.plugin | | False |
| pid table size | Number of elements stored inside hash tables used to monitor calls per PID. | | False |
| ebpf type format | Define the file type used to load an eBPF program. Three options are available: `legacy` (attach only `kprobe`), `co-re` (the plugin tries to use `trampoline` when available), and `auto` (the plugin checks the OS configuration before loading). | | False |
| ebpf co-re tracing | Select the attach method used by the plugin when `co-re` is defined in the previous option. Two options are available: `trampoline` (the option with the lowest overhead) and `probe` (the same as the legacy code). | | False |
| maps per core | Define how the plugin will load its hash maps. When enabled (`yes`), the plugin loads one hash table per core instead of keeping the information centralized. | | False |
| lifetime | Set the default lifetime for the thread when enabled by cloud. | | False |
| sync | Enable or disable monitoring for syscall `sync` | | False |
| msync | Enable or disable monitoring for syscall `msync` | | False |
| fsync | Enable or disable monitoring for syscall `fsync` | | False |
| fdatasync | Enable or disable monitoring for syscall `fdatasync` | | False |
| syncfs | Enable or disable monitoring for syscall `syncfs` | | False |
| sync_file_range | Enable or disable monitoring for syscall `sync_file_range` | | False |
</details>
#### Examples
There are no configuration examples.

View File

@ -0,0 +1,203 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/ebpf.plugin/integrations/ebpf_vfs.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/ebpf.plugin/metadata.yaml"
sidebar_label: "eBPF VFS"
learn_status: "Published"
learn_rel_path: "Data Collection/eBPF"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# eBPF VFS
Plugin: ebpf.plugin
Module: vfs
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Monitor I/O events on the Linux Virtual Filesystem (VFS).
Attach tracing (kprobe, trampoline) to internal kernel functions according to the options used to compile the kernel.
This collector is only supported on the following platforms:
- Linux
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
The plugin needs setuid because it loads data inside the kernel. Netdata sets the necessary permissions during installation.
### Default Behavior
#### Auto-Detection
The plugin checks the kernel compilation flags (CONFIG_KPROBES, CONFIG_BPF, CONFIG_BPF_SYSCALL, CONFIG_BPF_JIT) and the presence of BTF files to decide which eBPF program will be attached.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
This thread adds overhead every time an internal kernel function monitored by this thread is called. The estimated additional time is between 90 and 200 ms per call on kernels that do not have BTF support.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per cgroup
These metrics show grouped information per cgroup/service.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| cgroup.vfs_unlink | delete | calls/s |
| cgroup.vfs_write | write | calls/s |
| cgroup.vfs_write_error | write | calls/s |
| cgroup.vfs_read | read | calls/s |
| cgroup.vfs_read_error | read | calls/s |
| cgroup.vfs_write_bytes | write | bytes/s |
| cgroup.vfs_read_bytes | read | bytes/s |
| cgroup.vfs_fsync | fsync | calls/s |
| cgroup.vfs_fsync_error | fsync | calls/s |
| cgroup.vfs_open | open | calls/s |
| cgroup.vfs_open_error | open | calls/s |
| cgroup.vfs_create | create | calls/s |
| cgroup.vfs_create_error | create | calls/s |
| services.vfs_unlink | a dimension per systemd service | calls/s |
| services.vfs_write | a dimension per systemd service | calls/s |
| services.vfs_write_error | a dimension per systemd service | calls/s |
| services.vfs_read | a dimension per systemd service | calls/s |
| services.vfs_read_error | a dimension per systemd service | calls/s |
| services.vfs_write_bytes | a dimension per systemd service | bytes/s |
| services.vfs_read_bytes | a dimension per systemd service | bytes/s |
| services.vfs_fsync | a dimension per systemd service | calls/s |
| services.vfs_fsync_error | a dimension per systemd service | calls/s |
| services.vfs_open | a dimension per systemd service | calls/s |
| services.vfs_open_error | a dimension per systemd service | calls/s |
| services.vfs_create | a dimension per systemd service | calls/s |
| services.vfs_create_error | a dimension per systemd service | calls/s |
### Per eBPF VFS instance
These metrics show the total number of calls to VFS functions inside the kernel.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| filesystem.vfs_deleted_objects | delete | calls/s |
| filesystem.vfs_io | read, write | calls/s |
| filesystem.vfs_io_bytes | read, write | bytes/s |
| filesystem.vfs_io_error | read, write | calls/s |
| filesystem.vfs_fsync | fsync | calls/s |
| filesystem.vfs_fsync_error | fsync | calls/s |
| filesystem.vfs_open | open | calls/s |
| filesystem.vfs_open_error | open | calls/s |
| filesystem.vfs_create | create | calls/s |
| filesystem.vfs_create_error | create | calls/s |
### Per apps
These metrics show grouped information per apps group.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| apps.file_deleted | a dimension per app group | calls/s |
| apps.vfs_write_call | a dimension per app group | calls/s |
| apps.vfs_write_error | a dimension per app group | calls/s |
| apps.vfs_read_call | a dimension per app group | calls/s |
| apps.vfs_read_error | a dimension per app group | calls/s |
| apps.vfs_write_bytes | a dimension per app group | bytes/s |
| apps.vfs_read_bytes | a dimension per app group | bytes/s |
| apps.vfs_fsync | a dimension per app group | calls/s |
| apps.vfs_fsync_error | a dimension per app group | calls/s |
| apps.vfs_open | a dimension per app group | calls/s |
| apps.vfs_open_error | a dimension per app group | calls/s |
| apps.vfs_create | a dimension per app group | calls/s |
| apps.vfs_create_error | a dimension per app group | calls/s |
## Alerts
There are no alerts configured by default for this integration.
## Setup
### Prerequisites
#### Compile kernel
Check if your kernel was compiled with the necessary options (CONFIG_KPROBES, CONFIG_BPF, CONFIG_BPF_SYSCALL, CONFIG_BPF_JIT) in `/proc/config.gz` or inside your `/boot/config` file. Some of the cited names can differ according to the preferences of Linux distributions.
When the options are not set, it is necessary to get the kernel source code from https://kernel.org or a kernel package from your distribution, the latter being preferred. The kernel compilation has a well defined pattern, but distributions can deliver their configuration files
with different names.
Now follow these steps:
1. Copy the configuration file to `/usr/src/linux/.config`.
2. Select the necessary options: `make oldconfig`
3. Compile your kernel image: `make bzImage`
4. Compile your modules: `make modules`
5. Copy your new kernel image to the boot loader directory.
6. Install the new modules: `make modules_install`
7. Generate an initial ramdisk image (`initrd`) if it is necessary.
8. Update your boot loader.
### Configuration
#### File
The configuration file name for this integration is `ebpf.d/vfs.conf`.
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config ebpf.d/vfs.conf
```
#### Options
All options are defined inside the `[global]` section.
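As an illustration, forcing the CO-RE loader with trampoline attachment would look like this (hypothetical values, not the shipped defaults):

```toml
[global]
    ebpf type format = co-re
    ebpf co-re tracing = trampoline
```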
<details><summary>Config options</summary>
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| update every | Data collection frequency. | | False |
| ebpf load mode | Define whether the plugin will monitor the call (`entry`) to the functions or also monitor the return (`return`). | | False |
| apps | Enable or disable integration with apps.plugin | | False |
| cgroups | Enable or disable integration with cgroup.plugin | | False |
| pid table size | Number of elements stored inside hash tables used to monitor calls per PID. | | False |
| ebpf type format | Define the file type used to load an eBPF program. Three options are available: `legacy` (attach only `kprobe`), `co-re` (the plugin tries to use `trampoline` when available), and `auto` (the plugin checks the OS configuration before loading). | | False |
| ebpf co-re tracing | Select the attach method used by the plugin when `co-re` is defined in the previous option. Two options are available: `trampoline` (the option with the lowest overhead) and `probe` (the same as the legacy code). | | False |
| maps per core | Define how the plugin will load its hash maps. When enabled (`yes`), the plugin loads one hash table per core instead of keeping the information centralized. | | False |
| lifetime | Set the default lifetime for the thread when enabled by cloud. | | False |
</details>
#### Examples
There are no configuration examples.

View File

@ -0,0 +1,106 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/freebsd.plugin/integrations/dev.cpu.0.freq.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/freebsd.plugin/metadata.yaml"
sidebar_label: "dev.cpu.0.freq"
learn_status: "Published"
learn_rel_path: "Data Collection/FreeBSD"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# dev.cpu.0.freq
Plugin: freebsd.plugin
Module: dev.cpu.0.freq
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Read current CPU Scaling frequency.
Current CPU Scaling Frequency
This collector is supported on all platforms.
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
### Default Behavior
#### Auto-Detection
This integration doesn't support auto-detection.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
The default configuration for this integration is not expected to impose a significant performance impact on the system.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per dev.cpu.0.freq instance
The metric shows the current CPU scaling frequency; it is directly affected by system load.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| cpu.scaling_cur_freq | frequency | MHz |
## Alerts
There are no alerts configured by default for this integration.
## Setup
### Prerequisites
No action required.
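You can verify that the underlying sysctl is exposed on your FreeBSD host (assuming cpufreq support is present):

```bash
# Print the current frequency of CPU 0 in MHz.
sysctl dev.cpu.0.freq
```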
### Configuration
#### File
The configuration file name for this integration is `netdata.conf`.
Configuration for this specific integration is located in the `[plugin:freebsd]` section within that file.
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config netdata.conf
```
#### Options
<details><summary>Config options</summary>
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| dev.cpu.0.freq | Enable or disable CPU Scaling frequency metric. | | False |
</details>
#### Examples
There are no configuration examples.

View File

@ -0,0 +1,115 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/freebsd.plugin/integrations/dev.cpu.temperature.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/freebsd.plugin/metadata.yaml"
sidebar_label: "dev.cpu.temperature"
learn_status: "Published"
learn_rel_path: "Data Collection/FreeBSD"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# dev.cpu.temperature
Plugin: freebsd.plugin
Module: dev.cpu.temperature
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Get current CPU temperature
The plugin calls the `sysctl` function to collect the necessary data.
This collector is supported on all platforms.
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
### Default Behavior
#### Auto-Detection
This integration doesn't support auto-detection.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
The default configuration for this integration is not expected to impose a significant performance impact on the system.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per dev.cpu.temperature instance
This metric shows the latest CPU temperature.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| cpu.temperature | a dimension per core | Celsius |
## Alerts
There are no alerts configured by default for this integration.
## Setup
### Prerequisites
No action required.
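On FreeBSD, the temperature sysctl is typically provided by a thermal driver such as coretemp(4) or amdtemp(4); the driver name below is an assumption for Intel hardware:

```bash
# Load the Intel thermal driver if it is not built in, then read the sensor.
kldload coretemp 2>/dev/null || true
sysctl dev.cpu.0.temperature
```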
### Configuration
#### File
The configuration file name for this integration is `netdata.conf`.
Configuration for this specific integration is located in the `[plugin:freebsd]` section within that file.
The file format is a modified INI syntax. The general structure is:
```toml
[section1]
option 1 = some value
option 2 = some other value
[section2]
option 3 = some third value
```
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config netdata.conf
```
#### Options
<details><summary>Config options</summary>
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| dev.cpu.temperature | Enable or disable CPU temperature metric. | | False |
</details>
#### Examples
There are no configuration examples.

View File

@ -0,0 +1,150 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/freebsd.plugin/integrations/devstat.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/freebsd.plugin/metadata.yaml"
sidebar_label: "devstat"
learn_status: "Published"
learn_rel_path: "Data Collection/FreeBSD"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# devstat
Plugin: freebsd.plugin
Module: devstat
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Collect information for each hard disk available on the host.
The plugin calls the `sysctl` function to collect the necessary data.
This collector is supported on all platforms.
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
### Default Behavior
#### Auto-Detection
This integration doesn't support auto-detection.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
The default configuration for this integration is not expected to impose a significant performance impact on the system.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per devstat instance
These metrics give a general overview of I/O events on disks.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| system.io | in, out | KiB/s |
### Per disk
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| disk.io | reads, writes, frees | KiB/s |
| disk.ops | reads, writes, other, frees | operations/s |
| disk.qops | operations | operations |
| disk.util | utilization | % of time working |
| disk.iotime | reads, writes, other, frees | milliseconds/s |
| disk.await | reads, writes, other, frees | milliseconds/operation |
| disk.avgsz | reads, writes, frees | KiB/operation |
| disk.svctm | svctm | milliseconds/operation |
## Alerts
The following alerts are available:
| Alert name | On metric | Description |
|:------------|:----------|:------------|
| [ 10min_disk_utilization ](https://github.com/netdata/netdata/blob/master/health/health.d/disks.conf) | disk.util | average percentage of time ${label:device} disk was busy over the last 10 minutes |
## Setup
### Prerequisites
No action required.
### Configuration
#### File
The configuration file name for this integration is `netdata.conf`.
Configuration for this specific integration is located in the `[plugin:freebsd:kern.devstat]` section within that file.
The file format is a modified INI syntax. The general structure is:
```toml
[section1]
option 1 = some value
option 2 = some other value
[section2]
option 3 = some third value
```
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config netdata.conf
```
#### Options
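As a hypothetical sketch, the section could be tuned like this inside `netdata.conf` (illustrative values only):

```toml
[plugin:freebsd:kern.devstat]
    enable new disks detected at runtime = yes
    performance metrics for pass devices = no
```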
<details><summary>Config options</summary>
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| enable new disks detected at runtime | Enable or disable the detection of new disks at runtime. | | False |
| performance metrics for pass devices | Enable or disable metrics for disks with type `PASS`. | | False |
| total bandwidth for all disks | Enable or disable total bandwidth metric for all disks. | | False |
| bandwidth for all disks | Enable or disable bandwidth for all disks metric. | | False |
| operations for all disks | Enable or disable operations for all disks metric. | | False |
| queued operations for all disks | Enable or disable queued operations for all disks metric. | | False |
| utilization percentage for all disks | Enable or disable utilization percentage for all disks metric. | | False |
| i/o time for all disks | Enable or disable I/O time for all disks metric. | | False |
| average completed i/o time for all disks | Enable or disable average completed I/O time for all disks metric. | | False |
| average completed i/o bandwidth for all disks | Enable or disable average completed I/O bandwidth for all disks metric. | | False |
| average service time for all disks | Enable or disable average service time for all disks metric. | | False |
| disable by default disks matching | Do not create charts for the disks listed. | | False |
</details>
#### Examples
There are no configuration examples.

View File

@ -0,0 +1,162 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/freebsd.plugin/integrations/getifaddrs.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/freebsd.plugin/metadata.yaml"
sidebar_label: "getifaddrs"
learn_status: "Published"
learn_rel_path: "Data Collection/FreeBSD"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# getifaddrs
Plugin: freebsd.plugin
Module: getifaddrs
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Collect traffic per network interface.
The plugin calls the `getifaddrs` function to collect the necessary data.
This collector is supported on all platforms.
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
### Default Behavior
#### Auto-Detection
This integration doesn't support auto-detection.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
The default configuration for this integration is not expected to impose a significant performance impact on the system.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per getifaddrs instance
A general overview of network traffic.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| system.net | received, sent | kilobits/s |
| system.packets | received, sent, multicast_received, multicast_sent | packets/s |
| system.ipv4 | received, sent | kilobits/s |
| system.ipv6 | received, sent | kilobits/s |
### Per network device
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| net.net | received, sent | kilobits/s |
| net.packets | received, sent, multicast_received, multicast_sent | packets/s |
| net.errors | inbound, outbound | errors/s |
| net.drops | inbound, outbound | drops/s |
| net.events | collisions | events/s |
## Alerts
The following alerts are available:
| Alert name | On metric | Description |
|:------------|:----------|:------------|
| [ interface_speed ](https://github.com/netdata/netdata/blob/master/health/health.d/net.conf) | net.net | network interface ${label:device} current speed |
| [ 1m_received_traffic_overflow ](https://github.com/netdata/netdata/blob/master/health/health.d/net.conf) | net.net | average inbound utilization for the network interface ${label:device} over the last minute |
| [ 1m_sent_traffic_overflow ](https://github.com/netdata/netdata/blob/master/health/health.d/net.conf) | net.net | average outbound utilization for the network interface ${label:device} over the last minute |
| [ inbound_packets_dropped_ratio ](https://github.com/netdata/netdata/blob/master/health/health.d/net.conf) | net.packets | ratio of inbound dropped packets for the network interface ${label:device} over the last 10 minutes |
| [ outbound_packets_dropped_ratio ](https://github.com/netdata/netdata/blob/master/health/health.d/net.conf) | net.packets | ratio of outbound dropped packets for the network interface ${label:device} over the last 10 minutes |
| [ wifi_inbound_packets_dropped_ratio ](https://github.com/netdata/netdata/blob/master/health/health.d/net.conf) | net.packets | ratio of inbound dropped packets for the network interface ${label:device} over the last 10 minutes |
| [ wifi_outbound_packets_dropped_ratio ](https://github.com/netdata/netdata/blob/master/health/health.d/net.conf) | net.packets | ratio of outbound dropped packets for the network interface ${label:device} over the last 10 minutes |
| [ 1m_received_packets_rate ](https://github.com/netdata/netdata/blob/master/health/health.d/net.conf) | net.packets | average number of packets received by the network interface ${label:device} over the last minute |
| [ 10s_received_packets_storm ](https://github.com/netdata/netdata/blob/master/health/health.d/net.conf) | net.packets | ratio of average number of received packets for the network interface ${label:device} over the last 10 seconds, compared to the rate over the last minute |
| [ interface_inbound_errors ](https://github.com/netdata/netdata/blob/master/health/health.d/net.conf) | net.errors | number of inbound errors for the network interface ${label:device} in the last 10 minutes |
| [ interface_outbound_errors ](https://github.com/netdata/netdata/blob/master/health/health.d/net.conf) | net.errors | number of outbound errors for the network interface ${label:device} in the last 10 minutes |
| [ inbound_packets_dropped ](https://github.com/netdata/netdata/blob/master/health/health.d/net.conf) | net.drops | number of inbound dropped packets for the network interface ${label:device} in the last 10 minutes |
| [ outbound_packets_dropped ](https://github.com/netdata/netdata/blob/master/health/health.d/net.conf) | net.drops | number of outbound dropped packets for the network interface ${label:device} in the last 10 minutes |
## Setup
### Prerequisites
No action required.
### Configuration
#### File
The configuration file name for this integration is `netdata.conf`.
Configuration for this specific integration is located in the `[plugin:freebsd:getifaddrs]` section within that file.
The file format is a modified INI syntax. The general structure is:
```toml
[section1]
option 1 = some value
option 2 = some other value
[section2]
option 3 = some third value
```
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config netdata.conf
```
#### Options
<details><summary>Config options</summary>
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| enable new interfaces detected at runtime | Enable or disable detection of new network interfaces while the plugin is running. | | False |
| total bandwidth for physical interfaces | Enable or disable total bandwidth for physical interfaces metric. | | False |
| total packets for physical interfaces | Enable or disable total packets for physical interfaces metric. | | False |
| total bandwidth for ipv4 interface | Enable or disable total bandwidth for IPv4 interface metric. | | False |
| total bandwidth for ipv6 interfaces | Enable or disable total bandwidth for IPv6 interfaces metric. | | False |
| bandwidth for all interfaces | Enable or disable bandwidth for all interfaces metric. | | False |
| packets for all interfaces | Enable or disable packets for all interfaces metric. | | False |
| errors for all interfaces | Enable or disable errors for all interfaces metric. | | False |
| drops for all interfaces | Enable or disable drops for all interfaces metric. | | False |
| collisions for all interface | Enable or disable collisions for all interfaces metric. | | False |
| disable by default interfaces matching | Do not display data for the listed interfaces. | | False |
| set physical interfaces for system.net | Do not show network traffic for listed interfaces. | | False |
</details>
#### Examples
There are no configuration examples.
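As a purely illustrative sketch (the option values below are assumptions, not shipped defaults), enabling a few of the options from the table above would look like this in `netdata.conf`:

```toml
[plugin:freebsd:getifaddrs]
    # illustrative values only; check your netdata.conf for the real defaults
    enable new interfaces detected at runtime = yes
    bandwidth for all interfaces = yes
    packets for all interfaces = yes
    errors for all interfaces = yes
```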
View File

@ -0,0 +1,126 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/freebsd.plugin/integrations/getmntinfo.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/freebsd.plugin/metadata.yaml"
sidebar_label: "getmntinfo"
learn_status: "Published"
learn_rel_path: "Data Collection/FreeBSD"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# getmntinfo
Plugin: freebsd.plugin
Module: getmntinfo
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Collect information per mount point.
The plugin calls `getmntinfo` function to collect necessary data.
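For illustration, here is a minimal C sketch of reading per-mount-point usage with `getmntinfo(3)`. It mirrors the general approach rather than the collector's exact code, and error handling is reduced:

```c
#include <sys/param.h>
#include <sys/ucred.h>
#include <sys/mount.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    struct statfs *mounts;
    /* MNT_NOWAIT returns possibly slightly stale data without blocking */
    int count = getmntinfo(&mounts, MNT_NOWAIT);
    for (int i = 0; i < count; i++) {
        printf("%-20s avail: %jd blocks (x%ju bytes), free inodes: %jd\n",
               mounts[i].f_mntonname,
               (intmax_t)mounts[i].f_bavail,
               (uintmax_t)mounts[i].f_bsize,
               (intmax_t)mounts[i].f_ffree);
    }
    return 0;
}
```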
This collector is supported on all platforms.
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
### Default Behavior
#### Auto-Detection
This integration doesn't support auto-detection.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
The default configuration for this integration is not expected to impose a significant performance impact on the system.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per mount point
These metrics show details about mount point usage.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| disk.space | avail, used, reserved_for_root | GiB |
| disk.inodes | avail, used, reserved_for_root | inodes |
## Alerts
The following alerts are available:
| Alert name | On metric | Description |
|:------------|:----------|:------------|
| [ disk_space_usage ](https://github.com/netdata/netdata/blob/master/health/health.d/disks.conf) | disk.space | disk ${label:mount_point} space utilization |
| [ disk_inode_usage ](https://github.com/netdata/netdata/blob/master/health/health.d/disks.conf) | disk.inodes | disk ${label:mount_point} inode utilization |
## Setup
### Prerequisites
No action required.
### Configuration
#### File
The configuration file name for this integration is `netdata.conf`.
Configuration for this specific integration is located in the `[plugin:freebsd:getmntinfo]` section within that file.
The file format is a modified INI syntax. The general structure is:
```toml
[section1]
option 1 = some value
option 2 = some other value
[section2]
option 3 = some third value
```
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config netdata.conf
```
#### Options
<details><summary>Config options</summary>
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| enable new mount points detected at runtime | Enable or disable detection of new mount points during runtime. | | False |
| space usage for all disks | Enable or disable space usage for all disks metric. | | False |
| inodes usage for all disks | Enable or disable inodes usage for all disks metric. | | False |
| exclude space metrics on paths | Do not show metrics for listed paths. | | False |
| exclude space metrics on filesystems | Do not monitor listed filesystems. | | False |
</details>
#### Examples
There are no configuration examples.
View File

@ -0,0 +1,116 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/freebsd.plugin/integrations/hw.intrcnt.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/freebsd.plugin/metadata.yaml"
sidebar_label: "hw.intrcnt"
learn_status: "Published"
learn_rel_path: "Data Collection/FreeBSD"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# hw.intrcnt
Plugin: freebsd.plugin
Module: hw.intrcnt
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Get total number of interrupts.
The plugin calls `sysctl` function to collect necessary data.
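A minimal C sketch of the two-call `sysctlbyname(3)` pattern used to read variable-size kernel data such as `hw.intrcnt` (an array of per-source interrupt counters). This shows the technique, not the collector's exact code:

```c
#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    size_t len = 0;
    /* First call with a NULL buffer asks the kernel for the required size. */
    if (sysctlbyname("hw.intrcnt", NULL, &len, NULL, 0) != 0)
        return 1;
    unsigned long *counters = malloc(len);   /* counters treated as u_long here */
    if (counters == NULL)
        return 1;
    if (sysctlbyname("hw.intrcnt", counters, &len, NULL, 0) != 0)
        return 1;
    size_t n = len / sizeof(unsigned long);
    unsigned long total = 0;
    for (size_t i = 0; i < n; i++)
        total += counters[i];
    printf("%zu interrupt counters, %lu interrupts since boot\n", n, total);
    free(counters);
    return 0;
}
```

The same size-query-then-read pattern applies to most variable-length sysctl nodes this plugin consumes.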
This collector is supported on all platforms.
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
### Default Behavior
#### Auto-Detection
This integration doesn't support auto-detection.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
The default configuration for this integration is not expected to impose a significant performance impact on the system.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per hw.intrcnt instance
These metrics show system interrupts frequency.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| system.intr | interrupts | interrupts/s |
| system.interrupts | a dimension per interrupt | interrupts/s |
## Alerts
There are no alerts configured by default for this integration.
## Setup
### Prerequisites
No action required.
### Configuration
#### File
The configuration file name for this integration is `netdata.conf`.
Configuration for this specific integration is located in the `[plugin:freebsd]` section within that file.
The file format is a modified INI syntax. The general structure is:
```toml
[section1]
option 1 = some value
option 2 = some other value
[section2]
option 3 = some third value
```
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config netdata.conf
```
#### Options
<details><summary>Config options</summary>
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| hw.intrcnt | Enable or disable Interrupts metric. | | False |
</details>
#### Examples
There are no configuration examples.
View File

@ -0,0 +1,121 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/freebsd.plugin/integrations/ipfw.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/freebsd.plugin/metadata.yaml"
sidebar_label: "ipfw"
learn_status: "Published"
learn_rel_path: "Data Collection/FreeBSD"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# ipfw
Plugin: freebsd.plugin
Module: ipfw
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Collect information about the FreeBSD firewall (IPFW).
The plugin uses a raw socket to communicate with the kernel and collect data.
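The snippet below is a heavily simplified C sketch of that transport only. The control-code name (`IP_FW3`) and request layout from `<netinet/ip_fw.h>` are noted as assumptions; this is not the collector's exact code:

```c
#include <sys/socket.h>
#include <netinet/in.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* Userland talks to the IPFW kernel module over a raw IP socket
     * (opening it requires root privileges). */
    int s = socket(AF_INET, SOCK_RAW, IPPROTO_RAW);
    if (s < 0) {
        perror("socket");
        return 1;
    }
    /* Rule and counter dumps would then be fetched with
     * getsockopt(s, IPPROTO_IP, IP_FW3, buf, &len) using request
     * structures defined in <netinet/ip_fw.h> (assumed interface). */
    close(s);
    return 0;
}
```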
This collector is supported on all platforms.
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
### Default Behavior
#### Auto-Detection
This integration doesn't support auto-detection.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
The default configuration for this integration is not expected to impose a significant performance impact on the system.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per ipfw instance
These metrics show FreeBSD firewall statistics.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| ipfw.mem | dynamic, static | bytes |
| ipfw.packets | a dimension per static rule | packets/s |
| ipfw.bytes | a dimension per static rule | bytes/s |
| ipfw.active | a dimension per dynamic rule | rules |
| ipfw.expired | a dimension per dynamic rule | rules |
## Alerts
There are no alerts configured by default for this integration.
## Setup
### Prerequisites
No action required.
### Configuration
#### File
The configuration file name for this integration is `netdata.conf`.
Configuration for this specific integration is located in the `[plugin:freebsd:ipfw]` section within that file.
The file format is a modified INI syntax. The general structure is:
```toml
[section1]
option 1 = some value
option 2 = some other value
[section2]
option 3 = some third value
```
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config netdata.conf
```
#### Options
<details><summary>Config options</summary>
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| counters for static rules | Enable or disable counters for static rules metric. | | False |
| number of dynamic rules | Enable or disable number of dynamic rules metric. | | False |
| allocated memory | Enable or disable allocated memory metric. | | False |
</details>
#### Examples
There are no configuration examples.
View File

@ -0,0 +1,134 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/freebsd.plugin/integrations/kern.cp_time.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/freebsd.plugin/metadata.yaml"
sidebar_label: "kern.cp_time"
learn_status: "Published"
learn_rel_path: "Data Collection/FreeBSD"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# kern.cp_time
Plugin: freebsd.plugin
Module: kern.cp_time
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Total CPU utilization.
The plugin calls `sysctl` function to collect necessary data.
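As a minimal C sketch, the total CPU tick counters come from the `kern.cp_time` node; utilization percentages are then computed from the delta between two samples. This illustrates the mechanism, not the collector's exact code:

```c
#include <sys/types.h>
#include <sys/resource.h>   /* CPUSTATES and the CP_* indices */
#include <sys/sysctl.h>
#include <stdio.h>

int main(void) {
    long cp_time[CPUSTATES];
    size_t len = sizeof(cp_time);
    if (sysctlbyname("kern.cp_time", cp_time, &len, NULL, 0) != 0)
        return 1;
    printf("user=%ld nice=%ld sys=%ld intr=%ld idle=%ld (ticks since boot)\n",
           cp_time[CP_USER], cp_time[CP_NICE], cp_time[CP_SYS],
           cp_time[CP_INTR], cp_time[CP_IDLE]);
    return 0;
}
```

The per-core chart is built the same way from the per-CPU `kern.cp_times` array.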
This collector is supported on all platforms.
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
### Default Behavior
#### Auto-Detection
This integration doesn't support auto-detection.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
The default configuration for this integration is not expected to impose a significant performance impact on the system.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per kern.cp_time instance
These metrics show CPU usage statistics.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| system.cpu | nice, system, user, interrupt, idle | percentage |
### Per core
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| cpu.cpu | nice, system, user, interrupt, idle | percentage |
## Alerts
The following alerts are available:
| Alert name | On metric | Description |
|:------------|:----------|:------------|
| [ 10min_cpu_usage ](https://github.com/netdata/netdata/blob/master/health/health.d/cpu.conf) | system.cpu | average CPU utilization over the last 10 minutes (excluding iowait, nice and steal) |
| [ 10min_cpu_iowait ](https://github.com/netdata/netdata/blob/master/health/health.d/cpu.conf) | system.cpu | average CPU iowait time over the last 10 minutes |
| [ 20min_steal_cpu ](https://github.com/netdata/netdata/blob/master/health/health.d/cpu.conf) | system.cpu | average CPU steal time over the last 20 minutes |
| [ 10min_cpu_usage ](https://github.com/netdata/netdata/blob/master/health/health.d/cpu.conf) | system.cpu | average CPU utilization over the last 10 minutes (excluding nice) |
## Setup
### Prerequisites
No action required.
### Configuration
#### File
The configuration file name for this integration is `netdata.conf`.
The file format is a modified INI syntax. The general structure is:
```toml
[section1]
option 1 = some value
option 2 = some other value
[section2]
option 3 = some third value
```
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config netdata.conf
```
#### Options
These options are set in the main Netdata configuration file.
<details><summary>Config options</summary>
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| kern.cp_time | Enable or disable Total CPU usage. | | False |
</details>
#### Examples
There are no configuration examples.
View File

@ -0,0 +1,117 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/freebsd.plugin/integrations/kern.ipc.msq.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/freebsd.plugin/metadata.yaml"
sidebar_label: "kern.ipc.msq"
learn_status: "Published"
learn_rel_path: "Data Collection/FreeBSD"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# kern.ipc.msq
Plugin: freebsd.plugin
Module: kern.ipc.msq
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Collect the number of IPC message queues.
This collector is supported on all platforms.
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
### Default Behavior
#### Auto-Detection
This integration doesn't support auto-detection.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
The default configuration for this integration is not expected to impose a significant performance impact on the system.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per kern.ipc.msq instance
These metrics show IPC message queue statistics.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| system.ipc_msq_queues | queues | queues |
| system.ipc_msq_messages | messages | messages |
| system.ipc_msq_size | allocated, used | bytes |
## Alerts
There are no alerts configured by default for this integration.
## Setup
### Prerequisites
No action required.
### Configuration
#### File
The configuration file name for this integration is `netdata.conf`.
Configuration for this specific integration is located in the `[plugin:freebsd]` section within that file.
The file format is a modified INI syntax. The general structure is:
```toml
[section1]
option 1 = some value
option 2 = some other value
[section2]
option 3 = some third value
```
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config netdata.conf
```
#### Options
<details><summary>Config options</summary>
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| kern.ipc.msq | Enable or disable IPC message queue metric. | | False |
</details>
#### Examples
There are no configuration examples.
View File

@ -0,0 +1,122 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/freebsd.plugin/integrations/kern.ipc.sem.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/freebsd.plugin/metadata.yaml"
sidebar_label: "kern.ipc.sem"
learn_status: "Published"
learn_rel_path: "Data Collection/FreeBSD"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# kern.ipc.sem
Plugin: freebsd.plugin
Module: kern.ipc.sem
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Collect information about semaphores.
This collector is supported on all platforms.
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
### Default Behavior
#### Auto-Detection
This integration doesn't support auto-detection.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
The default configuration for this integration is not expected to impose a significant performance impact on the system.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per kern.ipc.sem instance
These metrics show counters for semaphores on the host.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| system.ipc_semaphores | semaphores | semaphores |
| system.ipc_semaphore_arrays | arrays | arrays |
## Alerts
The following alerts are available:
| Alert name | On metric | Description |
|:------------|:----------|:------------|
| [ semaphores_used ](https://github.com/netdata/netdata/blob/master/health/health.d/ipc.conf) | system.ipc_semaphores | IPC semaphore utilization |
| [ semaphore_arrays_used ](https://github.com/netdata/netdata/blob/master/health/health.d/ipc.conf) | system.ipc_semaphore_arrays | IPC semaphore arrays utilization |
## Setup
### Prerequisites
No action required.
### Configuration
#### File
The configuration file name for this integration is `netdata.conf`.
Configuration for this specific integration is located in the `[plugin:freebsd]` section within that file.
The file format is a modified INI syntax. The general structure is:
```toml
[section1]
option 1 = some value
option 2 = some other value
[section2]
option 3 = some third value
```
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config netdata.conf
```
#### Options
<details><summary>Config options</summary>
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| kern.ipc.sem | Enable or disable semaphore metrics. | | False |
</details>
#### Examples
There are no configuration examples.
View File

@ -0,0 +1,116 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/freebsd.plugin/integrations/kern.ipc.shm.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/freebsd.plugin/metadata.yaml"
sidebar_label: "kern.ipc.shm"
learn_status: "Published"
learn_rel_path: "Data Collection/FreeBSD"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# kern.ipc.shm
Plugin: freebsd.plugin
Module: kern.ipc.shm
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Collect shared memory information.
The plugin calls `sysctl` function to collect necessary data.
This collector is supported on all platforms.
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
### Default Behavior
#### Auto-Detection
This integration doesn't support auto-detection.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
The default configuration for this integration is not expected to impose a significant performance impact on the system.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per kern.ipc.shm instance
These metrics show the status of current shared memory segments.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| system.ipc_shared_mem_segs | segments | segments |
| system.ipc_shared_mem_size | allocated | KiB |
## Alerts
There are no alerts configured by default for this integration.
## Setup
### Prerequisites
No action required.
### Configuration
#### File
The configuration file name for this integration is `netdata.conf`.
Configuration for this specific integration is located in the `[plugin:freebsd]` section within that file.
The file format is a modified INI syntax. The general structure is:
```toml
[section1]
option 1 = some value
option 2 = some other value
[section2]
option 3 = some third value
```
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config netdata.conf
```
#### Options
<details><summary>Config options</summary>
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| kern.ipc.shm | Enable or disable shared memory metric. | | False |
</details>
#### Examples
There are no configuration examples.
View File

@ -0,0 +1,119 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/freebsd.plugin/integrations/net.inet.icmp.stats.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/freebsd.plugin/metadata.yaml"
sidebar_label: "net.inet.icmp.stats"
learn_status: "Published"
learn_rel_path: "Data Collection/FreeBSD"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# net.inet.icmp.stats
Plugin: freebsd.plugin
Module: net.inet.icmp.stats
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Collect information about ICMP traffic.
The plugin calls `sysctl` function to collect necessary data.
This collector is supported on all platforms.
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
### Default Behavior
#### Auto-Detection
This integration doesn't support auto-detection.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
The default configuration for this integration is not expected to impose a significant performance impact on the system.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per net.inet.icmp.stats instance
These metrics show ICMP traffic statistics.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| ipv4.icmp | received, sent | packets/s |
| ipv4.icmp_errors | InErrors, OutErrors, InCsumErrors | packets/s |
| ipv4.icmpmsg | InEchoReps, OutEchoReps, InEchos, OutEchos | packets/s |
## Alerts
There are no alerts configured by default for this integration.
## Setup
### Prerequisites
No action required.
### Configuration
#### File
The configuration file name for this integration is `netdata.conf`.
Configuration for this specific integration is located in the `[plugin:freebsd:net.inet.icmp.stats]` section within that file.
The file format is a modified INI syntax. The general structure is:
```toml
[section1]
option 1 = some value
option 2 = some other value
[section2]
option 3 = some third value
```
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config netdata.conf
```
#### Options
<details><summary>Config options</summary>
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| IPv4 ICMP packets | Enable or disable IPv4 ICMP packets metric. | | False |
| IPv4 ICMP error | Enable or disable IPv4 ICMP error metric. | | False |
| IPv4 ICMP messages | Enable or disable IPv4 ICMP messages metric. | | False |
</details>
#### Examples
View File

@ -0,0 +1,121 @@
@ -0,0 +1,121 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/freebsd.plugin/integrations/net.inet.ip.stats.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/freebsd.plugin/metadata.yaml"
sidebar_label: "net.inet.ip.stats"
learn_status: "Published"
learn_rel_path: "Data Collection/FreeBSD"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# net.inet.ip.stats
Plugin: freebsd.plugin
Module: net.inet.ip.stats
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Collect IPv4 stats.
The plugin calls `sysctl` function to collect necessary data.
This collector is supported on all platforms.
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
### Default Behavior
#### Auto-Detection
This integration doesn't support auto-detection.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
The default configuration for this integration is not expected to impose a significant performance impact on the system.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per net.inet.ip.stats instance
These metrics show IPv4 traffic statistics.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| ipv4.packets | received, sent, forwarded, delivered | packets/s |
| ipv4.fragsout | ok, failed, created | packets/s |
| ipv4.fragsin | ok, failed, all | packets/s |
| ipv4.errors | InDiscards, OutDiscards, InHdrErrors, OutNoRoutes, InAddrErrors, InUnknownProtos | packets/s |
## Alerts
There are no alerts configured by default for this integration.
## Setup
### Prerequisites
No action required.
### Configuration
#### File
The configuration file name for this integration is `netdata.conf`.
Configuration for this specific integration is located in the `[plugin:freebsd:net.inet.ip.stats]` section within that file.
The file format is a modified INI syntax. The general structure is:
```toml
[section1]
option 1 = some value
option 2 = some other value
[section2]
option 3 = some third value
```
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config netdata.conf
```
#### Options
<details><summary>Config options</summary>
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| ipv4 packets | Enable or disable IPv4 packets metric. | | False |
| ipv4 fragments sent | Enable or disable IPv4 fragments sent metric. | | False |
| ipv4 fragments assembly | Enable or disable IPv4 fragments assembly metric. | | False |
| ipv4 errors | Enable or disable IPv4 errors metric. | | False |
</details>
#### Examples
There are no configuration examples.
View File

@ -0,0 +1,120 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/freebsd.plugin/integrations/net.inet.tcp.states.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/freebsd.plugin/metadata.yaml"
sidebar_label: "net.inet.tcp.states"
learn_status: "Published"
learn_rel_path: "Data Collection/FreeBSD"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# net.inet.tcp.states
Plugin: freebsd.plugin
Module: net.inet.tcp.states
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Collect a count of TCP connections.
This collector is supported on all platforms.
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
### Default Behavior
#### Auto-Detection
This integration doesn't support auto-detection.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
The default configuration for this integration is not expected to impose a significant performance impact on the system.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per net.inet.tcp.states instance
A counter for TCP connections.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| ipv4.tcpsock | connections | active connections |
## Alerts
The following alerts are available:
| Alert name | On metric | Description |
|:------------|:----------|:------------|
| [ tcp_connections ](https://github.com/netdata/netdata/blob/master/health/health.d/tcp_conn.conf) | ipv4.tcpsock | IPv4 TCP connections utilization |
## Setup
### Prerequisites
No action required.
### Configuration
#### File
The configuration file name for this integration is `netdata.conf`.
Configuration for this specific integration is located in the `[plugin:freebsd]` section within that file.
The file format is a modified INI syntax. The general structure is:
```toml
[section1]
option 1 = some value
option 2 = some other value
[section2]
option 3 = some third value
```
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config netdata.conf
```
#### Options
<details><summary>Config options</summary>
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| net.inet.tcp.states | Enable or disable TCP state metric. | | False |
</details>
#### Examples
There are no configuration examples.
View File

@ -0,0 +1,137 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/freebsd.plugin/integrations/net.inet.tcp.stats.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/freebsd.plugin/metadata.yaml"
sidebar_label: "net.inet.tcp.stats"
learn_status: "Published"
learn_rel_path: "Data Collection/FreeBSD"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# net.inet.tcp.stats
Plugin: freebsd.plugin
Module: net.inet.tcp.stats
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Collect overall information about TCP connections.
The plugin calls `sysctl` function to collect necessary data.
This collector is supported on all platforms.
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
### Default Behavior
#### Auto-Detection
This integration doesn't support auto-detection.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
The default configuration for this integration is not expected to impose a significant performance impact on the system.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per net.inet.tcp.stats instance
These metrics show TCP connection statistics.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| ipv4.tcppackets | received, sent | packets/s |
| ipv4.tcperrors | InErrs, InCsumErrors, RetransSegs | packets/s |
| ipv4.tcphandshake | EstabResets, ActiveOpens, PassiveOpens, AttemptFails | events/s |
| ipv4.tcpconnaborts | baddata, userclosed, nomemory, timeout, linger | connections/s |
| ipv4.tcpofo | inqueue | packets/s |
| ipv4.tcpsyncookies | received, sent, failed | packets/s |
| ipv4.tcplistenissues | overflows | packets/s |
| ipv4.ecnpkts | InCEPkts, InECT0Pkts, InECT1Pkts, OutECT0Pkts, OutECT1Pkts | packets/s |
## Alerts
The following alerts are available:
| Alert name | On metric | Description |
|:------------|:----------|:------------|
| [ 1m_ipv4_tcp_resets_sent ](https://github.com/netdata/netdata/blob/master/health/health.d/tcp_resets.conf) | ipv4.tcphandshake | average number of sent TCP RESETS over the last minute |
| [ 10s_ipv4_tcp_resets_sent ](https://github.com/netdata/netdata/blob/master/health/health.d/tcp_resets.conf) | ipv4.tcphandshake | average number of sent TCP RESETS over the last 10 seconds. This can indicate a port scan, or that a service running on this host has crashed. Netdata will not send a clear notification for this alarm. |
| [ 1m_ipv4_tcp_resets_received ](https://github.com/netdata/netdata/blob/master/health/health.d/tcp_resets.conf) | ipv4.tcphandshake | average number of received TCP RESETS over the last minute |
| [ 10s_ipv4_tcp_resets_received ](https://github.com/netdata/netdata/blob/master/health/health.d/tcp_resets.conf) | ipv4.tcphandshake | average number of received TCP RESETS over the last 10 seconds. This can be an indication that a service this host needs has crashed. Netdata will not send a clear notification for this alarm. |
## Setup
### Prerequisites
No action required.
### Configuration
#### File
The configuration file name for this integration is `netdata.conf`.
Configuration for this specific integration is located in the `[plugin:freebsd:net.inet.tcp.stats]` section within that file.
The file format is a modified INI syntax. The general structure is:
```toml
[section1]
option 1 = some value
option 2 = some other value
[section2]
option 3 = some third value
```
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config netdata.conf
```
#### Options
<details><summary>Config options</summary>
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| ipv4 TCP packets | Enable or disable ipv4 TCP packets metric. | | False |
| ipv4 TCP errors | Enable or disable ipv4 TCP errors metric. | | False |
| ipv4 TCP handshake issues | Enable or disable ipv4 TCP handshake issue metric. | | False |
| TCP connection aborts | Enable or disable TCP connection aborts metric. | | False |
| TCP out-of-order queue | Enable or disable TCP out-of-order queue metric. | | False |
| TCP SYN cookies | Enable or disable TCP SYN cookies metric. | | False |
| TCP listen issues | Enable or disable TCP listen issues metric. | | False |
| ECN packets | Enable or disable ECN packets metric. | | False |
</details>
#### Examples
There are no configuration examples.
View File

@ -0,0 +1,123 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/freebsd.plugin/integrations/net.inet.udp.stats.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/freebsd.plugin/metadata.yaml"
sidebar_label: "net.inet.udp.stats"
learn_status: "Published"
learn_rel_path: "Data Collection/FreeBSD"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# net.inet.udp.stats
Plugin: freebsd.plugin
Module: net.inet.udp.stats
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Collect information about UDP connections.
The plugin calls `sysctl` function to collect necessary data.
This collector is supported on all platforms.
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
### Default Behavior
#### Auto-Detection
This integration doesn't support auto-detection.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
The default configuration for this integration is not expected to impose a significant performance impact on the system.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per net.inet.udp.stats instance
These metrics show UDP traffic statistics.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| ipv4.udppackets | received, sent | packets/s |
| ipv4.udperrors | InErrors, NoPorts, RcvbufErrors, InCsumErrors, IgnoredMulti | events/s |
## Alerts
The following alerts are available:
| Alert name | On metric | Description |
|:------------|:----------|:------------|
| [ 1m_ipv4_udp_receive_buffer_errors ](https://github.com/netdata/netdata/blob/master/health/health.d/udp_errors.conf) | ipv4.udperrors | average number of UDP receive buffer errors over the last minute |
| [ 1m_ipv4_udp_send_buffer_errors ](https://github.com/netdata/netdata/blob/master/health/health.d/udp_errors.conf) | ipv4.udperrors | average number of UDP send buffer errors over the last minute |
## Setup
### Prerequisites
No action required.
### Configuration
#### File
The configuration file name for this integration is `netdata.conf`.
Configuration for this specific integration is located in the `[plugin:freebsd:net.inet.udp.stats]` section within that file.
The file format is a modified INI syntax. The general structure is:
```toml
[section1]
option 1 = some value
option 2 = some other value
[section2]
option 3 = some third value
```
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config netdata.conf
```
#### Options
<details><summary>Config options</summary>
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| ipv4 UDP packets | Enable or disable ipv4 UDP packets metric. | | False |
| ipv4 UDP errors | Enable or disable ipv4 UDP errors metric. | | False |
</details>
#### Examples
There are no configuration examples.
View File

@ -0,0 +1,127 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/freebsd.plugin/integrations/net.inet6.icmp6.stats.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/freebsd.plugin/metadata.yaml"
sidebar_label: "net.inet6.icmp6.stats"
learn_status: "Published"
learn_rel_path: "Data Collection/FreeBSD"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# net.inet6.icmp6.stats
Plugin: freebsd.plugin
Module: net.inet6.icmp6.stats
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Collect information about IPv6 ICMP.
The plugin calls `sysctl` function to collect necessary data.
This collector is supported on all platforms.
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
### Default Behavior
#### Auto-Detection
This integration doesn't support auto-detection.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
The default configuration for this integration is not expected to impose a significant performance impact on the system.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per net.inet6.icmp6.stats instance
Collect IPv6 ICMP traffic statistics.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| ipv6.icmp | received, sent | messages/s |
| ipv6.icmpredir | received, sent | redirects/s |
| ipv6.icmperrors | InErrors, OutErrors, InCsumErrors, InDestUnreachs, InPktTooBigs, InTimeExcds, InParmProblems, OutDestUnreachs, OutTimeExcds, OutParmProblems | errors/s |
| ipv6.icmpechos | InEchos, OutEchos, InEchoReplies, OutEchoReplies | messages/s |
| ipv6.icmprouter | InSolicits, OutSolicits, InAdvertisements, OutAdvertisements | messages/s |
| ipv6.icmpneighbor | InSolicits, OutSolicits, InAdvertisements, OutAdvertisements | messages/s |
| ipv6.icmptypes | InType1, InType128, InType129, InType136, OutType1, OutType128, OutType129, OutType133, OutType135, OutType143 | messages/s |
## Alerts
There are no alerts configured by default for this integration.
## Setup
### Prerequisites
No action required.
### Configuration
#### File
The configuration file name for this integration is `netdata.conf`.
Configuration for this specific integration is located in the `[plugin:freebsd:net.inet6.icmp6.stats]` section within that file.
The file format is a modified INI syntax. The general structure is:
```toml
[section1]
option 1 = some value
option 2 = some other value
[section2]
option 3 = some third value
```
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config netdata.conf
```
#### Options
<details><summary>Config options</summary>
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| icmp | Enable or disable ICMP metric. | | False |
| icmp redirects | Enable or disable ICMP redirects metric. | | False |
| icmp errors | Enable or disable ICMP errors metric. | | False |
| icmp echos | Enable or disable ICMP echos metric. | | False |
| icmp router | Enable or disable ICMP router metric. | | False |
| icmp neighbor | Enable or disable ICMP neighbor metric. | | False |
| icmp types | Enable or disable ICMP types metric. | | False |
</details>
#### Examples
There are no configuration examples.
View File

@ -0,0 +1,121 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/freebsd.plugin/integrations/net.inet6.ip6.stats.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/freebsd.plugin/metadata.yaml"
sidebar_label: "net.inet6.ip6.stats"
learn_status: "Published"
learn_rel_path: "Data Collection/FreeBSD"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# net.inet6.ip6.stats
Plugin: freebsd.plugin
Module: net.inet6.ip6.stats
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Collect information about IPv6 stats.
The plugin calls `sysctl` function to collect necessary data.
This collector is supported on all platforms.
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
### Default Behavior
#### Auto-Detection
This integration doesn't support auto-detection.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
The default configuration for this integration is not expected to impose a significant performance impact on the system.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per net.inet6.ip6.stats instance
These metrics show general information about IPv6 connections.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| ipv6.packets | received, sent, forwarded, delivers | packets/s |
| ipv6.fragsout | ok, failed, all | packets/s |
| ipv6.fragsin | ok, failed, timeout, all | packets/s |
| ipv6.errors | InDiscards, OutDiscards, InHdrErrors, InAddrErrors, InTruncatedPkts, InNoRoutes, OutNoRoutes | packets/s |
## Alerts
There are no alerts configured by default for this integration.
## Setup
### Prerequisites
No action required.
### Configuration
#### File
The configuration file name for this integration is `netdata.conf`.
Configuration for this specific integration is located in the `[plugin:freebsd:net.inet6.ip6.stats]` section within that file.
The file format is a modified INI syntax. The general structure is:
```toml
[section1]
option 1 = some value
option 2 = some other value
[section2]
option 3 = some third value
```
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config netdata.conf
```
#### Options
<details><summary>Config options</summary>
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| ipv6 packets | Enable or disable ipv6 packet metric. | | False |
| ipv6 fragments sent | Enable or disable ipv6 fragments sent metric. | | False |
| ipv6 fragments assembly | Enable or disable ipv6 fragments assembly metric. | | False |
| ipv6 errors | Enable or disable ipv6 errors metric. | | False |
</details>
#### Examples
There are no configuration examples.
View File

@ -0,0 +1,135 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/freebsd.plugin/integrations/net.isr.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/freebsd.plugin/metadata.yaml"
sidebar_label: "net.isr"
learn_status: "Published"
learn_rel_path: "Data Collection/FreeBSD"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# net.isr
Plugin: freebsd.plugin
Module: net.isr
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Collect information about system softnet stat.
The plugin calls `sysctl` function to collect necessary data.
This collector is supported on all platforms.
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
### Default Behavior
#### Auto-Detection
This integration doesn't support auto-detection.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
The default configuration for this integration is not expected to impose a significant performance impact on the system.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per net.isr instance
These metrics show overall softnet statistics.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| system.softnet_stat | dispatched, hybrid_dispatched, qdrops, queued | events/s |
### Per core
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| cpu.softnet_stat | dispatched, hybrid_dispatched, qdrops, queued | events/s |
## Alerts
The following alerts are available:
| Alert name | On metric | Description |
|:------------|:----------|:------------|
| [ 1min_netdev_backlog_exceeded ](https://github.com/netdata/netdata/blob/master/health/health.d/softnet.conf) | system.softnet_stat | average number of dropped packets in the last minute due to exceeded net.core.netdev_max_backlog |
| [ 1min_netdev_budget_ran_outs ](https://github.com/netdata/netdata/blob/master/health/health.d/softnet.conf) | system.softnet_stat | average number of times ksoftirq ran out of sysctl net.core.netdev_budget or net.core.netdev_budget_usecs with work remaining over the last minute (this can be a cause for dropped packets) |
| [ 10min_netisr_backlog_exceeded ](https://github.com/netdata/netdata/blob/master/health/health.d/softnet.conf) | system.softnet_stat | average number of drops in the last minute due to exceeded sysctl net.route.netisr_maxqlen (this can be a cause for dropped packets) |
## Setup
### Prerequisites
No action required.
### Configuration
#### File
The configuration file name for this integration is `netdata.conf`.
Configuration for this specific integration is located in the `[plugin:freebsd:net.isr]` section within that file.
The file format is a modified INI syntax. The general structure is:
```toml
[section1]
option 1 = some value
option 2 = some other value
[section2]
option 3 = some third value
```
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config netdata.conf
```
#### Options
<details><summary>Config options</summary>
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| netisr | Enable or disable the system-wide softnet stat metric. | | False |
| netisr per core | Enable or disable softnet stat metric per core. | | False |
</details>
#### Examples
There are no configuration examples.
View File

@ -0,0 +1,124 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/freebsd.plugin/integrations/system.ram.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/freebsd.plugin/metadata.yaml"
sidebar_label: "system.ram"
learn_status: "Published"
learn_rel_path: "Data Collection/FreeBSD"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# system.ram
Plugin: freebsd.plugin
Module: system.ram
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Show information about system memory usage.
The plugin calls `sysctl` function to collect necessary data.
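As a minimal C sketch, the memory chart can be assembled from the `vm.stats.vm` page counters (node names here follow the standard FreeBSD MIB and are assumptions about, not a copy of, the collector's code):

```c
#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdint.h>
#include <stdio.h>

/* Read a single u_int sysctl value; returns 0 on lookup failure. */
static unsigned int read_u32(const char *name) {
    unsigned int v = 0;
    size_t len = sizeof(v);
    sysctlbyname(name, &v, &len, NULL, 0);
    return v;
}

int main(void) {
    uintmax_t pagesize = read_u32("vm.stats.vm.v_page_size");
    printf("free:     %ju MiB\n", read_u32("vm.stats.vm.v_free_count")     * pagesize / (1024 * 1024));
    printf("active:   %ju MiB\n", read_u32("vm.stats.vm.v_active_count")   * pagesize / (1024 * 1024));
    printf("inactive: %ju MiB\n", read_u32("vm.stats.vm.v_inactive_count") * pagesize / (1024 * 1024));
    printf("wired:    %ju MiB\n", read_u32("vm.stats.vm.v_wire_count")     * pagesize / (1024 * 1024));
    return 0;
}
```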
This collector is supported on all platforms.
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
### Default Behavior
#### Auto-Detection
This integration doesn't support auto-detection.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
The default configuration for this integration is not expected to impose a significant performance impact on the system.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per system.ram instance
This metric shows RAM usage statistics.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| system.ram | free, active, inactive, wired, cache, laundry, buffers | MiB |
| mem.available | avail | MiB |
## Alerts
The following alerts are available:
| Alert name | On metric | Description |
|:------------|:----------|:------------|
| [ ram_in_use ](https://github.com/netdata/netdata/blob/master/health/health.d/ram.conf) | system.ram | system memory utilization |
| [ ram_available ](https://github.com/netdata/netdata/blob/master/health/health.d/ram.conf) | mem.available | percentage of estimated amount of RAM available for userspace processes, without causing swapping |
## Setup
### Prerequisites
No action required.
### Configuration
#### File
The configuration file name for this integration is `netdata.conf`.
Configuration for this specific integration is located in the `[plugin:freebsd]` section within that file.
The file format is a modified INI syntax. The general structure is:
```toml
[section1]
option 1 = some value
option 2 = some other value
[section2]
option 3 = some third value
```
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config netdata.conf
```
#### Options
<details><summary>Config options</summary>
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| system.ram | Enable or disable system RAM metric. | | False |
</details>
#### Examples
There are no configuration examples.
View File

@ -0,0 +1,115 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/freebsd.plugin/integrations/uptime.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/freebsd.plugin/metadata.yaml"
sidebar_label: "uptime"
learn_status: "Published"
learn_rel_path: "Data Collection/FreeBSD"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# uptime
Plugin: freebsd.plugin
Module: uptime
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Show how long the server has been up.
The plugin calls `clock_gettime` function to collect necessary data.
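As a minimal C sketch, FreeBSD exposes the time since boot directly through `clock_gettime(2)` with the `CLOCK_UPTIME` clock; this shows the mechanism, not the collector's exact code:

```c
#include <time.h>
#include <stdio.h>

int main(void) {
    struct timespec ts;
    /* CLOCK_UPTIME counts seconds since the system booted */
    if (clock_gettime(CLOCK_UPTIME, &ts) != 0)
        return 1;
    printf("uptime: %lld seconds\n", (long long)ts.tv_sec);
    return 0;
}
```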
This collector is supported on all platforms.
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
### Default Behavior
#### Auto-Detection
This integration doesn't support auto-detection.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
The default configuration for this integration is not expected to impose a significant performance impact on the system.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per uptime instance
How long the system has been running.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| system.uptime | uptime | seconds |
## Alerts
There are no alerts configured by default for this integration.
## Setup
### Prerequisites
No action required.
### Configuration
#### File
The configuration file name for this integration is `netdata.conf`.
Configuration for this specific integration is located in the `[plugin:freebsd]` section within that file.
The file format is a modified INI syntax. The general structure is:
```toml
[section1]
option 1 = some value
option 2 = some other value
[section2]
option 3 = some third value
```
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config netdata.conf
```
#### Options
<details><summary>Config options</summary>
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| vm.loadavg | Enable or disable load average metric. | | False |
</details>
#### Examples
There are no configuration examples.

View File

@ -0,0 +1,123 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/freebsd.plugin/integrations/vm.loadavg.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/freebsd.plugin/metadata.yaml"
sidebar_label: "vm.loadavg"
learn_status: "Published"
learn_rel_path: "Data Collection/FreeBSD"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# vm.loadavg
Plugin: freebsd.plugin
Module: vm.loadavg
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
System Load Average
The plugin calls the `sysctl` function to collect the necessary data.
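As an illustration of what such a read looks like, here is a sketch (an assumption, not the plugin's actual code) that queries FreeBSD's `vm.loadavg` MIB, which returns a `struct loadavg` of fixed-point values:

```c
/* Sketch: fetch the load averages via the vm.loadavg sysctl on FreeBSD.
 * The sysctl fills a struct loadavg with fixed-point values scaled by fscale. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/sysctl.h>
#include <sys/resource.h>   /* struct loadavg on FreeBSD */

int main(void) {
    struct loadavg la;
    size_t len = sizeof(la);

    if (sysctlbyname("vm.loadavg", &la, &len, NULL, 0) != 0) {
        perror("sysctlbyname(vm.loadavg)");
        return 1;
    }
    /* Convert the fixed-point values to the familiar floating-point form. */
    printf("load1=%.2f load5=%.2f load15=%.2f\n",
           (double)la.ldavg[0] / la.fscale,
           (double)la.ldavg[1] / la.fscale,
           (double)la.ldavg[2] / la.fscale);
    return 0;
}
```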
This collector is supported on all platforms.
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
### Default Behavior
#### Auto-Detection
This integration doesn't support auto-detection.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
The default configuration for this integration is not expected to impose a significant performance impact on the system.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per vm.loadavg instance
Monitors the number of threads running or waiting.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| system.load | load1, load5, load15 | load |
## Alerts
The following alerts are available:
| Alert name | On metric | Description |
|:------------|:----------|:------------|
| [ load_cpu_number ](https://github.com/netdata/netdata/blob/master/health/health.d/load.conf) | system.load | number of active CPU cores in the system |
| [ load_average_15 ](https://github.com/netdata/netdata/blob/master/health/health.d/load.conf) | system.load | system fifteen-minute load average |
| [ load_average_5 ](https://github.com/netdata/netdata/blob/master/health/health.d/load.conf) | system.load | system five-minute load average |
| [ load_average_1 ](https://github.com/netdata/netdata/blob/master/health/health.d/load.conf) | system.load | system one-minute load average |
## Setup
### Prerequisites
No action required.
### Configuration
#### File
The configuration file name for this integration is `netdata.conf`.
Configuration for this specific integration is located in the `[plugin:freebsd]` section within that file.
The file format is a modified INI syntax. The general structure is:
```toml
[section1]
option 1 = some value
option 2 = some other value
[section2]
option 3 = some third value
```
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config netdata.conf
```
#### Options
<details><summary>Config options</summary>
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| vm.loadavg | Enable or disable load average metric. | | False |
</details>
#### Examples
There are no configuration examples.

View File

@ -0,0 +1,115 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/freebsd.plugin/integrations/vm.stats.sys.v_intr.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/freebsd.plugin/metadata.yaml"
sidebar_label: "vm.stats.sys.v_intr"
learn_status: "Published"
learn_rel_path: "Data Collection/FreeBSD"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# vm.stats.sys.v_intr
Plugin: freebsd.plugin
Module: vm.stats.sys.v_intr
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Device interrupts
The plugin calls the `sysctl` function to collect the necessary data.
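Since `vm.stats.sys.v_intr` is a monotonically increasing counter, the per-second rate shown on the chart is derived from the difference between consecutive samples. A small sketch of that idea (an assumption, not the plugin's code):

```c
/* Sketch: sample the vm.stats.sys.v_intr counter twice, one second apart,
 * and report the difference as interrupts/s. */
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/sysctl.h>

static unsigned int read_v_intr(void) {
    unsigned int v = 0;
    size_t len = sizeof(v);

    if (sysctlbyname("vm.stats.sys.v_intr", &v, &len, NULL, 0) != 0)
        perror("sysctlbyname(vm.stats.sys.v_intr)");
    return v;
}

int main(void) {
    unsigned int before = read_v_intr();
    sleep(1);
    unsigned int after = read_v_intr();

    printf("device interrupts/s: %u\n", after - before);
    return 0;
}
```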
This collector is supported on all platforms.
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
### Default Behavior
#### Auto-Detection
This integration doesn't support auto-detection.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
The default configuration for this integration is not expected to impose a significant performance impact on the system.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per vm.stats.sys.v_intr instance
This metric shows the device interrupt frequency.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| system.dev_intr | interrupts | interrupts/s |
## Alerts
There are no alerts configured by default for this integration.
## Setup
### Prerequisites
No action required.
### Configuration
#### File
The configuration file name for this integration is `netdata.conf`.
Configuration for this specific integration is located in the `[plugin:freebsd]` section within that file.
The file format is a modified INI syntax. The general structure is:
```toml
[section1]
option 1 = some value
option 2 = some other value
[section2]
option 3 = some third value
```
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config netdata.conf
```
#### Options
<details><summary>Config options</summary>
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| vm.stats.sys.v_intr | Enable or disable device interrupts metric. | | False |
</details>
#### Examples
There are no configuration examples.

View File

@ -0,0 +1,115 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/freebsd.plugin/integrations/vm.stats.sys.v_soft.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/freebsd.plugin/metadata.yaml"
sidebar_label: "vm.stats.sys.v_soft"
learn_status: "Published"
learn_rel_path: "Data Collection/FreeBSD"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# vm.stats.sys.v_soft
Plugin: freebsd.plugin
Module: vm.stats.sys.v_soft
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Software Interrupt
The plugin reads the `vm.stats.sys.v_soft` value via `sysctl` to collect the necessary data.
This collector is supported on all platforms.
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
### Default Behavior
#### Auto-Detection
This integration doesn't support auto-detection.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
The default configuration for this integration is not expected to impose a significant performance impact on the system.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per vm.stats.sys.v_soft instance
This metric shows the software interrupt frequency.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| system.soft_intr | interrupts | interrupts/s |
## Alerts
There are no alerts configured by default for this integration.
## Setup
### Prerequisites
No action required.
### Configuration
#### File
The configuration file name for this integration is `netdata.conf`.
Configuration for this specific integration is located in the `[plugin:freebsd]` section within that file.
The file format is a modified INI syntax. The general structure is:
```toml
[section1]
option 1 = some value
option 2 = some other value
[section2]
option 3 = some third value
```
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config netdata.conf
```
#### Options
<details><summary>Config options</summary>
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| vm.stats.sys.v_soft | Enable or disable the software interrupts metric. | | False |
</details>
#### Examples
There are no configuration examples.
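For illustration, a hypothetical `netdata.conf` snippet disabling this metric might look like the following (it assumes the usual `yes`/`no` boolean form used throughout `netdata.conf`):

```yaml
[plugin:freebsd]
  vm.stats.sys.v_soft = no
```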

View File

@ -0,0 +1,116 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/freebsd.plugin/integrations/vm.stats.sys.v_swtch.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/freebsd.plugin/metadata.yaml"
sidebar_label: "vm.stats.sys.v_swtch"
learn_status: "Published"
learn_rel_path: "Data Collection/FreeBSD"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# vm.stats.sys.v_swtch
Plugin: freebsd.plugin
Module: vm.stats.sys.v_swtch
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
CPU context switch
The plugin calls the `sysctl` function to collect the necessary data.
This collector is supported on all platforms.
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
### Default Behavior
#### Auto-Detection
This integration doesn't support auto-detection.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
The default configuration for this integration is not expected to impose a significant performance impact on the system.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per vm.stats.sys.v_swtch instance
This metric counts the number of context switches happening on the host.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| system.ctxt | switches | context switches/s |
| system.forks | started | processes/s |
## Alerts
There are no alerts configured by default for this integration.
## Setup
### Prerequisites
No action required.
### Configuration
#### File
The configuration file name for this integration is `netdata.conf`.
Configuration for this specific integration is located in the `[plugin:freebsd]` section within that file.
The file format is a modified INI syntax. The general structure is:
```toml
[section1]
option 1 = some value
option 2 = some other value
[section2]
option 3 = some third value
```
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config netdata.conf
```
#### Options
<details><summary>Config options</summary>
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| vm.stats.sys.v_swtch | Enable or disable CPU context switch metric. | | False |
</details>
#### Examples
There are no configuration examples.

View File

@ -0,0 +1,115 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/freebsd.plugin/integrations/vm.stats.vm.v_pgfaults.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/freebsd.plugin/metadata.yaml"
sidebar_label: "vm.stats.vm.v_pgfaults"
learn_status: "Published"
learn_rel_path: "Data Collection/FreeBSD"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# vm.stats.vm.v_pgfaults
Plugin: freebsd.plugin
Module: vm.stats.vm.v_pgfaults
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Collect memory page faults events.
The plugin calls the `sysctl` function to collect the necessary data.
This collector is supported on all platforms.
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
### Default Behavior
#### Auto-Detection
This integration doesn't support auto-detection.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
The default configuration for this integration is not expected to impose a significant performance impact on the system.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per vm.stats.vm.v_pgfaults instance
The number of page faults that happened on the host.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| mem.pgfaults | memory, io_requiring, cow, cow_optimized, in_transit | page faults/s |
## Alerts
There are no alerts configured by default for this integration.
## Setup
### Prerequisites
No action required.
### Configuration
#### File
The configuration file name for this integration is `netdata.conf`.
Configuration for this specific integration is located in the `[plugin:freebsd]` section within that file.
The file format is a modified INI syntax. The general structure is:
```toml
[section1]
option 1 = some value
option 2 = some other value
[section2]
option 3 = some third value
```
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config netdata.conf
```
#### Options
<details><summary>Config options</summary>
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| vm.stats.vm.v_pgfaults | Enable or disable Memory page fault metric. | | False |
</details>
#### Examples
There are no configuration examples.

View File

@ -0,0 +1,120 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/freebsd.plugin/integrations/vm.stats.vm.v_swappgs.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/freebsd.plugin/metadata.yaml"
sidebar_label: "vm.stats.vm.v_swappgs"
learn_status: "Published"
learn_rel_path: "Data Collection/FreeBSD"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# vm.stats.vm.v_swappgs
Plugin: freebsd.plugin
Module: vm.stats.vm.v_swappgs
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
This metric shows the amount of data read from and written to SWAP.
This collector is supported on all platforms.
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
### Default Behavior
#### Auto-Detection
This integration doesn't support auto-detection.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
The default configuration for this integration is not expected to impose a significant performance impact on the system.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per vm.stats.vm.v_swappgs instance
This metric shows events happening on SWAP.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| mem.swapio | io, out | KiB/s |
## Alerts
The following alerts are available:
| Alert name | On metric | Description |
|:------------|:----------|:------------|
| [ 30min_ram_swapped_out ](https://github.com/netdata/netdata/blob/master/health/health.d/swap.conf) | mem.swapio | percentage of the system RAM swapped in the last 30 minutes |
## Setup
### Prerequisites
No action required.
### Configuration
#### File
The configuration file name for this integration is `netdata.conf`.
Configuration for this specific integration is located in the `[plugin:freebsd]` section within that file.
The file format is a modified INI syntax. The general structure is:
```toml
[section1]
option 1 = some value
option 2 = some other value
[section2]
option 3 = some third value
```
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config netdata.conf
```
#### Options
<details><summary>Config options</summary>
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| vm.stats.vm.v_swappgs | Enable or disable information about the SWAP I/O metric. | | False |
</details>
#### Examples
There are no configuration examples.

View File

@ -0,0 +1,120 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/freebsd.plugin/integrations/vm.swap_info.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/freebsd.plugin/metadata.yaml"
sidebar_label: "vm.swap_info"
learn_status: "Published"
learn_rel_path: "Data Collection/FreeBSD"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# vm.swap_info
Plugin: freebsd.plugin
Module: vm.swap_info
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Collect information about SWAP memory.
The plugin calls the `sysctlnametomib` function to collect the necessary data.
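A rough sketch of that pattern (an assumption, not the plugin's actual code): `sysctlnametomib` resolves `vm.swap_info` once, and each swap device is then queried by index, returning a `struct xswdev`:

```c
/* Sketch: enumerate swap devices via the vm.swap_info MIB.
 * Querying vm.swap_info.N returns a struct xswdev until N runs past
 * the last device. */
#include <stdio.h>
#include <sys/param.h>
#include <sys/sysctl.h>
#include <vm/vm_param.h>   /* struct xswdev */

int main(void) {
    int mib[3];
    size_t mibsize = 2;

    if (sysctlnametomib("vm.swap_info", mib, &mibsize) != 0) {
        perror("sysctlnametomib");
        return 1;
    }
    for (int n = 0; ; n++) {
        struct xswdev xsw;
        size_t len = sizeof(xsw);

        mib[2] = n;
        if (sysctl(mib, 3, &xsw, &len, NULL, 0) != 0)
            break;  /* no more swap devices */
        printf("swap device %d: %d used / %d total blocks\n",
               n, xsw.xsw_used, xsw.xsw_nblks);
    }
    return 0;
}
```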
This collector is supported on all platforms.
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
### Default Behavior
#### Auto-Detection
This integration doesn't support auto-detection.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
The default configuration for this integration is not expected to impose a significant performance impact on the system.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per vm.swap_info instance
This metric shows the SWAP usage.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| mem.swap | free, used | MiB |
## Alerts
The following alerts are available:
| Alert name | On metric | Description |
|:------------|:----------|:------------|
| [ used_swap ](https://github.com/netdata/netdata/blob/master/health/health.d/swap.conf) | mem.swap | swap memory utilization |
## Setup
### Prerequisites
No action required.
### Configuration
#### File
The configuration file name for this integration is `netdata.conf`.
Configuration for this specific integration is located in the `[plugin:freebsd]` section within that file.
The file format is a modified INI syntax. The general structure is:
```toml
[section1]
option 1 = some value
option 2 = some other value
[section2]
option 3 = some third value
```
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config netdata.conf
```
#### Options
<details><summary>Config options</summary>
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| vm.swap_info | Enable or disable SWAP metrics. | | False |
</details>
#### Examples
There are no configuration examples.

View File

@ -0,0 +1,124 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/freebsd.plugin/integrations/vm.vmtotal.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/freebsd.plugin/metadata.yaml"
sidebar_label: "vm.vmtotal"
learn_status: "Published"
learn_rel_path: "Data Collection/FreeBSD"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# vm.vmtotal
Plugin: freebsd.plugin
Module: vm.vmtotal
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Collect Virtual Memory information from host.
The plugin calls the `sysctl` function to collect data.
This collector is supported on all platforms.
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
### Default Behavior
#### Auto-Detection
This integration doesn't support auto-detection.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
The default configuration for this integration is not expected to impose a significant performance impact on the system.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per vm.vmtotal instance
These metrics give an overall view of the processes running on the host.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| system.active_processes | active | processes |
| system.processes | running, blocked | processes |
| mem.real | used | MiB |
## Alerts
The following alerts are available:
| Alert name | On metric | Description |
|:------------|:----------|:------------|
| [ active_processes ](https://github.com/netdata/netdata/blob/master/health/health.d/processes.conf) | system.active_processes | system process IDs (PID) space utilization |
## Setup
### Prerequisites
No action required.
### Configuration
#### File
The configuration file name for this integration is `netdata.conf`.
Configuration for this specific integration is located in the `[plugin:freebsd:vm.vmtotal]` section within that file.
The file format is a modified INI syntax. The general structure is:
```toml
[section1]
option 1 = some value
option 2 = some other value
[section2]
option 3 = some third value
```
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config netdata.conf
```
#### Options
<details><summary>Config options</summary>
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| enable total processes | Enable or disable the number of active processes metric. | | False |
| processes running | Show the number of processes running or blocked. | | False |
| real memory | Memory used on the host. | | False |
</details>
#### Examples
There are no configuration examples.
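For illustration, a hypothetical snippet enabling all three metrics might look like this (assuming the usual `yes`/`no` boolean form used throughout `netdata.conf`):

```yaml
[plugin:freebsd:vm.vmtotal]
  enable total processes = yes
  processes running = yes
  real memory = yes
```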

View File

@ -0,0 +1,147 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/freebsd.plugin/integrations/zfs.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/freebsd.plugin/metadata.yaml"
sidebar_label: "zfs"
learn_status: "Published"
learn_rel_path: "Data Collection/FreeBSD"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# zfs
Plugin: freebsd.plugin
Module: zfs
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Collect metrics for the ZFS filesystem.
The plugin uses the `sysctl` function to collect the necessary data.
This collector is supported on all platforms.
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
### Default Behavior
#### Auto-Detection
This integration doesn't support auto-detection.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
The default configuration for this integration is not expected to impose a significant performance impact on the system.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per zfs instance
These metrics show detailed information about the ZFS filesystem.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| zfs.arc_size | arcsz, target, min, max | MiB |
| zfs.l2_size | actual, size | MiB |
| zfs.reads | arc, demand, prefetch, metadata, l2 | reads/s |
| zfs.bytes | read, write | KiB/s |
| zfs.hits | hits, misses | percentage |
| zfs.hits_rate | hits, misses | events/s |
| zfs.dhits | hits, misses | percentage |
| zfs.dhits_rate | hits, misses | events/s |
| zfs.phits | hits, misses | percentage |
| zfs.phits_rate | hits, misses | events/s |
| zfs.mhits | hits, misses | percentage |
| zfs.mhits_rate | hits, misses | events/s |
| zfs.l2hits | hits, misses | percentage |
| zfs.l2hits_rate | hits, misses | events/s |
| zfs.list_hits | mfu, mfu_ghost, mru, mru_ghost | hits/s |
| zfs.arc_size_breakdown | recent, frequent | percentage |
| zfs.memory_ops | throttled | operations/s |
| zfs.important_ops | evict_skip, deleted, mutex_miss, hash_collisions | operations/s |
| zfs.actual_hits | hits, misses | percentage |
| zfs.actual_hits_rate | hits, misses | events/s |
| zfs.demand_data_hits | hits, misses | percentage |
| zfs.demand_data_hits_rate | hits, misses | events/s |
| zfs.prefetch_data_hits | hits, misses | percentage |
| zfs.prefetch_data_hits_rate | hits, misses | events/s |
| zfs.hash_elements | current, max | elements |
| zfs.hash_chains | current, max | chains |
| zfs.trim_bytes | TRIMmed | bytes |
| zfs.trim_requests | successful, failed, unsupported | requests |
## Alerts
The following alerts are available:
| Alert name | On metric | Description |
|:------------|:----------|:------------|
| [ zfs_memory_throttle ](https://github.com/netdata/netdata/blob/master/health/health.d/zfs.conf) | zfs.memory_ops | number of times ZFS had to limit the ARC growth in the last 10 minutes |
## Setup
### Prerequisites
No action required.
### Configuration
#### File
The configuration file name for this integration is `netdata.conf`.
Configuration for this specific integration is located in the `[plugin:freebsd:zfs_arcstats]` section within that file.
The file format is a modified INI syntax. The general structure is:
```toml
[section1]
option 1 = some value
option 2 = some other value
[section2]
option 3 = some third value
```
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config netdata.conf
```
#### Options
<details><summary>Config options</summary>
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| show zero charts | Enable or disable showing charts whose metrics are zero. | | False |
</details>
#### Examples
There are no configuration examples.
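For illustration, a hypothetical snippet enabling zero-value charts might look like this (assuming the usual `yes`/`no` boolean form used throughout `netdata.conf`):

```yaml
[plugin:freebsd:zfs_arcstats]
  show zero charts = yes
```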

View File

@ -1,287 +0,0 @@
<!--
title: "freeipmi.plugin"
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/freeipmi.plugin/README.md"
sidebar_label: "freeipmi.plugin"
learn_status: "Published"
learn_topic_type: "References"
learn_rel_path: "Integrations/Monitor/Devices"
-->
# freeipmi.plugin
Netdata has a [freeipmi](https://www.gnu.org/software/freeipmi/) plugin.
> FreeIPMI provides in-band and out-of-band IPMI software based on the IPMI v1.5/2.0 specification. The IPMI
> specification defines a set of interfaces for platform management and is implemented by a number vendors for system
> management. The features of IPMI that most users will be interested in are sensor monitoring, system event monitoring,
> power control, and serial-over-LAN (SOL).
## Installing the FreeIPMI plugin
When using our official DEB/RPM packages, the FreeIPMI plugin is included in a separate package named
`netdata-plugin-freeipmi` which needs to be manually installed using your system package manager. It is not
installed automatically due to the large number of dependencies it requires.
When using a static build of Netdata, the FreeIPMI plugin will be included and installed automatically, though
you will still need to have FreeIPMI installed on your system to be able to use the plugin.
When using a local build of Netdata, you need to ensure that the FreeIPMI development packages (typically
called `libipmimonitoring-dev`, `libipmimonitoring-devel`, or `freeipmi-devel`) are installed when building Netdata.
### Special Considerations
Accessing IPMI requires root access, so the FreeIPMI plugin is automatically installed setuid root.
FreeIPMI does not work correctly on IBM POWER systems, thus Netdatas FreeIPMI plugin is not usable on such systems.
If you have not previously used IPMI on your system, you will probably need to run the `ipmimonitoring` command as root
to initialize IPMI settings so that the Netdata plugin works correctly. It should return information about available
sensors on the system.
In some distributions `libipmimonitoring.pc` is located in a non-standard directory, which
can cause building the plugin to fail when building Netdata from source. In that case you
should find the file and link it to the standard pkg-config directory. Usually, running `sudo ln -s
/usr/lib/$(uname -m)-linux-gnu/pkgconfig/libipmimonitoring.pc /usr/lib/pkgconfig/libipmimonitoring.pc`
resolves this issue.
## Metrics
The plugin does a speed test when it starts, to find out the duration needed by the IPMI processor to respond. Depending
on the speed of your IPMI processor, charts may need several seconds to show up on the dashboard.
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### global
These metrics refer to the monitored host.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|----------|:----------:|:------:|
| ipmi.sel | events | events |
### sensor
These metrics refer to the sensor.
Labels:
| Label | Description |
|-----------|-----------------------------------------------------------------------------------------------------------------|
| sensor | Sensor name. Same value as the "Name" column in the `ipmi-sensors` output. |
| type | Sensor type. Same value as the "Type" column in the `ipmi-sensors` output. |
| component | General sensor component. Identified by Netdata based on sensor name and type (e.g. System, Processor, Memory). |
Metrics:
| Metric | Dimensions | Unit |
|-----------------------------|:-----------------------------------:|:----------:|
| ipmi.sensor_state | nominal, critical, warning, unknown | state |
| ipmi.sensor_temperature_c | temperature | Celsius |
| ipmi.sensor_temperature_f | temperature | Fahrenheit |
| ipmi.sensor_voltage | voltage | Volts |
| ipmi.sensor_ampere | ampere | Amps |
| ipmi.sensor_fan_speed | rotations | RPM |
| ipmi.sensor_power | power | Watts |
| ipmi.sensor_reading_percent | percentage | % |
## Alerts
There are 2 alerts:
- The sensor is in a warning or critical state.
- System Event Log (SEL) is non-empty.
## Configuration
The plugin supports a few options. To see them, run:
```text
# ./freeipmi.plugin --help
netdata freeipmi.plugin v1.40.0-137-gf162c25bd
Copyright (C) 2023 Netdata Inc.
Released under GNU General Public License v3 or later.
All rights reserved.
This program is a data collector plugin for netdata.
Available command line options:
SECONDS data collection frequency
minimum: 5
debug enable verbose output
default: disabled
sel
no-sel enable/disable SEL collection
default: enabled
reread-sdr-cache re-read SDR cache on every iteration
default: disabled
interpret-oem-data attempt to parse OEM data
default: disabled
assume-system-event-record
tread illegal SEL events records as normal
default: disabled
ignore-non-interpretable-sensors
do not read sensors that cannot be interpreted
default: disabled
bridge-sensors bridge sensors not owned by the BMC
default: disabled
shared-sensors enable shared sensors, if found
default: disabled
no-discrete-reading do not read sensors that their event/reading type code is invalid
default: enabled
ignore-scanning-disabled
Ignore the scanning bit and read sensors no matter what
default: disabled
assume-bmc-owner assume the BMC is the sensor owner no matter what
(usually bridging is required too)
default: disabled
hostname HOST
username USER
password PASS connect to remote IPMI host
default: local IPMI processor
no-auth-code-check
noauthcodecheck don't check the authentication codes returned
driver-type IPMIDRIVER
Specify the driver type to use instead of doing an auto selection.
The currently available outofband drivers are LAN and LAN_2_0,
which perform IPMI 1.5 and IPMI 2.0 respectively.
The currently available inband drivers are KCS, SSIF, OPENIPMI and SUNBMC.
sdr-cache-dir PATH directory for SDR cache files
default: /tmp
sensor-config-file FILE filename to read sensor configuration
default: system default
sel-config-file FILE filename to read sel configuration
default: system default
ignore N1,N2,N3,... sensor IDs to ignore
default: none
ignore-status N1,N2,N3,... sensor IDs to ignore status (nominal/warning/critical)
default: none
-v
-V
version print version and exit
Linux kernel module for IPMI is CPU hungry.
On Linux run this to lower kipmiN CPU utilization:
# echo 10 > /sys/module/ipmi_si/parameters/kipmid_max_busy_us
or create: /etc/modprobe.d/ipmi.conf with these contents:
options ipmi_si kipmid_max_busy_us=10
For more information:
https://github.com/netdata/netdata/tree/master/collectors/freeipmi.plugin
```
You can set these options in `/etc/netdata/netdata.conf` at this section:
```
[plugin:freeipmi]
update every = 5
command options =
```
Append to `command options =` the settings you need. The minimum `update every` is 5 (enforced internally by the
plugin). IPMI is slow and CPU hungry. So, once every 5 seconds is pretty acceptable.
## Ignoring specific sensors
Specific sensor IDs can be excluded from freeipmi tools by editing `/etc/freeipmi/freeipmi.conf` and setting the IDs to
be ignored at `ipmi-sensors-exclude-record-ids`. **However this file is not used by `libipmimonitoring`** (the library
used by Netdata's `freeipmi.plugin`).
So, `freeipmi.plugin` supports the option `ignore` that accepts a comma separated list of sensor IDs to ignore. To
configure it, edit `/etc/netdata/netdata.conf` and set:
```
[plugin:freeipmi]
command options = ignore 1,2,3,4,...
```
To find the IDs to ignore, run the command `ipmimonitoring`. The first column is the wanted ID:
```
ID | Name | Type | State | Reading | Units | Event
1 | Ambient Temp | Temperature | Nominal | 26.00 | C | 'OK'
2 | Altitude | Other Units Based Sensor | Nominal | 480.00 | ft | 'OK'
3 | Avg Power | Current | Nominal | 100.00 | W | 'OK'
4 | Planar 3.3V | Voltage | Nominal | 3.29 | V | 'OK'
5 | Planar 5V | Voltage | Nominal | 4.90 | V | 'OK'
6 | Planar 12V | Voltage | Nominal | 11.99 | V | 'OK'
7 | Planar VBAT | Voltage | Nominal | 2.95 | V | 'OK'
8 | Fan 1A Tach | Fan | Nominal | 3132.00 | RPM | 'OK'
9 | Fan 1B Tach | Fan | Nominal | 2150.00 | RPM | 'OK'
10 | Fan 2A Tach | Fan | Nominal | 2494.00 | RPM | 'OK'
11 | Fan 2B Tach | Fan | Nominal | 1825.00 | RPM | 'OK'
12 | Fan 3A Tach | Fan | Nominal | 3538.00 | RPM | 'OK'
13 | Fan 3B Tach | Fan | Nominal | 2625.00 | RPM | 'OK'
14 | Fan 1 | Entity Presence | Nominal | N/A | N/A | 'Entity Present'
15 | Fan 2 | Entity Presence | Nominal | N/A | N/A | 'Entity Present'
...
```
## Debugging
You can run the plugin by hand:
```sh
# become user netdata
sudo su -s /bin/sh netdata
# run the plugin in debug mode
/usr/libexec/netdata/plugins.d/freeipmi.plugin 5 debug
```
You will get verbose output on what the plugin does.
## kipmi0 CPU usage
There have been reports that kipmi is showing increased CPU when the IPMI is queried. To lower the CPU consumption of
the system you can issue this command:
```sh
echo 10 > /sys/module/ipmi_si/parameters/kipmid_max_busy_us
```
You can also permanently set the above setting by creating the file `/etc/modprobe.d/ipmi.conf` with this content:
```sh
# prevent kipmi from consuming 100% CPU
options ipmi_si kipmid_max_busy_us=10
```
This instructs the kernel IPMI module to pause for a tick between checking IPMI. Querying IPMI will be a lot slower
now (e.g. several seconds for IPMI to respond), but `kipmi` will not use any noticeable CPU. You can also use a higher
number (this is the number of microseconds to poll IPMI for a response, before waiting for a tick).
If you need to disable IPMI for Netdata, edit `/etc/netdata/netdata.conf` and set:
```
[plugins]
freeipmi = no
```

View File

@ -0,0 +1 @@
integrations/intelligent_platform_management_interface_ipmi.md

View File

@ -0,0 +1,270 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/freeipmi.plugin/README.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/freeipmi.plugin/metadata.yaml"
sidebar_label: "Intelligent Platform Management Interface (IPMI)"
learn_status: "Published"
learn_rel_path: "Data Collection/Hardware Devices and Sensors"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# Intelligent Platform Management Interface (IPMI)
Plugin: freeipmi.plugin
Module: freeipmi
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
"Monitor enterprise server sensor readings, event log entries, and hardware statuses to ensure reliable server operations."
The plugin uses the open source IPMImonitoring library to communicate with sensors.
This collector is supported on all platforms.
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
The plugin needs to run setuid root.
### Default Behavior
#### Auto-Detection
This integration doesn't support auto-detection.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
The Linux kernel IPMI module can create significant overhead.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
The plugin does a speed test when it starts, to find out the duration needed by the IPMI processor to respond. Depending on the speed of your IPMI processor, charts may need several seconds to show up on the dashboard.
### Per Intelligent Platform Management Interface (IPMI) instance
These metrics refer to the entire monitored application.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| ipmi.sel | events | events |
### Per sensor
Labels:
| Label | Description |
|:-----------|:----------------|
| sensor | The sensor name |
| type | One of 45 recognized sensor types (e.g. Battery, Voltage). |
| component | One of 25 recognized components (e.g. Processor, Peripheral). |
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| ipmi.sensor_state | nominal, critical, warning, unknown | state |
| ipmi.sensor_temperature_c | temperature | Celsius |
| ipmi.sensor_temperature_f | temperature | Fahrenheit |
| ipmi.sensor_voltage | voltage | Volts |
| ipmi.sensor_ampere | ampere | Amps |
| ipmi.sensor_fan_speed | rotations | RPM |
| ipmi.sensor_power | power | Watts |
| ipmi.sensor_reading_percent | percentage | % |
## Alerts
The following alerts are available:
| Alert name | On metric | Description |
|:------------|:----------|:------------|
| [ ipmi_sensor_state ](https://github.com/netdata/netdata/blob/master/health/health.d/ipmi.conf) | ipmi.sensor_state | IPMI sensor ${label:sensor} (${label:component}) state |
## Setup
### Prerequisites
#### Install freeipmi.plugin
When using our official DEB/RPM packages, the FreeIPMI plugin is included in a separate package named `netdata-plugin-freeipmi` which needs to be manually installed using your system package manager. It is not installed automatically due to the large number of dependencies it requires.
When using a static build of Netdata, the FreeIPMI plugin will be included and installed automatically, though you will still need to have FreeIPMI installed on your system to be able to use the plugin.
When using a local build of Netdata, you need to ensure that the FreeIPMI development packages (typically called `libipmimonitoring-dev`, `libipmimonitoring-devel`, or `freeipmi-devel`) are installed when building Netdata.
#### Preliminary actions
If you have not previously used IPMI on your system, you will probably need to run the `ipmimonitoring` command as root
to initialize IPMI settings so that the Netdata plugin works correctly. It should return information about available sensors on the system.
### Configuration
#### File
The configuration file name for this integration is `netdata.conf`.
Configuration for this specific integration is located in the `[plugin:freeipmi]` section within that file.
The file format is a modified INI syntax. The general structure is:
```toml
[section1]
option 1 = some value
option 2 = some other value
[section2]
option 3 = some third value
```
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config netdata.conf
```
#### Options
The configuration is set using command line options:
```
# netdata.conf
[plugin:freeipmi]
command options = opt1 opt2 ... optN
```
To display a help message listing the available command line options:
```bash
/usr/libexec/netdata/plugins.d/freeipmi.plugin --help
```
<details><summary>Command options</summary>
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| SECONDS | Data collection frequency. | | False |
| debug | Enable verbose output. | | False |
| no-sel | Disable System Event Log (SEL) collection. | | False |
| reread-sdr-cache | Re-read SDR cache on every iteration. | | False |
| interpret-oem-data | Attempt to parse OEM data. | | False |
| assume-system-event-record | Treat illegal SEL event records as normal. | | False |
| ignore-non-interpretable-sensors | Do not read sensors that cannot be interpreted. | | False |
| bridge-sensors | Bridge sensors not owned by the BMC. | | False |
| shared-sensors | Enable shared sensors if found. | | False |
| no-discrete-reading | Do not read sensors if their event/reading type code is invalid. | | False |
| ignore-scanning-disabled | Ignore the scanning bit and read sensors no matter what. | | False |
| assume-bmc-owner | Assume the BMC is the sensor owner no matter what (usually bridging is required too). | | False |
| hostname HOST | Remote IPMI hostname or IP address. | | False |
| username USER | Username that will be used when connecting to the remote host. | | False |
| password PASS | Password that will be used when connecting to the remote host. | | False |
| noauthcodecheck / no-auth-code-check | Don't check the authentication codes returned. | | False |
| driver-type IPMIDRIVER | Specify the driver type to use instead of doing an auto selection. The currently available outofband drivers are LAN and LAN_2_0, which perform IPMI 1.5 and IPMI 2.0 respectively. The currently available inband drivers are KCS, SSIF, OPENIPMI and SUNBMC. | | False |
| sdr-cache-dir PATH | SDR cache files directory. | | False |
| sensor-config-file FILE | Sensors configuration filename. | | False |
| sel-config-file FILE | SEL configuration filename. | | False |
| ignore N1,N2,N3,... | Sensor IDs to ignore. | | False |
| ignore-status N1,N2,N3,... | Sensor IDs to ignore status (nominal/warning/critical). | | False |
| -v | Print version and exit. | | False |
| --help | Print usage message and exit. | | False |
</details>
#### Examples
##### Decrease data collection frequency
Basic example decreasing data collection frequency. The minimum `update every` is 5 (enforced internally by the plugin). IPMI is slow and CPU hungry. So, once every 5 seconds is pretty acceptable.
```yaml
[plugin:freeipmi]
update every = 10
```
##### Disable SEL collection
Append to `command options =` the options you need.
<details><summary>Config</summary>
```yaml
[plugin:freeipmi]
command options = no-sel
```
</details>
##### Ignore specific sensors
Specific sensor IDs can be excluded from freeipmi tools by editing `/etc/freeipmi/freeipmi.conf` and setting the IDs to be ignored at `ipmi-sensors-exclude-record-ids`.
**However this file is not used by `libipmimonitoring`** (the library used by Netdata's `freeipmi.plugin`).
To find the IDs to ignore, run the command `ipmimonitoring`. The first column is the wanted ID:
```text
ID | Name | Type | State | Reading | Units | Event
1 | Ambient Temp | Temperature | Nominal | 26.00 | C | 'OK'
2 | Altitude | Other Units Based Sensor | Nominal | 480.00 | ft | 'OK'
3 | Avg Power | Current | Nominal | 100.00 | W | 'OK'
4 | Planar 3.3V | Voltage | Nominal | 3.29 | V | 'OK'
5 | Planar 5V | Voltage | Nominal | 4.90 | V | 'OK'
6 | Planar 12V | Voltage | Nominal | 11.99 | V | 'OK'
7 | Planar VBAT | Voltage | Nominal | 2.95 | V | 'OK'
8 | Fan 1A Tach | Fan | Nominal | 3132.00 | RPM | 'OK'
9 | Fan 1B Tach | Fan | Nominal | 2150.00 | RPM | 'OK'
10 | Fan 2A Tach | Fan | Nominal | 2494.00 | RPM | 'OK'
11 | Fan 2B Tach | Fan | Nominal | 1825.00 | RPM | 'OK'
12 | Fan 3A Tach | Fan | Nominal | 3538.00 | RPM | 'OK'
13 | Fan 3B Tach | Fan | Nominal | 2625.00 | RPM | 'OK'
14 | Fan 1 | Entity Presence | Nominal | N/A | N/A | 'Entity Present'
15 | Fan 2 | Entity Presence | Nominal | N/A | N/A | 'Entity Present'
...
```
`freeipmi.plugin` supports the option `ignore`, which accepts a comma-separated list of sensor IDs to ignore. To configure it, set in `netdata.conf`:
<details><summary>Config</summary>
```yaml
[plugin:freeipmi]
command options = ignore 1,2,3,4,...
```
</details>
## Troubleshooting
### Debug Mode
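You can run the plugin by hand to get verbose output about everything it does:

```bash
# become the netdata user
sudo su -s /bin/sh netdata

# run the plugin in debug mode
/usr/libexec/netdata/plugins.d/freeipmi.plugin 5 debug
```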
### kipmi0 CPU usage
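There have been reports that kipmi shows increased CPU usage when IPMI is queried. To lower the CPU consumption of the system you can issue this command:

```sh
echo 10 > /sys/module/ipmi_si/parameters/kipmid_max_busy_us
```

You can also apply the setting permanently by creating the file `/etc/modprobe.d/ipmi.conf` with this content:

```sh
# prevent kipmi from consuming 100% CPU
options ipmi_si kipmid_max_busy_us=10
```

This instructs the kernel IPMI module to pause for a tick between IPMI checks. Querying IPMI will be slower, but `kipmi` will not use any noticeable CPU.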

View File

@ -1,36 +0,0 @@
<!--
title: "idlejitter.plugin"
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/idlejitter.plugin/README.md"
sidebar_label: "idlejitter.plugin"
learn_status: "Published"
learn_topic_type: "References"
learn_rel_path: "Integrations/Monitor/QoS"
-->
# idlejitter.plugin
Idle jitter is a measure of delays in timing for user processes caused by scheduling limitations.
## How Netdata measures idle jitter
A thread is spawned that requests to sleep for 20000 microseconds (20ms).
When the system wakes it up, it measures how many microseconds have passed.
The difference between the requested and the actual duration of the sleep, is the idle jitter.
This is done at most 50 times per second, to ensure we have a good average.
This number is useful:
- In multimedia-streaming environments such as VoIP gateways, where the CPU jitter can affect the quality of the service.
- On time servers and other systems that require very precise timing, where CPU jitter can actively interfere with timing precision.
- On gaming systems, where CPU jitter can cause frame drops and stuttering.
- In cloud infrastructure that can pause the VM or container for a small duration to perform operations at the host.
## Charts
idlejitter.plugin generates the idlejitter chart which measures CPU idle jitter in milliseconds lost per second.
## Configuration
This chart is available without any configuration.

View File

@ -0,0 +1 @@
integrations/idle_os_jitter.md

View File

@ -0,0 +1,114 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/idlejitter.plugin/README.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/idlejitter.plugin/metadata.yaml"
sidebar_label: "Idle OS Jitter"
learn_status: "Published"
learn_rel_path: "Data Collection/Synthetic Checks"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# Idle OS Jitter
Plugin: idlejitter.plugin
Module: idlejitter.plugin
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Monitor delays in timing for user processes caused by scheduling limitations to optimize the system to run latency sensitive applications with minimal jitter, improving consistency and quality of service.
A thread is spawned that requests to sleep for a fixed amount of time. When the system wakes it up, it measures how many microseconds have passed. The difference between the requested and the actual duration of the sleep is the idle jitter. This is done dozens of times per second to ensure a representative sample.
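A compilable sketch of that measurement, reduced to a single iteration (an illustration under assumptions, not the plugin's actual code; the 20 ms request mirrors the sleep interval this plugin has historically used):

```c
/* Sketch: request a fixed 20 ms sleep, then measure how long the
 * sleep actually took. The overshoot is the idle jitter. */
#include <stdio.h>
#include <time.h>

static long long now_us(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (long long)ts.tv_sec * 1000000LL + ts.tv_nsec / 1000;
}

int main(void) {
    const long long requested_us = 20000;                    /* 20 ms */
    struct timespec req = { 0, (long)(requested_us * 1000) };

    long long start = now_us();
    nanosleep(&req, NULL);
    long long actual_us = now_us() - start;

    printf("idle jitter: %lld microseconds\n", actual_us - requested_us);
    return 0;
}
```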
This collector is supported on all platforms.
This collector only supports collecting metrics from a single instance of this integration.
### Default Behavior
#### Auto-Detection
This integration will run by default on all supported systems.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
The default configuration for this integration is not expected to impose a significant performance impact on the system.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per Idle OS Jitter instance
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| system.idlejitter | min, max, average | microseconds lost/s |
## Alerts
There are no alerts configured by default for this integration.
## Setup
### Prerequisites
No action required.
### Configuration
#### File
The configuration file name for this integration is `netdata.conf`.
The file format is a modified INI syntax. The general structure is:
```toml
[section1]
option 1 = some value
option 2 = some other value
[section2]
option 3 = some third value
```
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config netdata.conf
```
#### Options
This integration only supports a single configuration option, and most users will not need to change it.
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| loop time in ms | Specifies the target time for the data collection thread to sleep, measured in milliseconds. | | False |
#### Examples
There are no configuration examples.

View File

@ -1,80 +0,0 @@
# Monitor I/O latency using ioping.plugin
The ioping plugin supports monitoring I/O latency for any number of directories/files/devices, by pinging them with `ioping`.
A recent version of `ioping` is required (one that supports option `-N`).
The supplied plugin can install it, by running:
```sh
/usr/libexec/netdata/plugins.d/ioping.plugin install
```
The `-e` option can be supplied to indicate where the Netdata environment file is installed. The default path is `/etc/netdata/.environment`.
The above will download, build and install the right version as `/usr/libexec/netdata/plugins.d/ioping`.
Then you need to edit `/etc/netdata/ioping.conf` (to edit it on your system run
`/etc/netdata/edit-config ioping.conf`) like this:
```sh
# uncomment the following line - it should already be there
ioping="/usr/libexec/netdata/plugins.d/ioping"
# set here the directory/file/device, you need to ping
destination="destination"
# override the chart update frequency - the default is inherited from Netdata
update_every="1s"
# the request size in bytes to ping the destination
request_size="4k"
# other ioping options - these are the defaults
ioping_opts="-T 1000000 -R"
```
## Alerts
Netdata will automatically attach a few alerts for each host.
Check the [latest versions of the ioping alerts](https://raw.githubusercontent.com/netdata/netdata/master/health/health.d/ioping.conf)
## Multiple ioping Plugins With Different Settings
You may need to run multiple ioping plugins with different settings or different end points.
For example, you may need to ping one destination once per 10 seconds, and another once per second.
Netdata allows you to add as many `ioping` plugins as you like.
Follow this procedure:
**1. Create New ioping Configuration File**
```sh
# Step Into Configuration Directory
cd /etc/netdata
# Copy Original ioping Configuration File To New Configuration File
cp ioping.conf ioping2.conf
```
Edit `ioping2.conf` and set the settings and the destination you need for the second instance.
**2. Soft Link Original ioping Plugin to New Plugin File**
```sh
# Become root (If This Step Is Performed As A Non-Root User)
sudo su
# Step Into The Plugins Directory
cd /usr/libexec/netdata/plugins.d
# Link ioping.plugin to ioping2.plugin
ln -s ioping.plugin ioping2.plugin
```
That's it. Netdata will detect the new plugin and start it.
You can name the new plugin any name you like.
Just make sure the plugin and the configuration file have the same name.

View File

@ -0,0 +1 @@
integrations/ioping.md

View File

@ -0,0 +1,128 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/ioping.plugin/README.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/ioping.plugin/metadata.yaml"
sidebar_label: "IOPing"
learn_status: "Published"
learn_rel_path: "Data Collection/Synthetic Checks"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# IOPing
Plugin: ioping.plugin
Module: ioping.plugin
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Monitor IOPing metrics for efficient disk I/O latency tracking. Keep track of read/write speeds, latency, and error rates for optimized disk operations.
The plugin uses the `ioping` command.
This collector is supported on all platforms.
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
### Default Behavior
#### Auto-Detection
This integration doesn't support auto-detection.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
The default configuration for this integration is not expected to impose a significant performance impact on the system.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per disk
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| ioping.latency | latency | microseconds |
## Alerts
The following alerts are available:
| Alert name | On metric | Description |
|:------------|:----------|:------------|
| [ ioping_disk_latency ](https://github.com/netdata/netdata/blob/master/health/health.d/ioping.conf) | ioping.latency | average I/O latency over the last 10 seconds |
## Setup
### Prerequisites
#### Install ioping
You can install the command by passing the argument `install` to the plugin (`/usr/libexec/netdata/plugins.d/ioping.plugin install`).
### Configuration
#### File
The configuration file name for this integration is `ioping.conf`.
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config ioping.conf
```
#### Options
<details><summary>Config options</summary>
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| update_every | Data collection frequency. | | False |
| destination | The directory/file/device to ioping. | | True |
| request_size | The request size in bytes to ioping the destination (symbolic modifiers are supported). | | False |
| ioping_opts | Options passed to the `ioping` command. | | False |
</details>
#### Examples
##### Basic Configuration
This example has the minimum configuration necessary to have the plugin running.
<details><summary>Config</summary>
```yaml
destination="/dev/sda"
```
</details>
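##### Tuned Configuration
A sketch that also sets the remaining documented options; the values are illustrative, not recommendations.
<details><summary>Config</summary>
```yaml
update_every=5
destination="/dev/sda"
request_size="4k"
ioping_opts="-D"
```
</details>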

@ -1,16 +0,0 @@
<!--
title: "macos.plugin"
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/macos.plugin/README.md"
sidebar_label: "macos.plugin"
learn_status: "Published"
learn_topic_type: "References"
learn_rel_path: "Integrations/Monitor/System metrics"
-->
# macos.plugin
Collects resource usage and performance data on macOS systems
By default, Netdata enables monitoring of disk, memory, and network metrics only when they are not zero; if they are constantly zero, they are ignored. Metrics that start having values after Netdata is started will be detected, and charts will be added to the dashboard automatically (though a dashboard refresh is needed for them to appear). Use `yes` instead of `auto` in the relevant plugin configuration sections to enable these charts permanently. You can also set the `enable zero metrics` option to `yes` in the `[global]` section, which enables charts with zero metrics for all internal Netdata plugins.
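A minimal sketch of that last option in `netdata.conf`:
```
[global]
  enable zero metrics = yes
```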

@ -0,0 +1 @@
integrations/macos.md

@ -0,0 +1,281 @@
<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/macos.plugin/README.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/collectors/macos.plugin/metadata.yaml"
sidebar_label: "macOS"
learn_status: "Published"
learn_rel_path: "Data Collection/macOS Systems"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
# macOS
Plugin: macos.plugin
Module: mach_smi
<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
## Overview
Monitor macOS metrics for efficient operating system performance.
The plugin uses three different methods to collect data:
- The function `sysctlbyname` is called to collect network, swap, loadavg, and boot time data.
- The function `host_statistic` is called to collect CPU and virtual memory data.
- The function `IOServiceGetMatchingServices` is called to collect storage information.
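As a rough illustration of the first method, some of the same kernel data points can be queried from a shell with the `sysctl` tool (the key names below are standard macOS sysctl keys, shown as examples):
```sh
sysctl vm.loadavg      # load averages (load1, load5, load15)
sysctl vm.swapusage    # swap total, used, and free
sysctl kern.boottime   # system boot time
```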
This collector is only supported on the following platforms:
- macOS
This collector only supports collecting metrics from a single instance of this integration.
### Default Behavior
#### Auto-Detection
This integration doesn't support auto-detection.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
The default configuration for this integration is not expected to impose a significant performance impact on the system.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per macOS instance
These metrics refer to hardware and network monitoring.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| system.cpu | user, nice, system, idle | percentage |
| system.ram | active, wired, throttled, compressor, inactive, purgeable, speculative, free | MiB |
| mem.swapio | io, out | KiB/s |
| mem.pgfaults | memory, cow, pagein, pageout, compress, decompress, zero_fill, reactivate, purge | faults/s |
| system.load | load1, load5, load15 | load |
| mem.swap | free, used | MiB |
| system.ipv4 | received, sent | kilobits/s |
| ipv4.tcppackets | received, sent | packets/s |
| ipv4.tcperrors | InErrs, InCsumErrors, RetransSegs | packets/s |
| ipv4.tcphandshake | EstabResets, ActiveOpens, PassiveOpens, AttemptFails | events/s |
| ipv4.tcpconnaborts | baddata, userclosed, nomemory, timeout | connections/s |
| ipv4.tcpofo | inqueue | packets/s |
| ipv4.tcpsyncookies | received, sent, failed | packets/s |
| ipv4.ecnpkts | CEP, NoECTP | packets/s |
| ipv4.udppackets | received, sent | packets/s |
| ipv4.udperrors | RcvbufErrors, InErrors, NoPorts, InCsumErrors, IgnoredMulti | events/s |
| ipv4.icmp | received, sent | packets/s |
| ipv4.icmp_errors | InErrors, OutErrors, InCsumErrors | packets/s |
| ipv4.icmpmsg | InEchoReps, OutEchoReps, InEchos, OutEchos | packets/s |
| ipv4.packets | received, sent, forwarded, delivered | packets/s |
| ipv4.fragsout | ok, failed, created | packets/s |
| ipv4.fragsin | ok, failed, all | packets/s |
| ipv4.errors | InDiscards, OutDiscards, InHdrErrors, OutNoRoutes, InAddrErrors, InUnknownProtos | packets/s |
| ipv6.packets | received, sent, forwarded, delivers | packets/s |
| ipv6.fragsout | ok, failed, all | packets/s |
| ipv6.fragsin | ok, failed, timeout, all | packets/s |
| ipv6.errors | InDiscards, OutDiscards, InHdrErrors, InAddrErrors, InTruncatedPkts, InNoRoutes, OutNoRoutes | packets/s |
| ipv6.icmp | received, sent | messages/s |
| ipv6.icmpredir | received, sent | redirects/s |
| ipv6.icmperrors | InErrors, OutErrors, InCsumErrors, InDestUnreachs, InPktTooBigs, InTimeExcds, InParmProblems, OutDestUnreachs, OutTimeExcds, OutParmProblems | errors/s |
| ipv6.icmpechos | InEchos, OutEchos, InEchoReplies, OutEchoReplies | messages/s |
| ipv6.icmprouter | InSolicits, OutSolicits, InAdvertisements, OutAdvertisements | messages/s |
| ipv6.icmpneighbor | InSolicits, OutSolicits, InAdvertisements, OutAdvertisements | messages/s |
| ipv6.icmptypes | InType1, InType128, InType129, InType136, OutType1, OutType128, OutType129, OutType133, OutType135, OutType143 | messages/s |
| system.uptime | uptime | seconds |
| system.io | in, out | KiB/s |
### Per disk
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| disk.io | read, writes | KiB/s |
| disk.ops | read, writes | operations/s |
| disk.util | utilization | % of time working |
| disk.iotime | reads, writes | milliseconds/s |
| disk.await | reads, writes | milliseconds/operation |
| disk.avgsz | reads, writes | KiB/operation |
| disk.svctm | svctm | milliseconds/operation |
### Per mount point
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| disk.space | avail, used, reserved_for_root | GiB |
| disk.inodes | avail, used, reserved_for_root | inodes |
### Per network device
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| net.net | received, sent | kilobits/s |
| net.packets | received, sent, multicast_received, multicast_sent | packets/s |
| net.errors | inbound, outbound | errors/s |
| net.drops | inbound | drops/s |
| net.events | frames, collisions, carrier | events/s |
## Alerts
The following alerts are available:
| Alert name | On metric | Description |
|:------------|:----------|:------------|
| [ interface_speed ](https://github.com/netdata/netdata/blob/master/health/health.d/net.conf) | net.net | network interface ${label:device} current speed |
## Setup
### Prerequisites
No action required.
### Configuration
#### File
The configuration file name for this integration is `netdata.conf`.
The file format is a modified INI syntax. The general structure is:
```toml
[section1]
option 1 = some value
option 2 = some other value
[section2]
option 3 = some third value
```
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config netdata.conf
```
#### Options
There are three sections in the file which you can configure:
- `[plugin:macos:sysctl]` - Enable or disable monitoring for network, swap, loadavg, and boot time.
- `[plugin:macos:mach_smi]` - Enable or disable monitoring for CPU and Virtual memory.
- `[plugin:macos:iokit]` - Enable or disable monitoring for storage devices.
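For instance, a minimal sketch that disables a couple of network metrics in the `sysctl` section (option placement here follows the section descriptions above):
```yaml
[plugin:macos:sysctl]
  bandwidth = no
  ipv4 TCP packets = no
```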
<details><summary>Config options</summary>
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| enable load average | Enable or disable monitoring of load average metrics (load1, load5, load15). | | False |
| system swap | Enable or disable monitoring of system swap metrics (free, used). | | False |
| bandwidth | Enable or disable monitoring of network bandwidth metrics (received, sent). | | False |
| ipv4 TCP packets | Enable or disable monitoring of IPv4 TCP total packets metrics (received, sent). | | False |
| ipv4 TCP errors | Enable or disable monitoring of IPv4 TCP packets metrics (Input Errors, Checksum, Retransmission segments). | | False |
| ipv4 TCP handshake issues | Enable or disable monitoring of IPv4 TCP handshake metrics (Established Resets, Active Opens, Passive Opens, Attempt Fails). | | False |
| ECN packets | Enable or disable monitoring of ECN statistics metrics (InCEPkts, InNoECTPkts). | | False |
| TCP SYN cookies | Enable or disable monitoring of TCP SYN cookies metrics (received, sent, failed). | | False |
| TCP out-of-order queue | Enable or disable monitoring of TCP out-of-order queue metrics (inqueue). | | False |
| TCP connection aborts | Enable or disable monitoring of TCP connection aborts metrics (Bad Data, User closed, No memory, Timeout). | | False |
| ipv4 UDP packets | Enable or disable monitoring of ipv4 UDP packets metrics (sent, received). | | False |
| ipv4 UDP errors | Enable or disable monitoring of ipv4 UDP errors metrics (Received Buffer Errors, Input Errors, No Ports, In Checksum Errors, Ignored Multi). | | False |
| ipv4 icmp packets | Enable or disable monitoring of IPv4 ICMP packets metrics (sent, received, In Errors, Out Errors, In Checksum Errors). | | False |
| ipv4 icmp messages | Enable or disable monitoring of ipv4 ICMP messages metrics (I/O messages, I/O Errors, In Checksum). | | False |
| ipv4 packets | Enable or disable monitoring of ipv4 packets metrics (received, sent, forwarded, delivered). | | False |
| ipv4 fragments sent | Enable or disable monitoring of IPv4 fragments sent metrics (ok, fails, creates). | | False |
| ipv4 fragments assembly | Enable or disable monitoring of IPv4 fragments assembly metrics (ok, failed, all). | | False |
| ipv4 errors | Enable or disable monitoring of IPv4 errors metrics (I/O discard, I/O HDR errors, In Addr errors, In Unknown protos, OUT No Routes). | | False |
| ipv6 packets | Enable or disable monitoring of IPv6 packets metrics (received, sent, forwarded, delivered). | | False |
| ipv6 fragments sent | Enable or disable monitoring of IPv6 fragments sent metrics (ok, failed, all). | | False |
| ipv6 fragments assembly | Enable or disable monitoring of IPv6 fragments assembly metrics (ok, failed, timeout, all). | | False |
| ipv6 errors | Enable or disable monitoring of IPv6 errors metrics (I/O Discards, In Hdr Errors, In Addr Errors, In Truncated Packets, I/O No Routes). | | False |
| icmp | Enable or disable monitoring of ICMP metrics (sent, received). | | False |
| icmp redirects | Enable or disable monitoring of ICMP redirects metrics (received, sent). | | False |
| icmp errors | Enable or disable monitoring of ICMP metrics (I/O Errors, In Checksums, In Destination Unreachable, In Packet Too Big, In Time Exceeds, In Parm Problem, Out Dest Unreachable, Out Time Exceeds, Out Parm Problems). | | False |
| icmp echos | Enable or disable monitoring of ICMP echos metrics (I/O Echos, I/O Echo Reply). | | False |
| icmp router | Enable or disable monitoring of ICMP router metrics (I/O Solicits, I/O Advertisements). | | False |
| icmp neighbor | Enable or disable monitoring of ICMP neighbor metrics (I/O Solicits, I/O Advertisements). | | False |
| icmp types | Enable or disable monitoring of ICMP types metrics (I/O Type1, I/O Type128, I/O Type129, Out Type133, Out Type135, In Type136, Out Type143). | | False |
| space usage for all disks | Enable or disable monitoring of space usage for all disks metrics (available, used, reserved for root). | | False |
| inodes usage for all disks | Enable or disable monitoring of inodes usage for all disks metrics (available, used, reserved for root). | | False |
| bandwidth | Enable or disable monitoring of bandwidth metrics (received, sent). | | False |
| system uptime | Enable or disable monitoring of system uptime metrics (uptime). | | False |
| cpu utilization | Enable or disable monitoring of CPU utilization metrics (user, nice, system, idle). | | False |
| system ram | Enable or disable monitoring of system RAM metrics (Active, Wired, throttled, compressor, inactive, purgeable, speculative, free). | | False |
| swap i/o | Enable or disable monitoring of SWAP I/O metrics (I/O Swap). | | False |
| memory page faults | Enable or disable monitoring of memory page faults metrics (memory, cow, I/O page, compress, decompress, zero fill, reactivate, purge). | | False |
| disk i/o | Enable or disable monitoring of disk I/O metrics (In, Out). | | False |
</details>
#### Examples
##### Disable swap monitoring.
A basic example that disables swap monitoring.
<details><summary>Config</summary>
```yaml
[plugin:macos:sysctl]
system swap = no
[plugin:macos:mach_smi]
swap i/o = no
```
</details>
##### Disable the complete Machine SMI section.
A basic example that disables the entire `mach_smi` section.
<details><summary>Config</summary>
```yaml
[plugin:macos:mach_smi]
cpu utilization = no
system ram = no
swap i/o = no
memory page faults = no
disk i/o = no
```
</details>

@ -1,63 +0,0 @@
<!--
title: "Monitor Netfilter statistics (nfacct.plugin)"
custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/nfacct.plugin/README.md"
sidebar_label: "Netfilter statistics (nfacct.plugin)"
learn_status: "Published"
learn_topic_type: "References"
learn_rel_path: "Integrations/Monitor/Networking"
-->
# Monitor Netfilter statistics (nfacct.plugin)
`nfacct.plugin` collects Netfilter statistics.
## Prerequisites
If you are using [our official native DEB/RPM packages](https://github.com/netdata/netdata/blob/master/packaging/installer/methods/packages.md), install the
`netdata-plugin-nfacct` package using your system package manager.
If you built Netdata locally:
1. install `libmnl-dev` and `libnetfilter-acct-dev` using the package manager of your system.
2. re-install Netdata from source. The installer will detect that the required libraries are now available and will also build `nfacct.plugin`.
Keep in mind that NFACCT requires root access, so the plugin is setuid to root.
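For example, on a DEB-based system (the package manager commands are assumptions for your distribution; the package names are as given above):
```
# Native package install
sudo apt-get install netdata-plugin-nfacct

# Or, before a from-source build
sudo apt-get install libmnl-dev libnetfilter-acct-dev
```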
## Charts
The plugin provides Netfilter connection tracker statistics and nfacct packet and bandwidth accounting:
Connection tracker:
1. Connections.
2. Changes.
3. Expectations.
4. Errors.
5. Searches.
Netfilter accounting:
1. Packets.
2. Bandwidth.
## Configuration
If you need to disable NFACCT for Netdata, edit `/etc/netdata/netdata.conf` and set:
```
[plugins]
nfacct = no
```
## Debugging
You can run the plugin by hand:
```
sudo /usr/libexec/netdata/plugins.d/nfacct.plugin 1 debug
```
You will get verbose output on what the plugin does.

@ -0,0 +1 @@
integrations/netfilter.md
