Documentation changes, new files and restructuring the hierarchy (#17014)

* docs additions

* docs from writerside, not to be published in this state, links need work and Learn map needs to include them

* rename some files to reduce repetition on filenames

* use new packaging documentation and replace links to old

* change learn-rel-path to new category names

* replace configuration file with new one, add conf directory section to it, and replace links to point to that

* linkfix

* run integrations pipeline to get new links

* catoverpage

* fix writerside style links

* addition in on-prem mention

* comment out mermaid problematic line

* change path of alerting integrations docs for Learn

* fix

* fixes

* fix diagrams
This commit is contained in:
Fotis Voutsas 2024-02-20 09:08:46 +02:00 committed by GitHub
parent 3da2004b82
commit f27f4f714a
281 changed files with 3171 additions and 1179 deletions

View File

@ -0,0 +1,3 @@
# Developer and Contributor Corner
In this section of our documentation you will find more advanced information, suited for developers and contributors alike.

View File

@ -1,7 +1,7 @@
# Installation
In this category you can find instructions on all the possible ways you can install Netdata on the
[supported platforms](https://github.com/netdata/netdata/blob/master/packaging/PLATFORM_SUPPORT.md).
[supported platforms](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/versions-and-platforms.md).
If this is your first time using Netdata, we recommend that you first start with the
[quick installation guide](https://github.com/netdata/netdata/blob/master/packaging/installer/README.md) and then

View File

@ -305,7 +305,7 @@ Don't include full paths, beginning from the system's root (`/`), as these might
| | |
|-----------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Not recommended | Use `edit-config` to edit Netdata's configuration: `sudo /etc/netdata/edit-config netdata.conf`. |
| **Recommended** | Use `edit-config` to edit Netdata's configuration by first navigating to your [Netdata config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory), which is typically at `/etc/netdata`, then running `sudo edit-config netdata.conf`. |
| **Recommended** | Use `edit-config` to edit Netdata's configuration by first navigating to your [Netdata config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration.md#the-netdata-config-directory), which is typically at `/etc/netdata`, then running `sudo edit-config netdata.conf`. |
### `sudo`

View File

@ -0,0 +1,25 @@
# Deployment Guides
Netdata can be used to monitor all kinds of infrastructure, from stand-alone tiny IoT devices to complex hybrid setups combining on-premise and cloud infrastructure, mixing bare-metal servers, virtual machines and containers.
There are three components that structure your Netdata ecosystem:
1. **Netdata Agents**
To monitor the physical or virtual nodes of your infrastructure, including all applications and containers running on them.
Netdata Agents are Open-Source, licensed under GPL v3+.
2. **Netdata Parents**
To create [observability centralization points](https://github.com/netdata/netdata/blob/master/docs/observability-centralization-points/README.md) within your infrastructure, to offload Netdata Agent functions from your production systems, and to provide high availability of your data, increased data retention, and isolation of your nodes.
Netdata Parents are implemented using the Netdata Agent software. Any Netdata Agent can be an Agent for a node and a Parent for other Agents, at the same time.
It is recommended to set up multiple Netdata Parents. They will all seamlessly be integrated by Netdata Cloud into one monitoring solution.
3. **Netdata Cloud**
Our SaaS, combining all your infrastructure, all your Netdata Agents and Parents, into one uniform, distributed, scalable monitoring database, offering advanced data slicing and dicing capabilities, custom dashboards, advanced troubleshooting tools, user management, centralized management of alerts, and more.
The Netdata Agent is a highly modular piece of software, providing data collection via numerous plugins, an in-house crafted time-series database, a query engine, health monitoring and alerts, machine learning and anomaly detection, and metrics exporting to third-party systems.

View File

@ -0,0 +1,121 @@
# Deployment with Centralization Points
An observability centralization point can centralize both metrics and logs. The sending systems are called Children, while the receiving systems are called Parents.
When metrics and logs are centralized, the Children are never queried for metrics and logs. The Netdata Parents have all the data needed to satisfy queries.
- **Metrics** are centralized by Netdata, with a feature we call **Streaming**. The Parents listen for incoming connections and permit access only to Children that connect to it with the right API key. Children are configured to push their metrics to the Parents and they initiate the connections to do so.
- **Logs** are centralized with methodologies provided by `systemd-journald`. This involves installing `systemd-journal-remote` on both the Parent and the Children, and configuring the keys required for this communication.
| Feature | How it works |
|:---------------------------------------------:|:-------------------------------------------------------------------------------------------------------------:|
| Unified infrastructure dashboards for metrics | Yes, at Netdata Cloud |
| Unified infrastructure dashboards for logs | All logs are accessible via the same dashboard at Netdata Cloud, although they are unified per Netdata Parent |
| Centrally configured alerts | Yes, at Netdata Parents |
| Centrally dispatched alert notifications | Yes, at Netdata Cloud |
| Data are exclusively on-prem | Yes, Netdata Cloud queries Netdata Agents to satisfy dashboard queries. |
A configuration with two observability centralization points looks like this:
```mermaid
flowchart LR
WEB[["One unified
dashboard
for all nodes"]]
NC(["<b>Netdata Cloud</b>
decides which agents
need to be queried"])
SA1["Netdata at AWS
A1"]
SA2["Netdata at AWS
A2"]
SAN["Netdata at AWS
AN"]
PA["<b>Netdata Parent A</b>
at AWS
having all metrics & logs
for all Ax nodes"]
SB1["Netdata On-Prem
B1"]
SB2["Netdata On-Prem
B2"]
SBN["Netdata On-Prem
BN"]
PB["<b>Netdata Parent B</b>
On-Prem
having all metrics & logs
for all Bx nodes"]
WEB -->|query| NC -->|query| PA & PB
PA ---|stream| SA1 & SA2 & SAN
PB ---|stream| SB1 & SB2 & SBN
```
Netdata Cloud queries the Netdata Parents to provide aggregated dashboard views.
For alerts, the dispatch of notifications is shown in the following chart:
```mermaid
flowchart LR
NC(["<b>Netdata Cloud</b>
applies silencing
& user settings"])
SA1["Netdata at AWS
A1"]
SA2["Netdata at AWS
A2"]
SAN["Netdata at AWS
AN"]
PA["<b>Netdata Parent A</b>
at AWS
having all metrics & logs
for all Ax nodes"]
SB1["Netdata On-Prem
B1"]
SB2["Netdata On-Prem
B2"]
SBN["Netdata On-Prem
BN"]
PB["<b>Netdata Parent B</b>
On-Prem
having all metrics & logs
for all Bx nodes"]
EMAIL{{"<b>e-mail</b>
notifications"}}
MOBILEAPP{{"<b>Netdata Mobile App</b>
notifications"}}
SLACK{{"<b>Slack</b>
notifications"}}
OTHER{{"Other
notifications"}}
PA & PB -->|alert transitions| NC -->|notification| EMAIL & MOBILEAPP & SLACK & OTHER
SA1 & SA2 & SAN ---|stream| PA
SB1 & SB2 & SBN ---|stream| PB
```
### Configuration steps for deploying Netdata with Observability Centralization Points
For Metrics:
- Install Netdata agents on all systems and the Netdata Parents.
- Configure `stream.conf` at the Netdata Parents to enable streaming access with an API key.
- Configure `stream.conf` at the Netdata Children to enable streaming to the configured Netdata Parents.
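The metrics steps above can be sketched in `stream.conf` like this. This is a minimal sketch only; the API key is a placeholder UUID (generate your own, e.g. with `uuidgen`) and `PARENT_IP` is a hypothetical address:

```conf
# On the Netdata Parent (stream.conf) -- accept Children using this API key:
[11111111-2222-3333-4444-555555555555]
    enabled = yes

# On each Netdata Child (stream.conf) -- push metrics to the Parent:
[stream]
    enabled = yes
    destination = PARENT_IP:19999
    api key = 11111111-2222-3333-4444-555555555555
```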
For Logs:
- Install `systemd-journal-remote` on all systems and the Netdata Parents.
- Configure `systemd-journal-remote` at the Netdata Parents to enable logs reception.
- Configure `systemd-journal-upload` at the Netdata Children to enable transmission of their logs to the Netdata Parents.
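On the Child side, the logs steps boil down to pointing `systemd-journal-upload` at the Parent. A minimal sketch, assuming the default `systemd-journal-remote` port (19532) and a hypothetical `PARENT_IP`; adjust paths and security settings for your distribution:

```conf
# /etc/systemd/journal-upload.conf on each Netdata Child
[Upload]
URL=http://PARENT_IP:19532
```

On the Parent, reception is typically enabled with `systemctl enable --now systemd-journal-remote.socket` after installing `systemd-journal-remote`.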
Optionally:
- Disable ML, health checks and dashboard access at Netdata Children to save resources and avoid duplicate notifications.
When using Netdata Cloud:
- Optionally: disable dashboard access on all Netdata agents (including Netdata Parents).
- Optionally: disable alert notifications on all Netdata agents (including Netdata Parents).
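As a sketch, dashboard access can be disabled in `netdata.conf` like this (note this fully disables the agent's local web server, so only do it when all dashboards are served via Netdata Cloud):

```conf
[web]
    mode = none
```

Alert notifications sent directly by the agents can be silenced in `health_alarm_notify.conf` (for example, setting `SEND_EMAIL="NO"`), while alerts keep being evaluated and forwarded to Netdata Cloud.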

View File

@ -0,0 +1,139 @@
# Standalone Deployment
To give users the complete Netdata experience when they install it for the first time, a Netdata Agent with default configuration is a complete monitoring solution out of the box, with all of its features enabled and available.
So, each Netdata agent acts as a standalone monitoring system by default.
## Standalone agents, without Netdata Cloud
| Feature | How it works |
|:---------------------------------------------:|:----------------------------------------------------:|
| Unified infrastructure dashboards for metrics | No, each Netdata agent provides its own dashboard |
| Unified infrastructure dashboards for logs | No, each Netdata agent exposes its own logs |
| Centrally configured alerts | No, each Netdata agent has its own alert configuration |
| Centrally dispatched alert notifications | No, each Netdata agent sends notifications by itself |
| Data are exclusively on-prem | Yes |
When using Standalone Netdata agents, each of them offers an API and a dashboard, at its own unique URL, that looks like `http://agent-ip:19999`.
So, each of the Netdata agents has to be accessed individually and independently of the others:
```mermaid
flowchart LR
WEB[["Multiple
Independent
Dashboards"]]
S1["Standalone
Netdata
1"]
S2["Standalone
Netdata
2"]
SN["Standalone
Netdata
N"]
WEB -->|URL 1| S1
WEB -->|URL 2| S2
WEB -->|URL N| SN
```
The same is true for alert notifications. Each of the Netdata agents runs its own alerts and sends notifications by itself, according to its configuration:
```mermaid
flowchart LR
S1["Standalone
Netdata
1"]
S2["Standalone
Netdata
2"]
SN["Standalone
Netdata
N"]
EMAIL{{"<b>e-mail</b>
notifications"}}
SLACK{{"<b>Slack</b>
notifications"}}
OTHER{{"Other
notifications"}}
S1 & S2 & SN .-> SLACK
S1 & S2 & SN ---> EMAIL
S1 & S2 & SN ==> OTHER
```
### Configuration steps for standalone Netdata agents without Netdata Cloud
No special configuration needed.
- Install Netdata agents on all your systems, then access each of them via its own unique URL, that looks like `http://agent-ip:19999/`.
## Standalone agents, with Netdata Cloud
| Feature | How it works |
|:---------------------------------------------:|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| Unified infrastructure dashboards for metrics | Yes, via Netdata Cloud, all charts aggregate metrics from all servers. |
| Unified infrastructure dashboards for logs | All logs are accessible via the same dashboard at Netdata Cloud, although they are not unified (i.e. logs from different servers are not multiplexed into a single view) |
| Centrally configured alerts | No, each Netdata agent has its own alert configuration |
| Centrally dispatched alert notifications | Yes, via Netdata Cloud |
| Data are exclusively on-prem | Yes, Netdata Cloud queries Netdata Agents to satisfy dashboard queries. |
By [connecting all Netdata agents to Netdata Cloud](https://github.com/netdata/netdata/blob/master/src/claim/README.md), you can have a unified infrastructure view of all your nodes, with aggregated charts, without configuring [observability centralization points](https://github.com/netdata/netdata/blob/master/docs/observability-centralization-points/README.md).
```mermaid
flowchart LR
WEB[["One unified
dashboard
for all nodes"]]
NC(["<b>Netdata Cloud</b>
decides which agents
need to be queried"])
S1["Standalone
Netdata
1"]
S2["Standalone
Netdata
2"]
SN["Standalone
Netdata
N"]
WEB -->|queries| NC
NC -->|queries| S1 & S2 & SN
```
Similarly for alerts: Netdata Cloud receives all alert transitions from all agents, applies silencing rules and maintenance windows, and, based on each Netdata Cloud space and user settings, decides which notifications should be sent and how, then dispatches them:
```mermaid
flowchart LR
EMAIL{{"<b>e-mail</b>
notifications"}}
MOBILEAPP{{"<b>Netdata Mobile App</b>
notifications"}}
SLACK{{"<b>Slack</b>
notifications"}}
OTHER{{"Other
notifications"}}
NC(["<b>Netdata Cloud</b>
applies silencing
& user settings"])
S1["Standalone
Netdata
1"]
S2["Standalone
Netdata
2"]
SN["Standalone
Netdata
N"]
NC -->|notification| EMAIL & MOBILEAPP & SLACK & OTHER
S1 & S2 & SN -->|alert transition| NC
```
> Note that alerts are still triggered by Netdata agents. Netdata Cloud takes care of the notifications only.
### Configuration steps for standalone Netdata agents with Netdata Cloud
- Install Netdata agents using the commands given by Netdata Cloud, so that they will be automatically added to your Netdata Cloud space. Otherwise, install Netdata agents and then claim them via the command line or their dashboard.
- Optionally: disable their direct dashboard access to secure them.
- Optionally: disable their alert notifications to avoid receiving email notifications directly from them (email notifications are automatically enabled when a working MTA is found on the systems Netdata agents are installed on).

View File

@ -24,7 +24,7 @@ Once you understand the process of enabling a connector, you can translate that
## Enable the exporting engine
Use `edit-config` from your
[Netdata config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory)
[Netdata config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration.md#the-netdata-config-directory)
to open `exporting.conf`:
```bash

View File

@ -24,7 +24,7 @@ Read on to learn all the steps and enable unsupervised anomaly detection on your
First make sure Netdata is using Python 3 when it runs Python-based data collectors.
Next, open `netdata.conf` using [`edit-config`](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#use-edit-config-to-edit-configuration-files)
from within the [Netdata config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory). Scroll down to the
from within the [Netdata config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration.md#the-netdata-config-directory). Scroll down to the
`[plugin:python.d]` section to pass in the `-ppython3` command option.
```conf

View File

@ -114,7 +114,7 @@ itself while initiating a streaming connection. Copy that into a separate text f
> Find out how to [install `uuidgen`](https://command-not-found.com/uuidgen) on your node if you don't already have it.
Next, open `stream.conf` using [`edit-config`](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#use-edit-config-to-edit-configuration-files)
from within the [Netdata config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
from within the [Netdata config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration.md#the-netdata-config-directory).
```bash
cd /etc/netdata

View File

@ -0,0 +1,84 @@
# Netdata Agent
The Netdata Agent is the main building block in a Netdata ecosystem. It is installed on all monitored systems to monitor system components, containers and applications.
The Netdata Agent is an **observability pipeline in a box** that can either operate standalone, or blend into a bigger pipeline made by more Netdata Agents (Children and Parents).
## Distributed Observability Pipeline
The Netdata observability pipeline is shown in the following graph.
The pipeline is extended by creating Metrics Observability Centralization Points that are all linked together (`from a remote Netdata`, `to a remote Netdata`), so that all installed Netdata Agents become one vast, integrated observability pipeline.
```mermaid
stateDiagram-v2
classDef userFeature fill:#f00,color:white,font-weight:bold,stroke-width:2px,stroke:yellow
classDef usedByNC fill:#090,color:white,font-weight:bold,stroke-width:2px,stroke:yellow
Local --> Discover
Local: Local Netdata
[*] --> Detect: from a remote Netdata
Others: 3rd party time-series DBs
Detect: Detect Anomalies
Dashboard:::userFeature
Dashboard: Netdata Dashboards
3rdDashboard:::userFeature
3rdDashboard: 3rd party Dashboards
Notifications:::userFeature
Notifications: Alert Notifications
Alerts: Alert Transitions
Discover --> Collect
Collect --> Detect
Store: Store
Store: Time-Series Database
Detect --> Store
Store --> Learn
Store --> Check
Store --> Query
Store --> Score
Store --> Stream
Store --> Export
Query --> Visualize
Score --> Visualize
Check --> Alerts
Learn --> Detect: trained ML models
Alerts --> Notifications
Stream --> [*]: to a remote Netdata
Export --> Others
Others --> 3rdDashboard
Visualize --> Dashboard
Score:::usedByNC
Query:::usedByNC
Alerts:::usedByNC
```
1. **Discover**: auto-detect metric sources on localhost, auto-discover metric sources on Kubernetes.
2. **Collect**: query data sources to collect metric samples, using the optimal protocol for each data source. 800+ integrations supported, including dozens of native application protocols, OpenMetrics and StatsD.
3. **Detect Anomalies**: use the trained machine learning models for each metric, to detect in real-time if each sample collected is an outlier (an anomaly), or not.
4. **Store**: keep collected samples and their anomaly status, in the time-series database (database mode `dbengine`) or a ring buffer (database modes `ram` and `alloc`).
5. **Learn**: train multiple machine learning models for each metric collected, learning behaviors and patterns for detecting anomalies.
6. **Check**: a health engine, triggering alerts and sending notifications. Netdata comes with hundreds of alert configurations that are automatically attached to metrics when they get collected, detecting errors, common misconfigurations, and performance issues.
7. **Query**: a query engine for querying time-series data.
8. **Score**: a scoring engine for comparing and correlating metrics.
9. **Stream**: a mechanism to connect Netdata agents and build Metrics Centralization Points (Netdata Parents).
10. **Visualize**: Netdata's fully automated dashboards for all metrics.
11. **Export**: export metric samples to 3rd party time-series databases, enabling the use of 3rd party tools for visualization, like Grafana.
## Comparison to other observability solutions
1. **One moving part**: Other monitoring solutions require maintaining metrics exporters, time-series databases, and visualization engines. Netdata has everything integrated into one package, even when [Metrics Centralization Points](https://github.com/netdata/netdata/blob/master/docs/observability-centralization-points/metrics-centralization-points/README.md) are required, making deployment and maintenance a lot simpler.
2. **Automation**: Netdata is designed to automate most of the process of setting up and running an observability solution. It is designed to instantly provide comprehensive dashboards and fully automated alerts, with zero configuration.
3. **High Fidelity Monitoring**: Netdata was born from our need to kill the console for observability. So, it provides metrics and logs in the same granularity and fidelity console tools do, but also comes with tools that go beyond metrics and logs, to provide a holistic view of the monitored infrastructure (e.g. check [Top Monitoring](https://github.com/netdata/netdata/blob/master/docs/cloud/netdata-functions.md)).
4. **Minimal impact on monitored systems and applications**: Netdata has been designed to have a minimal impact on the monitored systems and their applications. There are [independent studies](https://www.ivanomalavolta.com/files/papers/ICSOC_2023.pdf) reporting that Netdata excels in CPU usage, RAM utilization, Execution Time and the impact Netdata has on monitored applications and containers.
5. **Energy efficiency**: [The University of Amsterdam conducted a study on the energy efficiency of monitoring tools](https://twitter.com/IMalavolta/status/1734208439096676680), testing Netdata, Prometheus and ELK, among other tools. The study concluded that **Netdata is the most energy efficient monitoring tool**.
## Dashboard Versions
The Netdata agents (Standalone, Children and Parents) **share the dashboard** of Netdata Cloud. However, when the user is logged in and the Netdata agent is connected to Netdata Cloud, the following are enabled (which are otherwise disabled):
1. **Access to Sensitive Data**: Some data, like systemd-journal logs and several [Top Monitoring](https://github.com/netdata/netdata/blob/master/docs/cloud/netdata-functions.md) features expose sensitive data, like IPs, ports, process command lines and more. To access all these when the dashboard is served directly from a Netdata agent, Netdata Cloud is required to verify that the user accessing the dashboard has the required permissions.
2. **Dynamic Configuration**: Netdata agents are configured via configuration files, manually or through some provisioning system. The latest Netdata includes a feature that allows users to change some of the configuration (collectors, alerts) via the dashboard. This feature is only available to users on a paid Netdata Cloud plan.

View File

@ -0,0 +1,43 @@
# Netdata Agent Configuration
The main Netdata agent configuration is `netdata.conf`.
## The Netdata config directory
On most Linux systems, by using our [recommended one-line installation](https://github.com/netdata/netdata/blob/master/packaging/installer/README.md#install-on-linux-with-one-line-installer), the **Netdata config
directory** will be `/etc/netdata/`. The config directory contains several configuration files with the `.conf` extension, a
few directories, and a shell script named `edit-config`.
> Some operating systems will use `/opt/netdata/etc/netdata/` as the config directory. If you're not sure where yours
> is, navigate to `http://NODE:19999/netdata.conf` in your browser, replacing `NODE` with the IP address or hostname of
> your node, and find the `# config directory = ` setting. The value listed is the config directory for your system.
All of Netdata's documentation assumes that your config directory is at `/etc/netdata`, and that you're running any scripts from inside that directory.
## Edit `netdata.conf`
To edit `netdata.conf`, run this on your terminal:
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config netdata.conf
```
Your editor will open.
## Download `netdata.conf`
The running version of `netdata.conf` can be downloaded from a running Netdata agent, at this URL:
```
http://agent-ip:19999/netdata.conf
```
You can save and use this version, using these commands:
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
curl -ksSLo /tmp/netdata.conf.new http://localhost:19999/netdata.conf && sudo mv -i /tmp/netdata.conf.new netdata.conf
```

View File

@ -0,0 +1,87 @@
# Sizing Netdata Agents
Netdata automatically adjusts its resource utilization based on the workload offered to it.
This is a map of how Netdata **features impact resource utilization**:
| Feature | CPU | RAM | Disk I/O | Disk Space | Retention | Bandwidth |
|-----------------------------:|:---:|:---:|:--------:|:----------:|:---------:|:---------:|
| Metrics collected | X | X | X | X | X | - |
| Samples collection frequency | X | - | X | X | X | - |
| Database mode and tiers | - | X | X | X | X | - |
| Machine learning | X | X | - | - | - | - |
| Streaming | X | X | - | - | - | X |
1. **Metrics collected**: The number of metrics collected affects almost every aspect of resource utilization.
When you need to lower the resources used by Netdata, this is an obvious first step.
2. **Samples collection frequency**: By default Netdata collects metrics with 1-second granularity, unless the metrics collected are not updated that frequently, in which case Netdata collects them at the frequency they are updated. This is controlled per data collection job.
Lowering the data collection frequency from every second to every 2 seconds will cut Netdata's CPU utilization in half; CPU utilization is proportional to the data collection frequency.
3. **Database Mode and Tiers**: By default Netdata stores metrics in 3 database tiers: high-resolution, mid-resolution, low-resolution. All database tiers are updated in parallel during data collection, and depending on the query duration Netdata may consult one or more tiers to optimize the resources required to satisfy it.
The number of database tiers affects the memory requirements of Netdata. Going from 3 tiers to 1 tier will make Netdata use half the memory. Of course, metrics retention will also be limited to 1 tier.
4. **Machine Learning**: By default Netdata trains multiple machine learning models for every metric collected, to learn its behavior and detect anomalies. Machine learning is a CPU intensive process and affects the overall CPU utilization of Netdata.
5. **Streaming Compression**: When using Netdata in Parent-Child configurations to create Metrics Centralization Points, the compression algorithm used greatly affects CPU utilization and bandwidth consumption.
Netdata supports multiple streaming compression algorithms, allowing the optimization of either CPU utilization or network bandwidth. The default algorithm, `zstd`, provides the best balance between them.
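The proportionality between collection frequency and CPU utilization described in point 2 above can be sketched with a back-of-the-envelope calculation (an illustration only, not a measurement):

```python
def relative_cpu(collection_interval_seconds, baseline_interval_seconds=1):
    """Relative CPU use compared to per-second collection, assuming
    CPU utilization is proportional to data collection frequency."""
    return baseline_interval_seconds / collection_interval_seconds

print(relative_cpu(2))  # 0.5 -> every-2-seconds collection uses half the CPU
print(relative_cpu(5))  # 0.2 -> every-5-seconds collection uses a fifth
```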
## Minimizing the resources used by Netdata Agents
To minimize the resources used by Netdata Agents, we suggest configuring Netdata Parents to centralize metric samples, and disabling most of the features on Netdata Children. This provides minimal resource utilization at the edge, while all the features of Netdata remain available at the Netdata Parents.
The following guides provide instructions on how to do this.
## Maximizing the scale of Netdata Parents
Netdata Parents automatically adjust their resource utilization based on the workload they receive. The only option for improving query performance is to dedicate more RAM to them, increasing the efficiency of their caches.
Check [RAM Requirements](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/sizing-netdata-agents/ram-requirements.md) for more information.
## Innovations Netdata has for optimal performance and scalability
The following are some of the innovations of the open-source Netdata agent that contribute to its excellent performance and scalability.
1. **Minimal disk I/O**
When Netdata saves data on disk, it stores it in its final place, eliminating the need to reorganize the data later.
Netdata organizes its data structures so that samples are committed to disk as evenly as possible across time, without affecting its memory requirements.
Furthermore, Netdata Agents use direct-I/O for saving and loading metric samples. This prevents Netdata from polluting system caches with metric data. Netdata maintains its own caches for this data.
All these features make Netdata a nice partner and a polite citizen for production applications running on the same systems Netdata runs on.
2. **4 bytes per sample uncompressed**
To achieve optimal memory and disk footprint, Netdata uses a custom 32-bit floating point number we have developed. This floating point number is used to store the samples collected, together with their anomaly bit. The database of Netdata is fixed-step, so it has predefined slots for every sample, allowing Netdata to store timestamps once every several hundred samples, minimizing both its memory requirements and its disk footprint.
3. **Query priorities**
Alerting, Machine Learning, Streaming and Replication, rely on metric queries. When multiple queries are running in parallel, Netdata assigns priorities to all of them, favoring interactive queries over background tasks. This means that queries do not compete equally for resources. Machine learning or replication may slow down when interactive queries are running and the system starves for resources.
4. **A pointer per label**
Apart from metric samples, metric labels and their cardinality are the biggest memory consumers, especially in highly ephemeral environments, like Kubernetes. Netdata uses a single pointer for any label key-value pair that is reused. Keys and values are also deduplicated, providing the best possible memory footprint for metric labels.
5. **Streaming Protocol**
The streaming protocol of Netdata minimizes the resources consumed on production systems by delegating features to other Netdata agents (Parents), without compromising monitoring fidelity or responsiveness, enabling the creation of a highly distributed observability platform.
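The 4-bytes-per-sample figure above translates directly into storage math. A quick sketch of the uncompressed footprint of one metric collected every second (before the additional on-disk compression the database engine applies):

```python
BYTES_PER_SAMPLE = 4           # custom 32-bit float, anomaly bit included
SECONDS_PER_DAY = 24 * 60 * 60

# Uncompressed storage for one per-second metric, per day:
per_metric_per_day = BYTES_PER_SAMPLE * SECONDS_PER_DAY
print(per_metric_per_day)         # 345600 bytes
print(per_metric_per_day / 1024)  # 337.5 KiB per metric per day
```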
## Netdata vs Prometheus
Netdata outperforms Prometheus in every aspect: 35% less CPU utilization, 49% less RAM usage, 12% less network bandwidth, 98% less disk I/O, and 75% less disk footprint for high resolution data, while providing more than a year of retention.
Read the [full comparison here](https://blog.netdata.cloud/netdata-vs-prometheus-performance-analysis/).
## Energy Efficiency
The University of Amsterdam conducted a study on the impact monitoring systems have on Docker-based systems.
The study found that Netdata excels in CPU utilization, RAM usage and execution time, and concluded that **Netdata is the most energy efficient tool**.
Read the [full study here](https://www.ivanomalavolta.com/files/papers/ICSOC_2023.pdf).

View File

@ -0,0 +1,47 @@
# Bandwidth Requirements
## On Production Systems, Standalone Netdata
Standalone Netdata may use network bandwidth under the following conditions:
1. You configured data collection jobs that fetch data from remote systems. No such jobs are enabled by default.
2. You use the Netdata dashboard.
3. [Netdata Cloud communication](#netdata-cloud-communication) (see below).
## On Metrics Centralization Points, between Netdata Children & Parents
Netdata supports multiple compression algorithms for streaming communication. Netdata Children offer all their compression algorithms when connecting to a Netdata Parent, and the Netdata Parent decides which one to use based on algorithms availability and user configuration.
| Algorithm | Best for |
|:---------:|:-----------------------------------------------------------------------------------------------------------------------------------:|
| `zstd` | The best balance between CPU utilization and compression efficiency. This is the default. |
| `lz4` | The fastest of the algorithms. Use this when CPU utilization is more important than bandwidth. |
| `gzip` | The best compression efficiency, at the expense of CPU utilization. Use this when bandwidth is more important than CPU utilization. |
| `brotli` | The most CPU intensive algorithm, providing the best compression. |
The expected bandwidth consumption using `zstd` for 1 million samples per second is 84 Mbps, or 10.5 MB/s.
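That bandwidth figure implies a compressed size of about 10.5 bytes per sample, which can be verified with simple arithmetic:

```python
samples_per_second = 1_000_000
megabits_per_second = 84

bits_per_sample = megabits_per_second * 1_000_000 / samples_per_second
bytes_per_sample = bits_per_sample / 8
megabytes_per_second = megabits_per_second / 8

print(bits_per_sample)       # 84.0 bits per compressed sample
print(bytes_per_sample)      # 10.5 bytes per compressed sample
print(megabytes_per_second)  # 10.5 MB/s
```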
The order in which compression algorithms are selected is configured in `stream.conf`, per `[API KEY]`, like this:
```
compression algorithms order = zstd lz4 brotli gzip
```
The first available algorithm on both the Netdata Child and the Netdata Parent, from left to right, is chosen.
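The selection rule can be illustrated with a small sketch (an illustration of the left-to-right rule described above, not Netdata's actual implementation):

```python
def negotiate_compression(parent_order, child_offered):
    """Pick the first algorithm in the Parent's configured order that
    the Child also offers; None means no common algorithm."""
    for algorithm in parent_order:
        if algorithm in child_offered:
            return algorithm
    return None

parent_order = ["zstd", "lz4", "brotli", "gzip"]
# A Child built without zstd support still negotiates successfully:
print(negotiate_compression(parent_order, {"lz4", "gzip"}))  # lz4
```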
Compression can also be disabled in `stream.conf` at either Netdata Children or Netdata Parents.
## Netdata Cloud Communication
When Netdata Agents connect to Netdata Cloud, they communicate metadata of the metrics being collected, but they do not stream the samples collected for each metric.
The information transferred to Netdata Cloud is:
1. Information and **metadata about the system itself**, such as its hostname, architecture, virtualization technologies used and, generally, the labels associated with the system.
2. Information about the **running data collection plugins, modules and jobs**.
3. Information about the **metrics available and their retention**.
4. Information about the **configured alerts and their transitions**.
This is not a constant stream of information. Netdata Agents update Netdata Cloud only about status changes in all the above (e.g. an alert being triggered, or a metric no longer being collected). So, there is an initial handshake and exchange of information when Netdata starts, and then there are only updates when required.
Of course, when you view Netdata Cloud dashboards that need to query the database a Netdata agent maintains, this query is forwarded to an agent that can satisfy it. This means that Netdata Cloud receives metric samples only when a user is accessing a dashboard and the samples transferred are usually aggregations to allow rendering the dashboards.

@@ -0,0 +1,65 @@
# CPU Requirements
Netdata's CPU consumption is affected by the following factors:
1. The number of metrics collected
2. The frequency metrics are collected
3. Machine Learning
4. Streaming compression (streaming of metrics to Netdata Parents)
5. Database Mode
## On Production Systems, Netdata Children
On production systems, where Netdata is running with default settings, monitoring the system it is installed on and its containers and applications, CPU utilization should usually be about 1% to 5% of a single CPU core.
This includes 3 database tiers, machine learning, per-second data collection, alerts, and streaming to a Netdata Parent.
## On Metrics Centralization Points, Netdata Parents
On Metrics Centralization Points, i.e. Netdata Parents running on modern server hardware, we **estimate CPU utilization per million samples collected per second** as follows:
| Feature | Depends On | Expected Utilization | Key Reasons |
|:-----------------:|:---------------------------------------------------:|:----------------------------------------------------------------:|:-------------------------------------------------------------------------:|
| Metrics Ingestion    | Number of samples received per second                | 2 CPU cores per million samples per second                        | Decompress and decode received messages, update database.                  |
| Metrics re-streaming | Number of samples resent per second                  | 2 CPU cores per million samples per second                        | Encode and compress messages towards the next Netdata Parent.              |
| Machine Learning     | Number of unique time-series concurrently collected  | 2 CPU cores per million unique metrics concurrently collected     | Train machine learning models, query existing models to detect anomalies.  |
We recommend keeping the total CPU utilization below 60% when a Netdata Parent is steadily ingesting metrics, training machine learning models and running health checks. This will leave enough CPU resources available for queries.
## I want to minimize CPU utilization. What should I do?
You can control Netdata's CPU utilization with these parameters:
1. **Data collection frequency**: Going from per-second metrics to every-2-seconds metrics will halve the CPU utilization of Netdata.
2. **Number of metrics collected**: By default, Netdata collects every metric available on the systems it runs on. Review the metrics collected and disable data collection plugins and modules that are not needed.
3. **Machine Learning**: Disable machine learning to save CPU cycles.
4. **Number of database tiers**: Netdata updates database tiers in parallel, during data collection. This affects both CPU utilization and memory requirements.
5. **Database Mode**: The default database mode is `dbengine`, which compresses and commits data to disk. If you have a Netdata Parent where metrics are aggregated and saved to disk and there is a reliable connection between the Netdata you want to optimize and its Parent, switch to database mode `ram` or `alloc`. This disables saving to disk, so your Netdata will also not use any disk I/O.
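As a sketch, the knobs above map to `netdata.conf` settings like the following (the option names reflect recent Netdata versions and the values are illustrative assumptions; verify them against your own `netdata.conf` before applying):

```
[db]
    # collect metrics every 2 seconds instead of every second
    update every = 2
    # keep only tier0, disabling the tier1 and tier2 aggregations
    storage tiers = 1
    # memory-only database, no disk I/O (for nodes streaming to a Parent)
    mode = ram

[ml]
    # disable machine learning to save CPU cycles
    enabled = no
```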
## I see increased CPU consumption when a busy Netdata Parent starts, why?
When a Netdata Parent starts and Netdata children get connected to it, there are several operations that temporarily affect CPU utilization, network bandwidth and disk I/O.
The general flow looks like this:
1. **Back-filling of higher tiers**: Usually this means calculating the aggregates of the last hour of `tier2` and of the last minute of `tier1`, ensuring that higher tiers reflect all the information `tier0` has. If Netdata was stopped abnormally (e.g. due to a system failure or crash), higher tiers may have to be back-filled for longer durations.
2. **Metadata synchronization**: The metadata of all metrics each Netdata Child maintains are negotiated between the Child and the Parent and are synchronized.
3. **Replication**: If the Parent is missing samples the Child has, these samples are transferred to the Parent before transferring new samples.
4. Once all these finish, the normal **streaming of new metric samples** starts.
5. At the same time, **machine learning** initializes, loads saved trained models and prepares anomaly detection.
6. After a few moments the **health engine starts checking metrics** for triggering alerts.
The above process is per metric. So, while one metric back-fills, another replicates and a third one streams.
At the same time:
- the compression algorithm learns the patterns of the data exchanged and optimizes its dictionaries for optimal compression and CPU utilization,
- the database engine adjusts the page size of each metric, so that samples are committed to disk as evenly as possible across time.
So, when looking for the "steady CPU consumption during ingestion" of a busy Netdata Parent, we recommend letting it stabilize for a few hours before checking.
Keep in mind that Netdata has been designed so that, even if the system lacks CPU resources during the initialization phase and the connection of hundreds of Netdata Children, the Netdata Parent will complete all the operations and eventually settle into a steady CPU consumption during ingestion, without affecting the quality of the metrics stored. So, it is OK if CPU consumption spikes to 100% during the initialization of a busy Netdata Parent.
Important: the above initialization process is not as intense when new nodes connect to a Netdata Parent for the first time (e.g. ephemeral nodes), since several of the steps involved are not required.
Especially for the cases where children disconnect and reconnect to the Parent due to network-related issues (i.e. both the Netdata Child and the Netdata Parent have not been restarted and less than 1 hour has passed since the last disconnection), the re-negotiation phase is minimal and metrics instantly enter the normal streaming phase.

@@ -0,0 +1,131 @@
# Disk Requirements & Retention
## Database Modes and Tiers
Netdata comes with 3 database modes:
1. `dbengine`: the default high-performance multi-tier database of Netdata. Metric samples are cached in memory and saved to disk in multiple tiers, with compression.
2. `ram`: metric samples are stored in ring buffers in memory, with increments of 1024 samples. Metric samples are not committed to disk. Kernel Same-page Merging (KSM) can be used to deduplicate Netdata's memory.
3. `alloc`: metric samples are stored in ring buffers in memory, with flexible increments. Metric samples are not committed to disk.
## `ram` and `alloc`
Modes `ram` and `alloc` can help when Netdata should not introduce any disk I/O at all. In both of these modes, metric samples exist only in memory, and only while they are collected.
When Netdata is configured to stream its metrics to a Metrics Observability Centralization Point (a Netdata Parent), metric samples are forwarded in real-time to that Netdata Parent. The ring buffers available in these modes are used to cache the collected samples for some time, in case there are network issues, or the Netdata Parent is restarted for maintenance.
The memory required per sample in these modes is 4 bytes:
- `ram` mode uses `mmap()` behind the scenes, and can be incremented in steps of 1024 samples (4KiB). Mode `ram` allows the use of the Linux kernel memory deduplication feature (Kernel Same-page Merging, or KSM) to deduplicate Netdata ring buffers and save memory.
- `alloc` mode can be sized for any number of samples per metric. KSM cannot be used in this mode.
To configure database mode `ram` or `alloc`, in `netdata.conf`, set the following:
- `[db].mode` to either `ram` or `alloc`.
- `[db].retention` to the number of samples the ring buffers should maintain. For `ram` if the value set is not a multiple of 1024, the next multiple of 1024 will be used.
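For example, a minimal `netdata.conf` sketch for a memory-only Child that streams to a Parent (the retention value is an illustrative assumption; for `ram` mode it is rounded up to the next multiple of 1024):

```
[db]
    mode = ram
    # number of samples to keep per metric in the ring buffers
    retention = 3600
```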
## `dbengine`
`dbengine` supports up to 5 tiers. By default, 3 tiers are used, like this:
| Tier | Resolution | Uncompressed Sample Size |
|:--------:|:--------------------------------------------------------------------------------------------:|:------------------------:|
| `tier0`  | native resolution (metrics collected per second are stored per second)                        | 4 bytes                  |
| `tier1` | 60 iterations of `tier0`, so when metrics are collected per-second, this tier is per-minute. | 16 bytes |
| `tier2` | 60 iterations of `tier1`, so when metrics are collected per second, this tier is per-hour. | 16 bytes |
Data are saved to disk compressed, so the actual size on disk varies depending on compression efficiency.
`dbengine` tiers are overlapping, so higher tiers include a down-sampled version of the samples in lower tiers:
```mermaid
gantt
dateFormat YYYY-MM-DD
tickInterval 1week
axisFormat
todayMarker off
tier0, 14d :a1, 2023-12-24, 7d
tier1, 60d :a2, 2023-12-01, 30d
tier2, 365d :a3, 2023-11-02, 59d
```
## Disk Space and Metrics Retention
You can find information about the current disk utilization of a Netdata Parent at <http://agent-ip:19999/api/v2/info>. The response looks like this:
```json
{
// more information about the agent
// near the end:
"db_size": [
{
"tier": 0,
"disk_used": 1677528462156,
"disk_max": 1677721600000,
"disk_percent": 99.9884881,
"from": 1706201952,
"to": 1707401946,
"retention": 1199994,
"expected_retention": 1200132,
"currently_collected_metrics": 2198777
},
{
"tier": 1,
"disk_used": 838123468064,
"disk_max": 838860800000,
"disk_percent": 99.9121032,
"from": 1702885800,
"to": 1707401946,
"retention": 4516146,
"expected_retention": 4520119,
"currently_collected_metrics": 2198777
},
{
"tier": 2,
"disk_used": 334329683032,
"disk_max": 419430400000,
"disk_percent": 79.710408,
"from": 1679670000,
"to": 1707401946,
"retention": 27731946,
"expected_retention": 34790871,
"currently_collected_metrics": 2198777
}
]
}
```
In this example:
- `tier` is the database tier.
- `disk_used` is the currently used disk space in bytes.
- `disk_max` is the configured max disk space in bytes.
- `disk_percent` is the current disk space utilization for this tier.
- `from` is the first (oldest) timestamp in the database for this tier.
- `to` is the latest (newest) timestamp in the database for this tier.
- `retention` is the current retention of the database for this tier, in seconds (divide by 3600 for hours, divide by 86400 for days).
- `expected_retention` is the expected retention in seconds when `disk_percent` will be 100 (divide by 3600 for hours, divide by 86400 for days).
- `currently_collected_metrics` is the number of unique time-series currently being collected for this tier.
The estimated number of samples on each tier can be calculated as follows:
```
estimated number of samples = retention / sample duration * currently_collected_metrics
```
So, for our example above:
| Tier | Sample Duration (seconds) | Estimated Number of Samples | Disk Space Used | Current Retention (days) | Expected Retention (days) | Bytes Per Sample |
|:-------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------:|:-------------------------:|:----------------:|
| `tier0` | 1 | 2.64 trillion samples | 1.56 TiB | 13.8 | 13.9 | 0.64 |
| `tier1` | 60 | 165.5 billion samples | 780 GiB | 52.2 | 52.3 | 5.01 |
| `tier2` | 3600 | 16.9 billion samples | 311 GiB | 320.9 | 402.7 | 19.73 |
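The arithmetic of the table above can be sketched in a few lines of Python, using the `tier0` figures from the example `/api/v2/info` response (an illustration, not part of Netdata):

```python
def tier_stats(retention_s, sample_duration_s, metrics, disk_used_bytes):
    """Estimate the sample count and the on-disk footprint per sample of a tier."""
    samples = retention_s / sample_duration_s * metrics
    bytes_per_sample = disk_used_bytes / samples
    return samples, bytes_per_sample

# tier0 figures from the example response above
samples, bps = tier_stats(
    retention_s=1199994,
    sample_duration_s=1,
    metrics=2198777,
    disk_used_bytes=1677528462156,
)
print(f"{samples / 1e12:.2f} trillion samples, {bps:.2f} bytes/sample")
# 2.64 trillion samples, 0.64 bytes/sample
```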
Note: as you can see in this example, the disk footprint per sample of `tier2` is bigger than the uncompressed sample size (19.73 bytes vs 16 bytes). This is due to the fact that samples are organized into pages and pages into extents. When Netdata is restarted frequently, it saves all data prematurely, before filling up entire pages and extents, leading to increased overheads per sample.
To configure retention, in `netdata.conf`, set the following:
- `[db].mode` to `dbengine`.
- `[db].dbengine multihost disk space MB`, this is the max disk size for `tier0`. The default is 256MiB.
- `[db].dbengine tier 1 multihost disk space MB`, this is the max disk space for `tier1`. The default is 50% of `tier0`.
- `[db].dbengine tier 2 multihost disk space MB`, this is the max disk space for `tier2`. The default is 50% of `tier1`.
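Put together, a `netdata.conf` sketch for these retention settings (the sizes are illustrative assumptions, not recommendations):

```
[db]
    mode = dbengine
    # max disk space for tier0 (native resolution)
    dbengine multihost disk space MB = 2048
    # max disk space for tier1 (default: 50% of tier0)
    dbengine tier 1 multihost disk space MB = 1024
    # max disk space for tier2 (default: 50% of tier1)
    dbengine tier 2 multihost disk space MB = 512
```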

@@ -0,0 +1,60 @@
# RAM Requirements
With the default configuration of database tiers, Netdata needs about 16KiB per unique metric collected, independently of the data collection frequency.
Netdata supports memory ballooning and automatically sizes and limits the memory used, based on the metrics concurrently being collected.
## On Production Systems, Netdata Children
With default settings, Netdata should run with 100MB to 200MB of RAM, depending on the number of metrics being collected.
This number can be lowered by limiting the number of database tiers or switching database modes. For more information check [Disk Requirements and Retention](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/sizing-netdata-agents/disk-requirements-and-retention.md).
## On Metrics Centralization Points, Netdata Parents
The general formula, with the default configuration of database tiers, is:
```
memory = UNIQUE_METRICS x 16KiB + CONFIGURED_CACHES
```
The default `CONFIGURED_CACHES` is 32MiB.
For 1 million concurrently collected time-series (independently of their data collection frequency), the memory required is:
```
UNIQUE_METRICS = 1000000
CONFIGURED_CACHES = 32MiB
(UNIQUE_METRICS * 16KiB / 1024 in MiB) + CONFIGURED_CACHES =
( 1000000 * 16KiB / 1024 in MiB) + 32 MiB =
15657 MiB =
about 16 GiB
```
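The same calculation as a small Python helper (a sketch of the formula above, not a Netdata API):

```python
def parent_memory_mib(unique_metrics, configured_caches_mib=32):
    """Estimate Netdata Parent memory in MiB, with the default database tiers."""
    per_metric_kib = 16  # ~16KiB per unique metric, per the formula above
    return unique_metrics * per_metric_kib / 1024 + configured_caches_mib

print(parent_memory_mib(1_000_000))  # 15657.0 MiB, i.e. about 16 GiB
```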
There are 2 cache sizes that can be configured in `netdata.conf`:
1. `[db].dbengine page cache size MB`: this is the main cache that keeps metrics data into memory. When data are not found in it, the extent cache is consulted, and if not found in that either, they are loaded from disk.
2. `[db].dbengine extent cache size MB`: this is the compressed extent cache. It keeps in memory compressed data blocks, as they appear on disk, to avoid reading them again. Data found in the extent cache but not in the main cache have to be uncompressed to be queried.
Both of them are dynamically adjusted to use some of the total memory computed above. The configuration in `netdata.conf` allows providing additional memory to them, increasing their caching efficiency.
## I have a Netdata Parent that is also a systemd-journal logs centralization point, what should I know?
Logs usually require significantly more disk space and I/O bandwidth than metrics. For optimal performance, we recommend storing metrics and logs on separate, independent disks.
Netdata uses direct-I/O for its database, so that it does not pollute the system caches with its own data. We want Netdata to be a nice citizen when it runs side-by-side with production applications, so this was required to guarantee that Netdata does not affect the operation of databases or other sensitive applications running on the same servers.
To optimize disk I/O, Netdata maintains its own private caches. The default settings of these caches are automatically adjusted to the minimum required size for acceptable metrics query performance.
`systemd-journal` on the other hand, relies on operating system caches for improving the query performance of logs. When the system lacks free memory, querying logs leads to increased disk I/O.
If you are experiencing slow responses and increased disk reads when metrics queries run, we suggest dedicating some more RAM to Netdata.
We frequently see that the following strategy gives best results:
1. Start the Netdata Parent, send all the load you expect it to have and let it stabilize for a few hours. Netdata will now use the minimum memory it believes is required for smooth operation.
2. Check the available system memory.
3. Set the page cache in `netdata.conf` to use 1/3 of the available memory.
This will give Netdata queries more caches, while leaving plenty of available memory for logs and the operating system.
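As a sketch, if 24 GiB of memory is available after step 2, step 3 translates to something like the following (the value is an illustrative assumption):

```
[db]
    # ~1/3 of the available memory, in MiB
    dbengine page cache size MB = 8192
```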

@@ -0,0 +1,70 @@
# Netdata Agent Versions & Platforms
Netdata is evolving rapidly and new features are added at a constant pace. Therefore, we maintain a frequent release cadence to deliver all these features to users as soon as possible.
Netdata Agents are available in two release channels:
| Release Channel | Release Frequency | Support Policy & Features | Support Duration | Backwards Compatibility |
|:---------------:|:---------------------------------------------:|:---------------------------------------------------------:|:----------------------------------------:|:---------------------------------------------------------------------------------:|
| Stable | At most once per month, usually every 45 days | Receiving bug fixes and security updates between releases | Up to the 2nd stable release after them | Previous configuration semantics and data are supported by newer releases |
| Nightly | Every night at 00:00 UTC | Latest pre-released features | Up to the 2nd nightly release after them | Configuration and data of unreleased features may change between nightly releases |
> "Support Duration" defines the period during which we consider a release to be actively used in production systems, and during which all features of Netdata should work as they did on the day it was released. After that, previous releases stop receiving bug fixes and security updates. All users are advised to update to the latest release to get the latest bug fixes.
## Binary Distribution Packages
Binary distribution packages are provided by Netdata, via CI integration, for the following platforms and architectures:
| Platform | Platform Versions | Released Packages Architecture | Format |
|:-----------------------:|:--------------------------------:|:------------------------------------------------:|:------------:|
| Docker under Linux | 19.03 and later | `x86_64`, `i386`, `ARMv7`, `AArch64`, `POWER8+` | docker image |
| Static Builds | - | `x86_64`, `ARMv6`, `ARMv7`, `AArch64`, `POWER8+` | .gz.run |
| Alma Linux | 8.x, 9.x | `x86_64`, `AArch64` | RPM |
| Amazon Linux | 2, 2023 | `x86_64`, `AArch64` | RPM |
| CentOS                  | 7.x                              | `x86_64`                                         | RPM          |
| Debian | 10.x, 11.x, 12.x | `x86_64`, `i386`, `ARMv7`, `AArch64` | DEB |
| Fedora | 37, 38, 39 | `x86_64`, `AArch64` | RPM |
| OpenSUSE | Leap 15.4, Leap 15.5, Tumbleweed | `x86_64`, `AArch64` | RPM |
| Oracle Linux | 8.x, 9.x | `x86_64`, `AArch64` | RPM |
| Redhat Enterprise Linux | 7.x | `x86_64` | RPM |
| Redhat Enterprise Linux | 8.x, 9.x | `x86_64`, `AArch64` | RPM |
| Ubuntu | 20.04, 22.04, 23.10 | `x86_64`, `i386`, `ARMv7` | DEB |
> IMPORTANT: Linux distributions frequently provide binary packages of Netdata. However, the packages you will find in the distributions' repositories may be outdated, incomplete, missing significant features or completely broken. We recommend using the packages we provide.
## Third party Supported Binary Packages
The following distributions always provide the latest stable version of Netdata:
| Platform | Platform Versions | Released Packages Architecture |
|:----------:|:-----------------:|:------------------------------------:|
| Arch Linux | Latest | All the Arch supported architectures |
| MacOS Brew | Latest | All the Brew supported architectures |
## Builds from Source
We guarantee that Netdata builds from source on the platforms for which we provide automated binary packages. These platforms are automatically checked via our CI, and fixes are always applied so that new code can be merged into the nightly versions.
The following builds from source should usually work, although we don't regularly monitor if there are issues:
| Platform | Platform Versions |
|:-----------------------------------:|:--------------------------:|
| Linux Distributions | Latest unreleased versions |
| FreeBSD and derivatives | 13-STABLE |
| Gentoo and derivatives | Latest |
| Arch Linux and derivatives | latest from AUR |
| MacOS | 11, 12, 13 |
| Linux under Microsoft Windows (WSL) | Latest |
## Static Builds and Unsupported Linux Versions
The static builds of Netdata can be used on any Linux platform of the supported architectures. Their only requirement is a working Linux kernel, of any version. Everything else Netdata needs to run is included in the package itself.
Static builds usually miss certain features that require operating-system support and cannot be provided in a generic way. These features include:
- IPMI hardware sensors support
- systemd-journal features
- eBPF related features
When platforms are removed from the [Binary Distribution Packages](https://github.com/netdata/netdata/blob/master/packaging/makeself/README.md) list, they default to installing or updating Netdata via a static build. This may mean that, after platforms become EOL, Netdata on them may lose some of its features. We recommend upgrading the operating system before it becomes EOL, to continue using all the features of Netdata.

@@ -0,0 +1,134 @@
# Netdata Cloud
Netdata Cloud is a service that complements Netdata installations. It is a key component in achieving optimal cost structure for large scale observability.
Technically, Netdata Cloud is a thin control plane that allows the Netdata ecosystem to be a virtually unlimited scalable and flexible observability pipeline. With Netdata Cloud, this observability pipeline can span multiple teams, cloud providers, data centers and services, while remaining a uniform and highly integrated infrastructure, providing real-time and high-fidelity insights.
```mermaid
flowchart TB
NC("<b>☁️ Netdata Cloud</b>
access from anywhere,
horizontal scalability,
role based access,
custom dashboards,
central notifications")
Users[["<b>✨ Unified Dashboards</b>
across the infrastructure,
multi-cloud, hybrid-cloud"]]
Notifications["<b>🔔 Alert Notifications</b>
Slack, e-mail, Mobile App,
PagerDuty, and more"]
Users <--> NC
NC -->|deduplicated| Notifications
subgraph On-Prem Infrastructure
direction TB
Agents("<b>🌎 Netdata Agents</b>
Standalone,
Children, Parents
(possibly overlapping)")
TimeSeries[("<b>Time-Series</b>
metric samples
database")]
PrivateAgents("<b>🔒 Private
Netdata Agents</b>")
Agents <--> TimeSeries
Agents ---|stream| PrivateAgents
end
NC <-->|secure connection| Agents
```
Netdata Cloud provides the following features, on top of what the Netdata agents already provide:
1. **Horizontal scalability**: Netdata Cloud allows scaling the observability infrastructure horizontally, by adding more independent Netdata Parents and Children. It can aggregate such, otherwise independent, observability islands into one uniform and integrated infrastructure.
Netdata Cloud is a fundamental component for achieving an optimal cost structure and flexibility, in structuring observability the way that is best suited for each case.
2. **Role Based Access Control (RBAC)**: Netdata Cloud has all the mechanisms for user-management and access control. It allows assigning all users a role, segmenting the infrastructure into rooms, and associating rooms with roles and users.
3. **Access from anywhere**: Netdata agents are installed on-prem and this is where all your data are always stored. Netdata Cloud allows querying all the Netdata agents (Standalone, Children and Parents) in real-time when dashboards are accessed via Netdata Cloud.
This enables a much simpler access control, eliminating the complexities of setting up VPNs to access observability, and the bandwidth costs for centralizing all metrics to one place.
4. **Central dispatch of alert notifications**: Netdata Cloud allows controlling the dispatch of alert notifications centrally. By default, all Netdata agents (Standalone, Children and Parents) send their own notifications. This becomes increasingly complex as the infrastructure grows. So, Netdata Cloud steps in to simplify this process and provide central control of all notifications.
Netdata Cloud also enables the use of the **Netdata Mobile App** offering mobile push notifications for all users in commercial plans.
5. **Custom Dashboards**: Netdata Cloud enables the creation, storage and sharing of custom dashboards.
Custom dashboards are created directly from the UI, without the need for learning a query language. Netdata Cloud provides all the APIs to the Netdata dashboards to store, browse and retrieve custom dashboards created by all users.
6. **Advanced Customization**: Netdata Cloud provides all the APIs for the dashboard to have different default settings per space, per room and per user, allowing administrators and users to customize the Netdata dashboards and charts the way they see fit.
## Data Exposed to Netdata Cloud
Netdata Cloud is a thin layer on top of Netdata agents. It does not receive the samples collected, or the logs Netdata agents maintain.
This is a key design decision for Netdata. If we were centralizing metric samples and logs, Netdata would have the same constraints and cost structure as other observability solutions, and we would be forced to lower metrics resolution, filter out metrics and, eventually, significantly increase the cost of observability.
Instead, Netdata Cloud receives and stores only metadata related to the metrics collected, such as the nodes collecting metrics and their labels, the metric names, their labels and their retention, the data collection plugins and modules running, the configured alerts and their transitions.
This information is a small fraction of the total information maintained by Netdata agents, allowing Netdata Cloud to remain high-resolution, high-fidelity and real-time, while being able to:
- dispatch alerts centrally for all alert transitions.
- know which Netdata agents to query when users view the dashboards.
Metric samples and logs are transferred via Netdata Cloud to your web browser only when you view them via Netdata Cloud. Even then, Netdata Cloud does not store this information. It only aggregates the responses of multiple Netdata agents into a single response for your web browser to visualize.
## High-Availability
You can subscribe to Netdata Cloud updates at the [Netdata Cloud Status](https://status.netdata.cloud/) page.
Netdata Cloud is a highly available, auto-scalable solution; however, being a monitoring solution, we need to ensure dashboards remain accessible during a crisis.
Netdata agents provide the same dashboard Netdata Cloud provides, with the following limitations:
1. Netdata agents (Children and Parents) dashboards are limited to their databases, while on Netdata Cloud the dashboard presents the entire infrastructure, from all Netdata agents connected to it.
2. When you are not logged-in or the agent is not connected to Netdata Cloud, certain features of the Netdata agent dashboard will not be available.
When you are logged-in and the agent is connected to Netdata Cloud, the agent dashboard has the same functionality as Netdata Cloud.
To ensure dashboard high availability, Netdata agent dashboards remain accessible directly, even when connectivity between Children and Parents or Netdata Cloud faces issues. This allows using the individual Netdata agents' dashboards during a crisis, at different levels of aggregation.
## Fidelity and Insights
Netdata Cloud queries Netdata agents, so it provides exactly the same fidelity and insights Netdata agents provide. Dashboards have the same resolution, the same number of metrics, exactly the same data.
## Performance
The Netdata agent and Netdata Cloud have similar query performance, but there are additional network latencies involved when the dashboards are viewed via Netdata Cloud.
Accessing Netdata agents on the same LAN has marginal network latency, and their response time is only affected by the queries. However, accessing the same Netdata agents via Netdata Cloud involves a longer network round-trip, which looks like this:
1. Your web browser makes a request to Netdata Cloud.
2. Netdata Cloud sends the request to your Netdata agents. If multiple Netdata agents are involved, they are queried in parallel.
3. Netdata Cloud receives their responses and aggregates them into a single response.
4. Netdata Cloud replies to your web browser.
If you are sitting on the same LAN as the Netdata agents, the latency will be 2 times the round-trip network latency between this LAN and Netdata Cloud.
However, when there are multiple Netdata agents involved, the queries will be faster compared to a monitoring solution that has one centralization point. Netdata Cloud splits each query into multiple parts, and each of the Netdata agents involved only performs a small part of the original query. So, when querying a large infrastructure, you enjoy the combined power of all your Netdata agents, which is usually considerably higher than any single-centralization-point monitoring solution.
## Does Netdata Cloud require Observability Centralization Points?
No. Any or all Netdata agents can be connected to Netdata Cloud.
We recommend creating [observability centralization points](https://github.com/netdata/netdata/blob/master/docs/observability-centralization-points/README.md), as required for operational efficiency (ephemeral nodes, teams or services isolation, central control of alerts, production systems performance), security policies (internet isolation), or cost optimization (use existing capacities before allocating new ones).
We suggest reviewing the [Best Practices for Observability Centralization Points](https://github.com/netdata/netdata/blob/master/docs/observability-centralization-points/best-practices.md).
## When I have Netdata Parents, do I need to connect Netdata Children to Netdata Cloud too?
No, it is not needed, but it provides high-availability.
When Netdata Parents are connected to Netdata Cloud, all their Netdata Children are available, via these Parents.
When multiple Netdata Parents maintain a database for the same Netdata Children (e.g. clustered Parents, or Parents and Grandparents), Netdata Cloud is able to detect the unique nodes in an infrastructure and query each node only once, using one of the available Parents.
Netdata Cloud prefers:
- The most distant (from the Child) Parent available, when doing metrics visualization queries (since usually these Parents have been added for this purpose).
- The closest (to the Child) Parent available, for [Top Monitoring](https://github.com/netdata/netdata/blob/master/docs/cloud/netdata-functions.md) (since top-monitoring provides live data, like the processes running, the list of sockets open, etc). The streaming protocol of Netdata Parents and Children is able to forward such requests to the right child, via the Parents, to respond with live and accurate data.
Netdata Children may be connected to Netdata Cloud for high-availability, in case the Netdata Parents are unreachable.

@@ -0,0 +1,77 @@
# Netdata Cloud On-Prem
Netdata Cloud is built as microservices and is orchestrated by a Kubernetes cluster, providing a highly available and auto-scaled observability platform.
The overall architecture looks like this:
```mermaid
flowchart TD
agents("🌍 <b>Netdata Agents</b><br/>Users' infrastructure<br/>Netdata Children & Parents")
users[["🔥 <b>Unified Dashboards</b><br/>Integrated Infrastructure<br/>Dashboards"]]
ingress("🛡️ <b>Ingress Gateway</b><br/>TLS termination")
traefik((("🔒 <b>Traefik</b><br/>Authentication &<br/>Authorization")))
emqx(("📤 <b>EMQX</b><br/>Agents Communication<br/>Message Bus<br/>MQTT"))
pulsar(("⚡ <b>Pulsar</b><br/>Internal Microservices<br/>Message Bus"))
frontend("🌐 <b>Front-End</b><br/>Static Web Files")
auth("👨‍💼 <b>Users &amp; Agents</b><br/>Authorization<br/>Microservices")
spaceroom("🏡 <b>Spaces, Rooms,<br/>Nodes, Settings</b><br/>Microservices for<br/>managing Spaces,<br/>Rooms, Nodes and<br/>related settings")
charts("📈 <b>Metrics & Queries</b><br/>Microservices for<br/>dispatching queries<br/>to Netdata agents")
alerts("🔔 <b>Alerts & Notifications</b><br/>Microservices for<br/>tracking alert<br/>transitions and<br/>deduplicating alerts")
sql[("✨ <b>PostgreSQL</b><br/>Users, Spaces, Rooms,<br/>Agents, Nodes, Metric<br/>Names, Metrics Retention,<br/>Custom Dashboards,<br/>Settings")]
redis[("🗒️ <b>Redis</b><br/>Caches needed<br/>by Microservices")]
elk[("🗞️ <b>Elasticsearch</b><br/>Feed Events Database")]
bridges("🤝 <b>Input & Output</b><br/>Microservices bridging<br/>agents to internal<br/>components")
notifications("📢 <b>Notifications Integrations</b><br/>Dispatch alert<br/>notifications to<br/>3rd party services")
feed("📝 <b>Feed & Events</b><br/>Microservices for<br/>managing the events feed")
users --> ingress
agents --> ingress
ingress --> traefik
traefik ==>|agents<br/>websockets| emqx
traefik -.- auth
traefik ==>|http| spaceroom
traefik ==>|http| frontend
traefik ==>|http| charts
traefik ==>|http| alerts
spaceroom o-...-o pulsar
spaceroom -.- redis
spaceroom x-..-x sql
spaceroom -.-> feed
charts o-.-o pulsar
charts -.- redis
charts x-.-x sql
charts -..-> feed
alerts o-.-o pulsar
alerts -.- redis
alerts x-.-x sql
alerts -..-> feed
auth o-.-o pulsar
auth -.- redis
auth x-.-x sql
auth -.-> feed
feed <--> elk
alerts ----> notifications
%% auth ~~~ spaceroom
emqx <.-> bridges o-..-o pulsar
```
## Requirements
The following components are required to run Netdata Cloud On-Prem:
- **Kubernetes cluster** version 1.23+
- **Kubernetes metrics server** (for autoscaling)
- **TLS certificate** for secure connections. A single endpoint is required, but there is an option to split the frontend, API, and MQTT endpoints. The certificate must be trusted by all entities connecting to it.
- Default **storage class configured and working** (persistent volumes based on SSDs are preferred)
The following 3rd party components are used, which can be pulled with the `netdata-cloud-dependency` package we provide:
- **Ingress controller** supporting HTTPS
- **PostgreSQL** version 13.7 (main database for all metadata Netdata Cloud maintains)
- **EMQX** version 5.11 (MQTT Broker that allows Agents to send messages to the On-Prem Cloud)
- **Apache Pulsar** version 2.10+ (message broker for inter-container communication)
- **Traefik** version 2.7.x (internal API Gateway)
- **Elasticsearch** version 8.8.x (stores the feed of events)
- **Redis** version 6.2 (caching)
- imagePullSecret (our ECR repos are secured)
Keep in mind though that the pulled versions are not configured properly for production use. Customers of Netdata Cloud On-Prem are expected to configure these applications according to their needs and policies for production use. Netdata Cloud On-Prem can be configured to use all these applications as a shared resource from other existing production installations.
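Before installing, the cluster prerequisites above can be quickly sanity-checked with standard `kubectl` commands (a sketch; it assumes `kubectl` is already configured against the target cluster):

```bash
# the Kubernetes server version must be 1.23+
kubectl version

# the metrics server must be working (required for autoscaling)
kubectl top nodes

# a default storage class must be configured and working
kubectl get storageclass
```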

View File


View File

@ -0,0 +1,212 @@
# Netdata Cloud On-Prem Installation
This installation guide assumes that the prerequisites for installing Netdata Cloud On-Prem are satisfied. For more information, please refer to the [requirements documentation](Netdata-Cloud-On-Prem.md#requirements).
## Installation Requirements
The following components are required to install Netdata Cloud On-Prem:
- **AWS** CLI
- **Helm** version 3.12+ with OCI Configuration (explained in the installation section)
- **Kubectl**
## Preparations for Installation
### Configure AWS CLI
Install [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).
There are two options for configuring the `aws` CLI to work with the provided credentials. The first is to set the environment variables:
```bash
export AWS_ACCESS_KEY_ID=<your_secret_id>
export AWS_SECRET_ACCESS_KEY=<your_secret_key>
```
The second one is to use an interactive shell:
```bash
aws configure
```
### Configure helm to use the secured ECR repository
Using the `aws` command, we will generate a token for helm to access the secured ECR repository:
```bash
aws ecr get-login-password --region us-east-1 | helm registry login --username AWS --password-stdin 362923047827.dkr.ecr.us-east-1.amazonaws.com/netdata-cloud-onprem
```
After this step, you should be able to pull the helm charts:
```bash
helm pull oci://362923047827.dkr.ecr.us-east-1.amazonaws.com/netdata-cloud-dependency --untar #optional
helm pull oci://362923047827.dkr.ecr.us-east-1.amazonaws.com/netdata-cloud-onprem --untar
```
Local folders with the newest versions of the helm charts should appear in your working directory.
## Installation
Netdata provides access to two helm charts:
1. `netdata-cloud-dependency` - required applications for `netdata-cloud-onprem`.
2. `netdata-cloud-onprem` - the application itself + provisioning
### netdata-cloud-dependency
This helm chart is designed to install the necessary applications:
- Redis
- Elasticsearch
- EMQX
- Apache Pulsar
- PostgreSQL
- Traefik
- Mailcatcher
- k8s-ecr-login-renew
- kubernetes-ingress
Although we provide an easy way to install all these applications, we expect users of Netdata Cloud On-Prem to provide production-quality versions of them. Every configuration option is available through `values.yaml` in the folder that contains your `netdata-cloud-dependency` helm chart. All configuration options are described in the `README.md` that is part of the helm chart.
Each component can be enabled or disabled individually, via true/false switches in `values.yaml`. This makes it easier to migrate to production-grade components gradually.
Unless you prefer otherwise, `k8s-ecr-login-renew` is responsible for calling the AWS API to regenerate the token. This token is then injected into the secret that every node uses for authentication with the secured ECR when pulling the images.
The default setting of `.global.imagePullSecrets` in the `values.yaml` of `netdata-cloud-onprem` is configured to work out of the box with the dependency helm chart.
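As an illustration, the relevant parts of `values.yaml` might look like the sketch below. The key names shown here are assumptions for illustration; the authoritative list is in the `README.md` of each helm chart:

```yaml
# sketch only: enable or disable each bundled dependency individually
postgresql:
  enabled: true
emqx:
  enabled: true
traefik:
  enabled: false   # e.g. when an existing production Traefik is used

global:
  # the pull secret maintained by k8s-ecr-login-renew (name is illustrative)
  imagePullSecrets:
    - name: netdata-cloud-ecr
```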
To install the helm chart, save your changes in `values.yaml` and execute:
```shell
cd [your helm chart location]
helm upgrade --wait --install netdata-cloud-dependency -n netdata-cloud --create-namespace -f values.yaml .
```
Keep in mind that `netdata-cloud-dependency` is provided only as a proof of concept. Users installing Netdata Cloud On-Prem should properly configure these components.
### netdata-cloud-onprem
Every configuration option is available in `values.yaml` in the folder that contains your `netdata-cloud-onprem` helm chart. All configuration options are described in the `README.md` which is a part of the helm chart.
#### Installing Netdata Cloud On-Prem
```shell
cd [your helm chart location]
helm upgrade --wait --install netdata-cloud-onprem -n netdata-cloud --create-namespace -f values.yaml .
```
##### Important notes
1. Installation takes care of provisioning the resources with migration services.
2. During the first installation, a secret called `netdata-cloud-common` is created. It contains several randomly generated entries. Deleting the helm chart, or even reinstalling the whole On-Prem installation, will not delete this secret, unless it is manually deleted by a Kubernetes administrator. The content of this secret is extremely important: the strings it contains are essential parts of the encryption. Losing or changing the data it contains will result in data loss.
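Given how critical this secret is, it is a good idea to keep an off-cluster backup of it right after the first installation. A minimal sketch, using standard `kubectl` commands (store the resulting file securely, as it contains encryption material):

```bash
# back up the netdata-cloud-common secret to a local file
kubectl get secret netdata-cloud-common -n netdata-cloud -o yaml > netdata-cloud-common.backup.yaml

# to restore it later, apply it before reinstalling:
# kubectl apply -f netdata-cloud-common.backup.yaml
```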
## Short description of Netdata Cloud microservices
#### cloud-accounts-service
Responsible for user registration & authentication. Manages user account information.
#### cloud-agent-data-ctrl-service
Forwards requests from the cloud to the relevant agents.
The requests include:
- Fetching chart metadata from the agent
- Fetching chart data from the agent
- Fetching function data from the agent
#### cloud-agent-mqtt-input-service
Forwards MQTT messages emitted by the agent related to the agent entities to the internal Pulsar broker. These include agent connection state updates.
#### cloud-agent-mqtt-output-service
Forwards Pulsar messages emitted in the cloud related to the agent entities to the MQTT broker. From there, the messages reach the relevant agent.
#### cloud-alarm-config-mqtt-input-service
Forwards MQTT messages emitted by the agent related to the alarm-config entities to the internal Pulsar broker. These include the data for the alarm configuration as seen by the agent.
#### cloud-alarm-log-mqtt-input-service
Forwards MQTT messages emitted by the agent related to the alarm-log entities to the internal Pulsar broker. These contain data about the alarm transitions that occurred in an agent.
#### cloud-alarm-mqtt-output-service
Forwards Pulsar messages emitted in the cloud related to the alarm entities to the MQTT broker. From there, the messages reach the relevant agent.
#### cloud-alarm-processor-service
Persists latest alert statuses received from the agent in the cloud.
Aggregates alert statuses from relevant node instances.
Exposes API endpoints to fetch alert data for visualization on the cloud.
Determines if notifications need to be sent when alert statuses change and emits relevant messages to Pulsar.
Exposes API endpoints to store and return notification-silencing data.
#### cloud-alarm-streaming-service
Responsible for starting the alert stream between the agent and the cloud.
Ensures that messages are processed in the correct order, and starts a reconciliation process between the cloud and the agent if out-of-order processing occurs.
#### cloud-charts-mqtt-input-service
Forwards MQTT messages emitted by the agent related to the chart entities to the internal Pulsar broker. These include the chart metadata that is used to display relevant charts on the cloud.
#### cloud-charts-mqtt-output-service
Forwards Pulsar messages emitted in the cloud related to the charts entities to the MQTT broker. From there, the messages reach the relevant agent.
#### cloud-charts-service
Exposes API endpoints to fetch the chart metadata.
Forwards data requests via the `cloud-agent-data-ctrl-service` to the relevant agents to fetch chart data points.
Exposes API endpoints to call various other endpoints on the agent, for instance, functions.
#### cloud-custom-dashboard-service
Exposes API endpoints to fetch and store custom dashboard data.
#### cloud-environment-service
Serves as the first contact point between the agent and the cloud.
Returns authentication and MQTT endpoints to connecting agents.
#### cloud-feed-service
Processes incoming feed events and stores them in Elasticsearch.
Exposes API endpoints to fetch feed events from Elasticsearch.
#### cloud-frontend
Contains the on-prem cloud website. Serves static content.
#### cloud-iam-user-service
Acts as a middleware for authentication on most of the API endpoints. Validates incoming token headers, injects the relevant ones, and forwards the requests.
#### cloud-metrics-exporter
Exports various metrics from an On-Prem Cloud installation. Uses the Prometheus metric exposition format.
#### cloud-netdata-assistant
Exposes API endpoints to fetch a human-friendly explanation of various Netdata configuration options, namely the alerts.
#### cloud-node-mqtt-input-service
Forwards MQTT messages emitted by the agent related to the node entities to the internal Pulsar broker. These include the node metadata as well as their connectivity state, either direct or via parents.
#### cloud-node-mqtt-output-service
Forwards Pulsar messages emitted in the cloud related to the node entities to the MQTT broker. From there, the messages reach the relevant agent.
#### cloud-notifications-dispatcher-service
Exposes API endpoints to handle integrations.
Handles incoming notification messages and uses the relevant channels (email, Slack, etc.) to notify the relevant users.
#### cloud-spaceroom-service
Exposes API endpoints to fetch and store relations between agents, nodes, spaces, users, and rooms.
Acts as a provider of authorization for other cloud endpoints.
Exposes API endpoints to authenticate agents connecting to the cloud.

View File

@ -0,0 +1,70 @@
# Netdata Cloud On-Prem PoC without k8s
These instructions are about installing a light version of Netdata Cloud, for clients who do not have a Kubernetes cluster installed. This setup is **only for demonstration purposes**, as it has no built-in resiliency on failures of any kind.
## Requirements
- Ubuntu 22.04 (clean installation will work best).
- 10 CPU Cores and 24 GiB of memory.
- Shell access with sudo privileges.
- TLS certificate for Netdata Cloud On-Prem PoC. A single endpoint is required. The certificate must be trusted by all entities connecting to this installation.
- AWS ID and License Key - we should have provided this to you, if not contact us: <info@netdata.cloud>.
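If you only need a certificate for a quick demonstration, a self-signed one can be generated with `openssl`. This is a sketch: the hostname is a placeholder, and remember that every entity connecting to the PoC must be configured to trust this certificate:

```bash
# generate a self-signed TLS certificate and key (demonstration only)
# replace netdata-cloud.example.com with the URL of your PoC
openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
  -keyout onprem.key -out onprem.crt \
  -subj "/CN=netdata-cloud.example.com" \
  -addext "subjectAltName=DNS:netdata-cloud.example.com"
```

The resulting files can then be passed to `provision.sh` via `-certificate-path` and `-private-key-path`.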
To install the whole environment, log in to the designated host and run:
```bash
curl https://netdata-cloud-netdata-static-content.s3.amazonaws.com/provision.sh -o provision.sh
chmod +x provision.sh
sudo ./provision.sh install \
-key-id "" \
-access-key "" \
-onprem-license-key "" \
-onprem-license-subject "" \
-onprem-url "" \
-certificate-path "" \
-private-key-path ""
```
What does the script do during installation?
1. Prompts the user to provide:
- `-key-id` - AWS ECR access key ID.
- `-access-key` - AWS ECR Access Key.
- `-onprem-license-key` - Netdata Cloud On-Prem license key.
- `-onprem-license-subject` - Netdata Cloud On-Prem license subject.
- `-onprem-url` - URL for the On-Prem installation (without the http(s) protocol).
- `-certificate-path` - path to your PEM encoded certificate.
- `-private-key-path` - path to your PEM encoded key.
2. After all the above are provided, installation will begin. The script will install:
- Helm
- Kubectl
- AWS CLI
- K3s cluster (single node)
3. When all the required software is installed, the script starts provisioning the K3s cluster with the gathered data.
After cluster provisioning, Netdata Cloud is ready to be used.
> WARNING:
> This script will automatically expose not only Netdata Cloud but also Mailcatcher under `<URL from point 1.>/mailcatcher`.
## How to log in?
Only login via email works without further configuration. Every email this Netdata Cloud On-Prem installation sends will appear in Mailcatcher, which acts as the SMTP server and provides a simple GUI to read the emails.
Steps:
1. Open the Netdata Cloud On-Prem PoC in a web browser at the URL you specified.
2. Provide your email and use the button to confirm.
3. Mailcatcher catches all the emails, so go to `<URL from point 1.>/mailcatcher`. Find your email and click the link.
4. You are now logged into Netdata Cloud. Add your first nodes!
## How to remove Netdata Cloud On-Prem PoC?
To uninstall the whole PoC, use the same script that installed it, with the `uninstall` switch.
```shell
cd <script dir>
sudo ./provision.sh uninstall
```

View File

@ -0,0 +1,37 @@
# Netdata Cloud On-Prem Troubleshooting
Netdata Cloud is a sophisticated piece of software that relies on multiple infrastructure components for its operation.
We assume that your team already properly manages and monitors the components Netdata Cloud depends upon, such as the PostgreSQL, Redis and Elasticsearch databases, the Pulsar and EMQX message brokers, the traffic controllers (Ingress and Traefik), and of course the health of the Kubernetes cluster itself.
The following are questions that are usually asked by Netdata Cloud On-Prem operators.
## Loading charts takes a long time or ends with an error
The charts service is trying to collect data from the agents involved in the query. In most cases, this microservice queries many agents (depending on the room), and all of them have to reply for the query to be satisfied.
One or more of the following may be the cause:
1. **Slow Netdata Agent or Netdata Agents with unreliable connections**
If any of the Netdata agents queried is slow or has an unreliable network connection, the query will stall and Netdata Cloud will time out before responding.
When agents are overloaded or have unreliable connections, we suggest installing more Netdata Parents to provide reliable backends to Netdata Cloud. They will automatically be preferred for all queries, when available.
2. **Poor Kubernetes cluster management**
Another common issue is poor management of the Kubernetes cluster. When a node of a Kubernetes cluster is saturated, or the limits set on its containers are small, Netdata Cloud microservices get throttled by Kubernetes and do not get the resources required to process the responses of the Netdata agents and aggregate the results for the dashboard.
We recommend reviewing the throttling of the containers and increasing the limits if required.
3. **Saturated Database**
Slow responses may also indicate performance issues at the PostgreSQL database.
Please review the resource utilization of the database server (CPU, memory, and disk I/O) and take action to improve the situation.
4. **Messages piling up in Pulsar**
Depending on the size of the infrastructure being monitored and the resources allocated to Pulsar and the microservices, messages may be piling up. When this happens, you may also notice that node status updates (online, offline, stale) are slow, or that alert transitions take time to appear on the dashboard.
We recommend reviewing the Pulsar configuration and the resources allocated to the microservices, to ensure that there is no saturation.

View File

@ -0,0 +1,19 @@
# Netdata Cloud Versions
Netdata Cloud is provided in two versions:
- **SaaS**: we run and maintain Netdata Cloud, and users use it to complement their observability with the additional features it provides.
- **On-Prem**: we provide a licensed copy of the Netdata Cloud software that users can install and run on their premises.
The pricing of both versions is similar, with the On-Prem version introducing a fixed monthly fee for the extra support and packaging required when users run Netdata Cloud by themselves.
For more information check our [Pricing](https://www.netdata.cloud/pricing/) page.
## SaaS Version
[Sign up to Netdata Cloud](https://app.netdata.cloud) and start connecting your Netdata agents. The commands provided once you have signed up include all the information to install and automatically connect (claim) Netdata agents to your Netdata Cloud space.
## On-Prem Version
To deploy Netdata Cloud On-premises, take a look at the [related section](https://github.com/netdata/netdata/blob/master/docs/netdata-cloud/netdata-cloud-on-prem/README.md) on our Documentation.

View File

@ -0,0 +1,19 @@
# Observability Centralization Points
Netdata supports the creation of multiple independent **Observability Centralization Points**, aggregating metric samples, logs and metadata within an infrastructure.
Observability Centralization Points are crucial for ensuring comprehensive monitoring and observability across an infrastructure, particularly under the following conditions:
1. **Ephemeral Systems**: For systems like Kubernetes nodes or ephemeral VMs that may not be persistently available, centralization points ensure that metrics and logs are not lost when these systems go offline. This is essential for maintaining historical data for analysis and troubleshooting.
2. **Resource Constraints**: In scenarios where the monitored systems lack sufficient resources (disk space or I/O bandwidth, CPU, RAM) to handle observability tasks effectively, centralization points offload these responsibilities, ensuring that production systems can operate efficiently without compromise.
3. **Multi-node Dashboards without Netdata Cloud**: For environments requiring aggregated views across multiple nodes but without the use of Netdata Cloud, Netdata Parents can aggregate this data to provide comprehensive dashboards, similar to what Netdata Cloud offers.
4. **Netdata Cloud Access Restrictions**: In cases where monitored systems cannot connect to Netdata Cloud (due to a firewall policy), a Netdata Parent can serve as a bridge, aggregating data and interfacing with Netdata Cloud on behalf of these restricted systems.
When multiple independent centralization points are available:
- Netdata Cloud queries all of them in parallel, to provide a unified infrastructure view.
- Without Netdata Cloud, the dashboards of each of the Netdata Parents provide unified views of the infrastructure aggregated to each of them (metrics and logs).

View File

@ -0,0 +1,39 @@
# Best Practices for Observability Centralization Points
When planning the deployment of Observability Centralization Points, the following factors need consideration:
1. **Volume of Monitored Systems**: The number of systems being monitored dictates the scaling and number of centralization points required. Larger infrastructures may necessitate multiple centralization points to manage the volume of data effectively and maintain performance.
2. **Cost of Data Transfer**: Particularly in multi-cloud or hybrid environments, the location of centralization points can significantly impact egress bandwidth costs. Strategically placing centralization points in each data center or cloud region can minimize these costs by reducing the need for cross-network data transfer.
3. **Usability without Netdata Cloud**: When not using Netdata Cloud, observability with Netdata is simpler when there are fewer centralization points, making it easier to remember where observability is and how to access it.
4. When Netdata Cloud is used, infrastructure level views are provided independently of the centralization points, so it is preferable to centralize as required for security (e.g. internet access), cost control (e.g. egress bandwidth, dedicated resources) and operational efficiency (regions, services or teams isolation).
## Cost Optimization
Netdata has been designed for observability cost optimization. For optimal cost we recommend using Netdata Cloud and multiple independent observability centralization points:
- **Scale out**: add more, smaller centralization points to distribute the load. This strategy provides the least resource consumption per unit of workload, maintaining optimal performance and resource efficiency across your observability infrastructure.
- **Use existing infrastructure resources**: use spare capacities before allocating dedicated resources for observability. This approach minimizes additional costs and promotes an economically sustainable observability framework.
- **Unified or separate centralization for logs and metrics**: Netdata allows centralizing metrics and logs together or separately. Consider factors such as access frequency, data retention policies, and compliance requirements to enhance performance and reduce costs.
- **Decentralized configuration management**: each Netdata centralization point can have its own unique configuration for retention and alerts. This enables 1) finer control on infrastructure costs and 2) localized control for separate services or teams.
## Pros and Cons
Compared to other observability solutions, the design of Netdata offers:
- **Enhanced Scalability and Flexibility**: Netdata's support for multiple independent observability centralization points allows for a more scalable and flexible architecture. This feature is particularly advantageous in distributed and complex environments, enabling tailored observability strategies that can vary by region, service, or team requirements.
- **Resilience and Fault Tolerance**: The ability to deploy multiple centralization points also contributes to greater system resilience and fault tolerance. Replication is a native feature of Netdata centralization points, so in the event of a failure at one centralization point, others can continue to function, ensuring continuous observability.
- **Optimized Cost and Performance**: By distributing the load across multiple centralization points, Netdata can optimize both performance and cost. This distribution allows for the efficient use of resources and helps mitigate the bottlenecks associated with a single centralization point.
- **Simplicity**: Netdata agents (Children and Parents) require minimal configuration and maintenance, usually less than the configuration and maintenance required for the agents and exporters of other monitoring solutions. This provides an observability pipeline that has less moving parts and is easier to manage and maintain.
- **Always On-Prem**: Netdata centralization points are always on-prem. Even when Netdata Cloud is used, Netdata agents and parents are queried to provide the data required for the dashboards.
- **Bottom-Up Observability**: Netdata is designed to monitor systems, containers and applications bottom-up, aiming to provide the maximum resolution, visibility, depth and insights possible. Its ability to segment the infrastructure into multiple independent observability centralization points with customized retention, machine learning and alerts on each of them, while providing unified infrastructure level dashboards at Netdata Cloud, provides a flexible environment that can be tailored per service or team, while still being one unified infrastructure.

View File

@ -0,0 +1,7 @@
# Logs Centralization Points with systemd-journald
Logs centralization points can be built using the `systemd-journald` methodologies, by configuring `systemd-journal-remote` (on the centralization point) and `systemd-journal-upload` (on the production system).
The logs centralization points and the metrics centralization points do not need to be the same. For clarity and simplicity, however, when not otherwise required for operational or regulatory reasons, we recommend having unified centralization points for both metrics and logs.
A Netdata running at the logs centralization point will automatically detect and present the logs of all servers aggregated to it in a unified way (i.e., logs from all servers multiplexed in the same view). This Netdata may or may not be a Netdata Parent for metrics.

View File

@ -0,0 +1,126 @@
# Active journal source without encryption
This page will guide you through creating an active journal source without the use of encryption.
Once you enable an active journal source on a server, `systemd-journal-gatewayd` will expose a REST API on TCP port 19531. This API can be used for querying the logs, exporting the logs, or monitoring new log entries, remotely.
> ⚠️ **IMPORTANT**<br/>
> These instructions will expose your logs to the network, without any encryption or authorization.<br/>
> DO NOT USE THIS ON NON-TRUSTED NETWORKS.
## Configuring an active journal source
On the server whose logs you want to expose, install `systemd-journal-gateway`.
```bash
# change this according to your distro
sudo apt-get install systemd-journal-gateway
```
Optionally, if you want to change the port (the default is `19531`), edit `systemd-journal-gatewayd.socket`
```bash
# edit the socket file
sudo systemctl edit systemd-journal-gatewayd.socket
```
and add the following lines in the instructed place, choosing your desired port; save and exit.
```bash
[Socket]
ListenStream=<DESIRED_PORT>
```
Finally, enable it, so that it will start automatically upon receiving a connection:
```bash
# enable systemd-journal-gatewayd
sudo systemctl daemon-reload
sudo systemctl enable --now systemd-journal-gatewayd.socket
```
## Using the active journal source
### Simple Logs Explorer
`systemd-journal-gateway` provides a simple HTML5 application to browse the logs.
To use it, open your web browser and navigate to:
```
http://server.ip:19531/browse
```
A simple page like this will be presented:
![image](https://github.com/netdata/netdata/assets/2662304/4da88bf8-6398-468b-a359-68db0c9ad419)
### Use it with `curl`
`man systemd-journal-gatewayd` documents the supported API methods and provides examples to query the API using `curl` commands.
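For example, a few of the documented endpoints can be queried like this (a sketch; replace `server.ip` with the address of the server exposing its journal):

```bash
# fetch journal entries as plain text
curl "http://server.ip:19531/entries"

# follow new entries live, in journal export format
curl -H "Accept: application/vnd.fdo.journal" "http://server.ip:19531/entries?follow"

# fetch information about the machine
curl "http://server.ip:19531/machine"
```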
### Copying the logs to a central journals server
`systemd-journal-remote` has the ability to query instances of `systemd-journal-gatewayd` to fetch their logs, so that the central server fetches the logs, instead of waiting for the individual servers to push their logs to it.
However, this kind of log centralization has a key problem: **there is no guarantee that there will be no gaps in the replicated logs**. Theoretically, the REST API of `systemd-journal-gatewayd` supports querying past data, and `systemd-journal-remote` could keep track of the state of replication and automatically continue from the point it stopped last time. But it does not. So, currently the best log centralization option is to use **passive** centralization, where the clients push their logs to the server.
Given these limitations, if you still want to configure an **active** journals centralization, this is what you need to do:
On the centralization server install `systemd-journal-remote`:
```bash
# change this according to your distro
sudo apt-get install systemd-journal-remote
```
Then, copy `systemd-journal-remote.service` to configure it for querying the active source:
```bash
# replace "clientX" with the name of the active client node
sudo cp /lib/systemd/system/systemd-journal-remote.service /etc/systemd/system/systemd-journal-remote-clientX.service
# edit it to make sure the ExecStart line is like this:
# ExecStart=/usr/lib/systemd/systemd-journal-remote --url http://clientX:19531/entries?follow
sudo nano /etc/systemd/system/systemd-journal-remote-clientX.service
# reload systemd
sudo systemctl daemon-reload
```
```bash
# enable systemd-journal-remote
sudo systemctl enable --now systemd-journal-remote-clientX.service
```
You can repeat this process to create as many `systemd-journal-remote` services as the active sources you have.
## Verify it works
To verify the central server is receiving logs, run this on the central server:
```bash
sudo ls -l /var/log/journal/remote/
```
You should see new files from the client's hostname or IP.
Also, the status of any of the new services (`systemctl status systemd-journal-clientX`) should show something like this:
```bash
● systemd-journal-clientX.service - Fetching systemd journal logs from 192.168.2.146
Loaded: loaded (/etc/systemd/system/systemd-journal-clientX.service; enabled; preset: disabled)
Drop-In: /usr/lib/systemd/system/service.d
└─10-timeout-abort.conf
Active: active (running) since Wed 2023-10-18 07:35:52 EEST; 23min ago
Main PID: 77959 (systemd-journal)
Tasks: 2 (limit: 6928)
Memory: 7.7M
CPU: 518ms
CGroup: /system.slice/systemd-journal-clientX.service
├─77959 /usr/lib/systemd/systemd-journal-remote --url "http://192.168.2.146:19531/entries?follow"
└─77962 curl "-HAccept: application/vnd.fdo.journal" --silent --show-error "http://192.168.2.146:19531/entries?follow"
Oct 18 07:35:52 systemd-journal-server systemd[1]: Started systemd-journal-clientX.service - Fetching systemd journal logs from 192.168.2.146.
Oct 18 07:35:52 systemd-journal-server systemd-journal-remote[77959]: Spawning curl http://192.168.2.146:19531/entries?follow...
```

View File

@ -0,0 +1,249 @@
# Passive journal centralization with encryption using self-signed certificates
This page will guide you through creating a **passive** journal centralization setup using **self-signed certificates** for encryption and authorization.
Once you centralize your infrastructure logs to a server, Netdata will automatically detect all the logs from all servers and organize them in sources. With the setup described in this document, on recent systemd versions, Netdata will automatically name all remote sources using the names of the clients, as they are described at their certificates (on older versions, the names will be IPs or reverse DNS lookups of the IPs).
A **passive** journal server waits for clients to push their logs to it, so in this setup we will:
1. configure a certificate authority and issue self-signed certificates for your servers.
2. configure `systemd-journal-remote` on the server, to listen for incoming connections.
3. configure `systemd-journal-upload` on the clients, to push their logs to the server.
Keep in mind that the authorization involved works like this:
1. The server (`systemd-journal-remote`) validates that the client (`systemd-journal-upload`) uses a trusted certificate (a certificate issued by the same certificate authority as its own).
So, **the server will accept logs from any client having a valid certificate**.
2. The client (`systemd-journal-upload`) validates that the receiver (`systemd-journal-remote`) uses a trusted certificate (just like the server does), and it also checks that the hostname or IP in the URL it is configured with matches one of the names or IPs on the server's certificate. So, **the client verifies that it connected to the right server**, by checking the URL hostname against the names and IPs on the server's certificate.
This means that, when both certificates are issued by the same certificate authority, only the client can potentially reject the server.
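To see which names and IPs a client will accept when validating a server, you can inspect the certificate's Subject Alternative Names. A self-contained sketch (the certificate generated here is a throwaway for illustration only; in practice, inspect the server certificate produced by the script described in the next section):

```bash
# Generate a throwaway self-signed certificate with SANs (illustration only;
# requires OpenSSL 1.1.1+ for -addext / -ext):
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key \
  -out /tmp/demo.pem -days 1 -subj "/CN=server1" \
  -addext "subjectAltName=DNS:hostname1,IP:10.0.0.1"

# Print the names/IPs the client will match its configured URL against:
openssl x509 -in /tmp/demo.pem -noout -ext subjectAltName
```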
## Self-signed certificates
To simplify the process of creating and managing self-signed certificates, we have created [this bash script](https://github.com/netdata/netdata/blob/master/src/collectors/systemd-journal.plugin/systemd-journal-self-signed-certs.sh).
It also helps automate the distribution of the certificates to your servers (it generates a new bash script for each of your servers, which includes everything required, including the certificates).
We suggest keeping this script and all the involved certificates on the journal centralization server, in the directory `/etc/ssl/systemd-journal`, so that you can make future changes as required. If you prefer to keep the certificate authority and all the certificates at a more secure location, just use the script at that location.
On the server that will issue the certificates (usually the centralization server), do the following:
```bash
# install systemd-journal-remote to add the users and groups required and openssl for the certs
# change this according to your distro
sudo apt-get install systemd-journal-remote openssl
# download the script and make it executable
curl >systemd-journal-self-signed-certs.sh "https://raw.githubusercontent.com/netdata/netdata/master/src/collectors/systemd-journal.plugin/systemd-journal-self-signed-certs.sh"
chmod 750 systemd-journal-self-signed-certs.sh
```
To create certificates for your servers, run this:
```bash
sudo ./systemd-journal-self-signed-certs.sh "server1" "DNS:hostname1" "IP:10.0.0.1"
```
Where:
- `server1` is the canonical name of the server. On newer systemd versions, this name will be used by `systemd-journal-remote` and Netdata when you view the logs on the dashboard.
- `DNS:hostname1` is a DNS name that the server is reachable at. Add `"DNS:xyz"` multiple times to define multiple DNS names for the server.
- `IP:10.0.0.1` is an IP that the server is reachable at. Add `"IP:xyz"` multiple times to define multiple IPs for the server.
Repeat this process to create the certificates for all your servers. You can add servers as required, at any time in the future.
Existing certificates are never re-generated. Typically, certificates need to be revoked and new ones issued, but the `systemd-journal-remote` tools do not support handling revocations. So, the only way to re-issue a certificate is to delete its files in `/etc/ssl/systemd-journal` and run the script again to create a new one.
Once you run the script for each of your servers, you will find in `/etc/ssl/systemd-journal` shell scripts named `runme-on-XXX.sh`, where `XXX` is the canonical name of each server.
These `runme-on-XXX.sh` include everything to install the certificates, fix their file permissions to be accessible by `systemd-journal-remote` and `systemd-journal-upload`, and update `/etc/systemd/journal-remote.conf` and `/etc/systemd/journal-upload.conf`.
You can copy and paste (or `scp`) these scripts to your server and each of your clients:
```bash
sudo scp /etc/ssl/systemd-journal/runme-on-XXX.sh XXX:/tmp/
```
For the rest of this guide, we assume that you have copied the right `runme-on-XXX.sh` to the `/tmp` directory of all the servers for which you issued certificates.
### Note about certificate file permissions
It is worth noting that `systemd-journal` certificates need to be owned by `systemd-journal-remote:systemd-journal`.
Both the user `systemd-journal-remote` and the group `systemd-journal` are automatically added by the `systemd-journal-remote` package. However, `systemd-journal-upload` (and `systemd-journal-gatewayd`, which is not used in this guide) use dynamic users. Thankfully, these dynamic users are added to the `systemd-journal` group.
So, having the certificates owned by `systemd-journal-remote:systemd-journal` satisfies both `systemd-journal-remote`, which is not in the `systemd-journal` group, and `systemd-journal-upload` (and `systemd-journal-gatewayd`), which use dynamic users.
You don't need to do anything about it (the scripts take care of everything), but it is worth noting how this works.
## Server configuration
On the centralization server install `systemd-journal-remote`:
```bash
# change this according to your distro
sudo apt-get install systemd-journal-remote
```
Make sure the journal transfer protocol is `https`:
```bash
sudo cp /lib/systemd/system/systemd-journal-remote.service /etc/systemd/system/
# edit it to make sure it says:
# --listen-https=-3
# not:
# --listen-http=-3
sudo nano /etc/systemd/system/systemd-journal-remote.service
# reload systemd
sudo systemctl daemon-reload
```
Optionally, if you want to change the port (the default is `19532`), edit `systemd-journal-remote.socket`
```bash
# edit the socket file
sudo systemctl edit systemd-journal-remote.socket
```
and add the following lines in the place indicated, choosing your desired port; save and exit.
```conf
[Socket]
ListenStream=<DESIRED_PORT>
```
Next, run the `runme-on-XXX.sh` script on the server:
```bash
# if you run the certificate authority on the server:
sudo /etc/ssl/systemd-journal/runme-on-XXX.sh
# if you run the certificate authority elsewhere,
# assuming you have copied the runme-on-XXX.sh script (as described above):
sudo bash /tmp/runme-on-XXX.sh
```
This will install the certificates in `/etc/ssl/systemd-journal`, set the right file permissions, and update `/etc/systemd/journal-remote.conf` and `/etc/systemd/journal-upload.conf` to use the right certificate files.
Finally, enable it, so that it will start automatically upon receiving a connection:
```bash
# enable systemd-journal-remote
sudo systemctl enable --now systemd-journal-remote.socket
sudo systemctl enable systemd-journal-remote.service
```
`systemd-journal-remote` is now listening for incoming journals from remote hosts.
> When done, remember to `rm /tmp/runme-on-*.sh` to make sure your certificates are secure.
## Client configuration
On the clients, install `systemd-journal-remote` (it includes `systemd-journal-upload`):
```bash
# change this according to your distro
sudo apt-get install systemd-journal-remote
```
Edit `/etc/systemd/journal-upload.conf` and set the IP address and the port of the server, like so:
```conf
[Upload]
URL=https://centralization.server.ip:19532
```
Make sure that `centralization.server.ip` is one of the `DNS:` or `IP:` parameters you defined when you created the centralization server certificates. If it is not, the client may refuse to connect.
Next, edit `systemd-journal-upload.service`, and add `Restart=always` to make sure the client will keep trying to push logs, even if the server is temporarily not there, like this:
```bash
sudo systemctl edit systemd-journal-upload.service
```
At the top, add:
```conf
[Service]
Restart=always
```
Enable `systemd-journal-upload.service`, like this:
```bash
sudo systemctl enable systemd-journal-upload.service
```
Assuming that you have in `/tmp` the relevant `runme-on-XXX.sh` script for this client, run:
```bash
sudo bash /tmp/runme-on-XXX.sh
```
This will install the certificates in `/etc/ssl/systemd-journal`, set the right file permissions, and update `/etc/systemd/journal-remote.conf` and `/etc/systemd/journal-upload.conf` to use the right certificate files.
Finally, restart `systemd-journal-upload.service`:
```bash
sudo systemctl restart systemd-journal-upload.service
```
The client should now be pushing logs to the central server.
> When done, remember to `rm /tmp/runme-on-*.sh` to make sure your certificates are secure.
Here it is in action, in Netdata:
![2023-10-18 16-23-05](https://github.com/netdata/netdata/assets/2662304/83bec232-4770-455b-8f1c-46b5de5f93a2)
## Verify it works
To verify the central server is receiving logs, run this on the central server:
```bash
sudo ls -l /var/log/journal/remote/
```
Depending on the `systemd` version you use, you should see new files named after the clients' canonical names (as defined in their certificates) or IPs.
Also, `systemctl status systemd-journal-remote` should show something like this:
```bash
systemd-journal-remote.service - Journal Remote Sink Service
Loaded: loaded (/etc/systemd/system/systemd-journal-remote.service; indirect; preset: disabled)
Active: active (running) since Sun 2023-10-15 14:29:46 EEST; 2h 24min ago
TriggeredBy: ● systemd-journal-remote.socket
Docs: man:systemd-journal-remote(8)
man:journal-remote.conf(5)
Main PID: 2118153 (systemd-journal)
Status: "Processing requests..."
Tasks: 1 (limit: 154152)
Memory: 2.2M
CPU: 71ms
CGroup: /system.slice/systemd-journal-remote.service
└─2118153 /usr/lib/systemd/systemd-journal-remote --listen-https=-3 --output=/var/log/journal/remote/
```
Note the `Status: "Processing requests..."` and the PID under `CGroup`.
On the client `systemctl status systemd-journal-upload` should show something like this:
```bash
● systemd-journal-upload.service - Journal Remote Upload Service
Loaded: loaded (/lib/systemd/system/systemd-journal-upload.service; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/systemd-journal-upload.service.d
└─override.conf
Active: active (running) since Sun 2023-10-15 10:39:04 UTC; 3h 17min ago
Docs: man:systemd-journal-upload(8)
Main PID: 4169 (systemd-journal)
Status: "Processing input..."
Tasks: 1 (limit: 13868)
Memory: 3.5M
CPU: 1.081s
CGroup: /system.slice/systemd-journal-upload.service
└─4169 /lib/systemd/systemd-journal-upload --save-state
```
Note the `Status: "Processing input..."` and the PID under `CGroup`.

# Passive journal centralization without encryption
This page will guide you through creating a passive journal centralization setup without the use of encryption.
Once you centralize your infrastructure logs to a server, Netdata will automatically detect all the logs from all servers and organize them in sources.
With the setup described in this document, journal files are identified by the IPs of the clients sending the logs. Netdata will automatically do
reverse DNS lookups to find the names of the servers and name the sources on the dashboard accordingly.
A _passive_ journal server waits for clients to push their logs to it, so in this setup we will:
1. configure `systemd-journal-remote` on the server, to listen for incoming connections.
2. configure `systemd-journal-upload` on the clients, to push their logs to the server.
> ⚠️ **IMPORTANT**<br/>
> These instructions will copy your logs to a central server, without any encryption or authorization.<br/>
> DO NOT USE THIS ON NON-TRUSTED NETWORKS.
## Server configuration
On the centralization server install `systemd-journal-remote`:
```bash
# change this according to your distro
sudo apt-get install systemd-journal-remote
```
Make sure the journal transfer protocol is `http`:
```bash
sudo cp /lib/systemd/system/systemd-journal-remote.service /etc/systemd/system/
# edit it to make sure it says:
# --listen-http=-3
# not:
# --listen-https=-3
sudo nano /etc/systemd/system/systemd-journal-remote.service
# reload systemd
sudo systemctl daemon-reload
```
Optionally, if you want to change the port (the default is `19532`), edit `systemd-journal-remote.socket`
```bash
# edit the socket file
sudo systemctl edit systemd-journal-remote.socket
```
and add the following lines in the place indicated, choosing your desired port; save and exit.
```conf
[Socket]
ListenStream=<DESIRED_PORT>
```
Finally, enable it, so that it will start automatically upon receiving a connection:
```bash
# enable systemd-journal-remote
sudo systemctl enable --now systemd-journal-remote.socket
sudo systemctl enable systemd-journal-remote.service
```
`systemd-journal-remote` is now listening for incoming journals from remote hosts.
## Client configuration
On the clients, install `systemd-journal-remote` (it includes `systemd-journal-upload`):
```bash
# change this according to your distro
sudo apt-get install systemd-journal-remote
```
Edit `/etc/systemd/journal-upload.conf` and set the IP address and the port of the server, like so:
```conf
[Upload]
URL=http://centralization.server.ip:19532
```
Edit `systemd-journal-upload`, and add `Restart=always` to make sure the client will keep trying to push logs, even if the server is temporarily not there, like this:
```bash
sudo systemctl edit systemd-journal-upload
```
At the top, add:
```conf
[Service]
Restart=always
```
Enable and start `systemd-journal-upload`, like this:
```bash
sudo systemctl enable systemd-journal-upload
sudo systemctl start systemd-journal-upload
```
## Verify it works
To verify the central server is receiving logs, run this on the central server:
```bash
sudo ls -l /var/log/journal/remote/
```
You should see new files from the client's IP.
Also, `systemctl status systemd-journal-remote` should show something like this:
```bash
systemd-journal-remote.service - Journal Remote Sink Service
Loaded: loaded (/etc/systemd/system/systemd-journal-remote.service; indirect; preset: disabled)
Active: active (running) since Sun 2023-10-15 14:29:46 EEST; 2h 24min ago
TriggeredBy: ● systemd-journal-remote.socket
Docs: man:systemd-journal-remote(8)
man:journal-remote.conf(5)
Main PID: 2118153 (systemd-journal)
Status: "Processing requests..."
Tasks: 1 (limit: 154152)
Memory: 2.2M
CPU: 71ms
CGroup: /system.slice/systemd-journal-remote.service
└─2118153 /usr/lib/systemd/systemd-journal-remote --listen-http=-3 --output=/var/log/journal/remote/
```
Note the `Status: "Processing requests..."` and the PID under `CGroup`.
On the client `systemctl status systemd-journal-upload` should show something like this:
```bash
● systemd-journal-upload.service - Journal Remote Upload Service
Loaded: loaded (/lib/systemd/system/systemd-journal-upload.service; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/systemd-journal-upload.service.d
└─override.conf
Active: active (running) since Sun 2023-10-15 10:39:04 UTC; 3h 17min ago
Docs: man:systemd-journal-upload(8)
Main PID: 4169 (systemd-journal)
Status: "Processing input..."
Tasks: 1 (limit: 13868)
Memory: 3.5M
CPU: 1.081s
CGroup: /system.slice/systemd-journal-upload.service
└─4169 /lib/systemd/systemd-journal-upload --save-state
```
Note the `Status: "Processing input..."` and the PID under `CGroup`.

# Metrics Centralization Points (Netdata Parents)
```mermaid
flowchart BT
C1["Netdata Child 1"]
C2["Netdata Child 2"]
C3["Netdata Child N"]
P1["Netdata Parent 1"]
C1 -->|stream| P1
C2 -->|stream| P1
C3 -->|stream| P1
```
Netdata **Streaming and Replication** copies the recent past samples (replication) and in real-time all new samples collected (streaming) from production systems (Netdata Children) to metrics centralization points (Netdata Parents). The Netdata Parents then maintain the database for these metrics, according to their retention settings.
Each production system (Netdata Child) can stream to **only one** Netdata Parent at a time. You can configure multiple Netdata Parents for high availability, but only the first one found working will be used.
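As a hedged sketch (the IPs are hypothetical), a Child lists its Parents space-separated, in order of preference, in the `[stream]` section of `stream.conf`:

```ini
[stream]
    enabled = yes
    # the first working destination is used; the rest are failover candidates
    destination = 10.0.0.10:19999 10.0.0.11:19999
    api key = API_KEY
```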
Netdata Parents receive metric samples **from multiple** production systems (Netdata Children) and have the option to re-stream them to another Netdata Parent. This allows building an infinite hierarchy of Netdata Parents. It also enables the configuration of Netdata Parents Clusters, for high availability.
| Feature | Netdata Child (production system) | Netdata Parent (centralization point) |
|:---------------------------:|:--------------------------------------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------:|
| Metrics Retention | Can be minimized, or switched to mode `ram` or `alloc` to save resources. Some retention is required in case network errors introduce disconnects. | Common retention settings for all systems aggregated to it. |
| Machine Learning | Can be disabled (enabled by default). | Runs Anomaly Detection for all systems aggregated to it. |
| Alerts & Notifications | Can be disabled (enabled by default). | Runs health checks and sends notifications for all systems aggregated to it. |
| API and Dashboard | Can be disabled (enabled by default). | Serves the dashboard for all systems aggregated to it, using its own retention. |
| Exporting Metrics | Not required (enabled by default). | Exports the samples of all metrics collected by the systems aggregated to it. |
| Netdata Functions | Netdata Child must be online. | Forwards Functions requests to the Children connected to it. |
| Connection to Netdata Cloud | Not required. | Each Netdata Parent registers to Netdata Cloud all systems aggregated to it. |
## Supported Configurations
For Netdata Children:
1. **Full**: Full Netdata functionality is available at the Children. This means running machine learning, alerts, notifications, having the local dashboard available, and generally all Netdata features enabled. This is the default.
2. **Thin**: The Children are only collecting and forwarding metrics to a Parent. Some local retention may exist to avoid missing samples in case of network issues or Parent maintenance, but everything else is disabled.
For Netdata Parents:
1. **Standalone**: The Parent is standalone, either the only Parent available in the infrastructure, or the top-most of a hierarchy of Parents.
2. **Cluster**: The Parent is part of a cluster of Parents, all having the same data, from the same Children. A Cluster of Parents offers high-availability.
3. **Proxy**: The Parent receives metrics and stores them locally, but it also forwards them to a Grand Parent.
A Cluster is configured as a number of circular **Proxies**, i.e. each of the nodes in a cluster has all the others configured as its Parents. So, if multiple levels of metrics centralization points (Netdata Parents) are required, only the top-most level can be a cluster.
## Best Practices
Refer to [Best Practices for Observability Centralization Points](https://github.com/netdata/netdata/blob/master/docs/observability-centralization-points/best-practices.md).

# Clustering and High Availability of Netdata Parents
```mermaid
flowchart BT
C1["Netdata Child 1"]
C2["Netdata Child 2"]
C3["Netdata Child N"]
P1["Netdata Parent 1"]
P2["Netdata Parent 2"]
C1 & C2 & C3 -->|stream| P1
P1 -->|stream| P2
C1 & C2 & C3 .->|failover| P2
P2 .->|failover| P1
```
Netdata supports building Parent clusters of 2+ nodes. Clustering and high availability works like this:
1. All Netdata Children are configured to stream to all Netdata Parents. The first one found working will be used by each Netdata Child and the others will be automatically used if and when this connection is interrupted.
2. The Netdata Parents are configured to stream to all other Netdata Parents. For each of them, the first found working will be used and the others will be automatically used if and when this connection is interrupted.
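For illustration, a two-Parent cluster could be wired like this (the IPs and the API key are placeholders; each Parent streams to the other, and both accept the same API key from Children and from each other):

```ini
# stream.conf on Parent 1 (10.0.0.10); Parent 2 (10.0.0.11) mirrors it,
# with destination = 10.0.0.10:19999
[stream]
    enabled = yes
    destination = 10.0.0.11:19999
    api key = API_KEY

[API_KEY]
    enabled = yes
```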
All the Netdata Parents in such a cluster will receive all the metrics of all Netdata Children connected to any of them. They will also receive the metrics all the other Netdata Parents have.
In case there is a failure on any of the Netdata Parents, the Netdata Children connected to it will automatically failover to another available Netdata Parent, which now will attempt to re-stream all the metrics it receives to the other available Netdata Parents.
Netdata Cloud will receive registrations for all Netdata Children from all the Netdata Parents. As long as at least one of the Netdata Parents is connected to Netdata Cloud, all the Netdata Children will be available on Netdata Cloud.
Netdata Children need to maintain retention only for the time required to switch Netdata Parents. When Netdata Children connect to a Netdata Parent, they negotiate the available retention, and any data missing on the Netdata Parent is replicated from the Netdata Children.
## Restoring a Netdata Parent after maintenance
Given the [replication limitations](https://github.com/netdata/netdata/blob/master/docs/observability-centralization-points/metrics-centralization-points/replication-of-past-samples.md#replication-limitations), special care is needed when restoring a Netdata Parent after some long maintenance work on it.
If the Netdata Children do not have enough retention to replicate the missing data on this Netdata Parent, it is preferable to block access to this Netdata Parent from the Netdata Children, until it replicates the missing data from the other Netdata Parents.
To block access from Netdata Children, and still allow access from other Netdata Parent siblings:
1. Use `iptables` to block access to port 19999 from Netdata Children to the restored Netdata Parent, or
2. Use separate streaming API keys (in `stream.conf`) for Netdata Children and Netdata Parents, and disable the API key used by Netdata Children, until the restored Netdata Parent has been synchronized.
## Duplicating a Parent
The easiest way is to `rsync` the directory `/var/cache/netdata` from the existing Netdata Parent to the new Netdata Parent.
> Important: Starting the new Netdata Parent with default settings may delete the copied files in `/var/cache/netdata` to apply the default disk size constraints. Therefore, it is important to set the right retention settings on the new Netdata Parent before starting it up with the copied files.
To configure retention at the new Netdata Parent, set in `netdata.conf` the following to at least the values the old Netdata Parent has:
- `[db].dbengine multihost disk space MB`, this is the max disk size for `tier0`. The default is 256MiB.
- `[db].dbengine tier 1 multihost disk space MB`, this is the max disk space for `tier1`. The default is 50% of `tier0`.
- `[db].dbengine tier 2 multihost disk space MB`, this is the max disk space for `tier2`. The default is 50% of `tier1`.
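For example, a hedged sketch (the sizes are placeholders; set them to at least the old Parent's values):

```ini
[db]
    dbengine multihost disk space MB = 4096
    dbengine tier 1 multihost disk space MB = 2048
    dbengine tier 2 multihost disk space MB = 1024
```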

# Configuring Metrics Centralization Points
Metrics streaming configuration for both Netdata Children and Parents is done via `stream.conf`.
`netdata.conf` and `stream.conf` have the same `ini` format, but `netdata.conf` is considered a non-sensitive file, while `stream.conf` contains API keys, IPs, and other sensitive information that enables communication between Netdata Agents.
`stream.conf` has 2 main sections:
- The `[stream]` section includes options for the **sending Netdata** (i.e. Netdata Children, or Netdata Parents that stream to Grand Parents, or to other sibling Netdata Parents in a cluster).
- The rest of the file includes multiple sections that define API keys for the **receiving Netdata** (i.e. Netdata Parents).
## Edit `stream.conf`
To edit `stream.conf`, run this on your terminal:
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config stream.conf
```
Your editor will open, showing the default, commented-out `stream.conf` options.
## Configuring a Netdata Parent
To enable the reception of metrics from Netdata Children, generate a random API key with this command:
```bash
uuidgen
```
Then, copy the generated UUID, [edit `stream.conf`](#edit-streamconf), find the section that looks like the following, and replace `API_KEY` with the UUID you generated:
```ini
[API_KEY]
# Accept metrics streaming from other Agents with the specified API key
enabled = yes
```
Save the file and restart Netdata.
## Configuring Netdata Children
To enable streaming metrics to a Netdata Parent, [edit `stream.conf`](#edit-streamconf), and in the `[stream]` section at the top, set:
```ini
[stream]
# Stream metrics to another Netdata
enabled = yes
# The IP and PORT of the parent
destination = PARENT_IP_ADDRESS:19999
# The shared API key, generated by uuidgen
api key = API_KEY
```
Save the file and restart Netdata.
## Enable TLS/SSL Communication
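A hedged sketch (the paths are assumptions; consult the Netdata streaming documentation for your version): enable TLS on the Parent's web server in `netdata.conf`, and append `:SSL` to the destination on the Children in `stream.conf`:

```ini
# On the Parent, in netdata.conf:
[web]
    ssl key = /etc/netdata/ssl/key.pem
    ssl certificate = /etc/netdata/ssl/cert.pem

# On the Children, in stream.conf:
[stream]
    destination = PARENT_IP_ADDRESS:19999:SSL
```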
## Troubleshooting Streaming Connections
You can find any issues related to streaming in the Netdata logs.
### From the UI
Netdata logs to systemd-journald by default, and its logs are available in the `Logs` tab of the UI. In the `MESSAGE_ID` field, look for `Netdata connection from child` and `Netdata connection to parent`.
### From the terminal
On the Parents:
```bash
journalctl -r --namespace=netdata MESSAGE_ID=ed4cdb8f1beb4ad3b57cb3cae2d162fa
```
On the Children:
```bash
journalctl -r --namespace=netdata MESSAGE_ID=6e2e3839067648968b646045dbf28d66
```

# FAQ on Metrics Centralization Points
## How much can a Netdata Parent node scale?
Netdata Parents generally scale well. According [to our tests](https://blog.netdata.cloud/netdata-vs-prometheus-performance-analysis/) Netdata Parents scale better than Prometheus for the same workload: -35% CPU utilization, -49% Memory Consumption, -12% Network Bandwidth, -98% Disk I/O, -75% Disk footprint.
For more information, check [Sizing Netdata Parents](https://github.com/netdata/netdata/blob/master/docs/observability-centralization-points/metrics-centralization-points/sizing-netdata-parents.md).
## If I set up a parents cluster, will I be able to have more Child nodes stream to them?
No. When you set up an active-active cluster, even if child nodes connect randomly to one or the other, all the parent nodes receive all the metrics of all the child nodes. So, all of them do all the work.
## How much retention do the child nodes need?
Child nodes need to have only the retention required in order to connect to another Parent if one fails or stops for maintenance.
- If you have a cluster of parents, 5 to 10 minutes in `alloc` mode is usually enough.
- If you have only 1 parent, it would be better to run the child nodes with `dbengine` so that they will have enough retention to back-fill the parent node if it stops for maintenance.
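A rough sketch of such a child's `netdata.conf` (the option names and units are assumptions and can vary between Netdata versions; verify against your own configuration):

```ini
[db]
    # keep metrics in memory only, with just enough retention to fail over
    mode = alloc
    retention = 1200
```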
## Does streaming between child nodes and parents support encryption?
Yes. You can configure your parent nodes to enable TLS at their web server and configure the child nodes to connect with TLS to it. The streaming connection is also compressed, on top of TLS.
## Can I have an HTTP proxy between parent and child nodes?
No. The streaming protocol works on the same port as the internal web server of Netdata Agents, but the protocol is not HTTP-friendly and cannot be understood by HTTP proxy servers.
## Should I load balance multiple parents with a TCP load balancer?
Although this can be done, and it could work for streaming between child and parent nodes, we recommend against it, as it can lead to several kinds of problems.
It is better to configure all the parent nodes directly in the child nodes' `stream.conf`. The child nodes will do everything in their power to find a parent node to connect to, and they will never give up.
## When I have multiple parents for the same children, will I receive alert notifications from all of them?
If all parents are configured to run health checks and trigger alerts, yes.
We recommend using Netdata Cloud to avoid receiving duplicate alert notifications. Netdata Cloud deduplicates alert notifications so that you will receive them only once.
## When I have only Parents connected to Netdata Cloud, will I be able to use the Functions feature on my child nodes?
Yes. Function requests will be received by the Parents and forwarded to the Child via their streaming connection. Function requests are propagated between parents, so this will work even if multiple levels of Netdata Parents are involved.
## If I have a cluster of parents and take one out for maintenance for a few hours, will it have missing data when it comes back online?
Check [Restoring a Netdata Parent after maintenance](https://github.com/netdata/netdata/blob/master/docs/observability-centralization-points/metrics-centralization-points/clustering-and-high-availability-of-netdata-parents.md).
## I have a cluster of parents. Which one is used by Netdata Cloud?
When there are multiple data sources for the same node, Netdata Cloud follows this strategy:
1. Netdata Cloud prefers Netdata agents having `live` data.
2. For time-series queries, when multiple Netdata agents have the retention required to answer the query, Netdata Cloud prefers the one that is further away from production systems.
3. For Functions, Netdata Cloud prefers Netdata agents that are closer to the production systems.
## Is there a way to balance child nodes to the parent nodes of a cluster?
Yes. When configuring the Parents in the Children's `stream.conf`, configure them in a different order on each Child. Children connect to the first Parent they find available, so if the order differs across Children, their connections will spread across the available Parents.
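A hypothetical sketch with two Parents (the IPs are placeholders): give half of the Children one order and the other half the reverse:

```ini
# stream.conf on the first half of the Children
[stream]
    destination = 10.0.0.10:19999 10.0.0.11:19999

# stream.conf on the other half
[stream]
    destination = 10.0.0.11:19999 10.0.0.10:19999
```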
## Is there a way to get notified when a child gets disconnected?
It depends on the ephemerality setting of each Netdata Child.
1. **Permanent nodes**: These are nodes that should be available permanently and if they disconnect an alert should be triggered to notify you. By default, all nodes are considered permanent (not ephemeral).
2. **Ephemeral nodes**: These are nodes that are ephemeral by nature and may shut down at any point in time without any impact on the services you run.
To set the ephemeral flag on a node, edit its `netdata.conf` and, in the `[health]` section, set `is ephemeral = yes`. This setting is propagated to parent nodes and Netdata Cloud.
When using Netdata Cloud (via a parent or directly) and a permanent node gets disconnected, Netdata Cloud sends node disconnection notifications.

# Replication of Past Samples
Replication is triggered when a Netdata Child connects to a Netdata Parent. It replicates the latest samples of collected metrics a Netdata Parent may be missing. The goal of replication is to back-fill samples that were collected between disconnects and reconnects, so that the Netdata Parent does not have gaps on the charts for the time Netdata Children were disconnected.
The same replication mechanism is used between Netdata Parents (the sending Netdata is treated as a Child and the receiving Netdata as a Parent).
## Replication Limitations
The current implementation is optimized to replicate small durations and have minimal impact during reconnects. As a result it has the following limitations:
1. Replication can only append samples to metrics. Only missing samples at the end of each time-series are replicated.
2. Only `tier0` samples are replicated. Samples of higher tiers in Netdata are derived from `tier0` samples, and therefore there is no mechanism for ingesting them directly. This means that the maximum retention that can be replicated across Netdata is limited by the samples available in `tier0` of the sending Netdata.
3. Only samples of metrics that are currently being collected are replicated. Archived metrics (or even archived nodes) will be replicated when and if they are collected again. Netdata archives metrics 1 hour after they stop being collected, so Netdata Parents may miss data only if Netdata Children are disconnected for more than an hour from their Parents.
When multiple Netdata Parents are available, the replication happens in sequence, like in the following diagram.
```mermaid
sequenceDiagram
Child-->>Parent1: Connect
Parent1-->>Child: OK
Parent1-->>Parent2: Connect
Parent2-->>Parent1: OK
Child-->>Parent1: Metric M1 with retention up to Now
Parent1-->>Child: M1 stopped at -60sec, replicate up to Now
Child-->>Parent1: replicate M1 samples -60sec to Now
Child-->>Parent1: streaming M1
Parent1-->>Parent2: Metric M1 with retention up to Now
Parent2-->>Parent1: M1 stopped at -63sec, replicate up to Now
Parent1-->>Parent2: replicate M1 samples -63sec to Now
Parent1-->>Parent2: streaming M1
```
As shown in the diagram:
1. All connections are established immediately after a Netdata Child connects to any of the Netdata Parents.
2. Each pair of connections (Child->Parent1, Parent1->Parent2) complete replication on the receiving side and then initiate replication on the sending side.
3. Replication pushes data up to Now, and the sending side immediately enters streaming mode, without leaving any gaps on the samples of the receiving side.
4. On every pair of connections, replication negotiates the retention of the receiving party to back-fill as much data as necessary.
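The negotiation above can be illustrated with a toy sketch (an illustration of the idea only, not Netdata's actual wire protocol): the receiving side announces the timestamp of its last sample, and the sending side back-fills everything newer before switching to streaming.

```python
def backfill_gap(child_samples, parent_last_ts):
    """Return the samples the receiving side is missing, oldest first.

    child_samples: (timestamp, value) pairs held by the sender (its tier0 retention).
    parent_last_ts: timestamp of the last sample the receiver already has.
    """
    return [(ts, v) for ts, v in sorted(child_samples) if ts > parent_last_ts]

# The child holds samples up to "Now"; the parent stopped receiving at t=120.
child = [(100, 1.0), (110, 1.1), (120, 1.2), (130, 1.3), (140, 1.4)]
missing = backfill_gap(child, parent_last_ts=120)
# Only the gap (samples at t=130 and t=140) is replicated, then streaming resumes.
```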
## Configuration options for Replication
The following `netdata.conf` configuration parameters affect replication.
On the receiving side (Netdata Parent):
- `[db].seconds to replicate` limits the maximum time to be replicated. The default is 1 day (86400 seconds). Keep in mind that replication is also limited by the `tier0` retention the sending side has.
On the sending side (Netdata Children, or Netdata Parent when parents are clustered):
- `[db].replication threads` controls how many concurrent threads will be replicating metrics. The default is 1. Usually the performance is about 2 million samples per second per thread, so increasing this number may allow replication to progress faster between Netdata Parents.
- `[db].cleanup obsolete charts after secs` controls how long after metrics stop being collected they remain available for replication. The default is 1 hour (3600 seconds). If you plan scheduled maintenance on Netdata Parents lasting more than 1 hour, we recommend increasing this setting. Keep in mind, however, that increasing this duration in highly ephemeral environments can have an impact on RAM utilization, since metrics will be considered as collected for longer durations.
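Putting the options above together, a `netdata.conf` fragment with the default replication values would look like this:

```
[db]
    seconds to replicate = 86400
    replication threads = 1
    cleanup obsolete charts after secs = 3600
```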
## Monitoring Replication Progress
Inbound and outbound replication progress is reported on the dashboard using the Netdata Function `Streaming`, under the `Top` tab.
The same information is exposed via the API endpoint `http://agent-ip:19999/api/v2/node_instances` of both Netdata Parents and Children.

View File

@ -0,0 +1,3 @@
# Sizing Netdata Parents
To estimate CPU, RAM, and disk requirements for your Netdata Parents, check [sizing Netdata agents](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/sizing-netdata-agents/README.md).

View File

@ -86,7 +86,7 @@ RestartSec=5s
WantedBy=multi-user.target
```
You can edit the configuration file using the `edit-config` script from the Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
You can edit the configuration file using the `edit-config` script from the Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration.md#the-netdata-config-directory).
- Edit `netdata.conf` and input:

View File

@ -0,0 +1,230 @@
# Security and Privacy Design
This document serves as the relevant Annex to the [Terms of Service](https://www.netdata.cloud/service-terms/),
the [Privacy Policy](https://www.netdata.cloud/privacy/) and
the Data Processing Addendum, when applicable. It provides more information regarding Netdata's technical and
organizational security and privacy measures.
We have given special attention to all aspects of Netdata, ensuring that everything throughout its operation is as
secure as possible. Netdata has been designed with security in mind.
## Netdata's Security Principles
### Security by Design
Netdata, an open-source software widely installed across the globe, prioritizes security by design, showcasing our
commitment to safeguarding user data. The entire structure and internal architecture of the software is built to ensure
maximum security. We aim to provide a secure environment from the ground up, rather than as an afterthought.
### Compliance with Open Source Security Foundation Best Practices
Netdata is committed to adhering to the best practices laid out by the Open Source Security Foundation (OSSF).
Currently, the Netdata Agent follows the OSSF best practices at the passing level. Feel free to audit our approach to
the [OSSF guidelines](https://bestpractices.coreinfrastructure.org/en/projects/2231).
Netdata Cloud features comprehensive end-to-end automated testing, encompassing the UI, back-end, and agents, where
involved. In addition, the Netdata Agent uses an array of third-party services for static code analysis, static code
security analysis, and CI/CD integrations to ensure code quality on a per-pull-request basis. Tools like GitHub's
CodeQL, GitHub's Dependabot, our own unit tests, various types of linters,
and [Coverity](https://scan.coverity.com/projects/netdata-netdata?tab=overview) are utilized to this end.
Moreover, each PR requires two code reviews from our senior engineers before being merged. We also maintain two
high-performance environments (a production-like Kubernetes cluster and a highly demanding stress lab) for
stress-testing our entire solution. This robust pipeline ensures the delivery of high-quality software consistently.
### Regular Third-Party Testing and Isolation
While Netdata doesn't have a dedicated internal security team, the open-source Netdata Agent undergoes regular testing
by third parties. Any security reports received are addressed immediately. In contrast, Netdata Cloud operates in a
fully automated and isolated environment with Infrastructure as Code (IaC), ensuring no direct access to production
applications. Monitoring and reporting are also fully automated.
### Security Vulnerability Response
Netdata has a transparent and structured process for handling security vulnerabilities. We appreciate and value the
contributions of security researchers and users who report vulnerabilities to us. All reports are thoroughly
investigated, and any identified vulnerabilities trigger a Security Release Process.
We aim to fully disclose any bugs as soon as a user mitigation is available, typically within a week of the report. In
case of security fixes, we promptly release a new version of the software. Users can subscribe to our releases on GitHub
to stay updated about all security incidents. More details about our vulnerability response process can be
found [here](https://github.com/netdata/netdata/security/policy).
### Adherence to Open Source Security Foundation Best Practices
In line with our commitment to security, we uphold the best practices as outlined by the Open Source Security
Foundation. This commitment reflects in every aspect of our operations, from the design phase to the release process,
ensuring the delivery of a secure and reliable product to our users. For more information
check [here](https://bestpractices.coreinfrastructure.org/en/projects/2231).
## Compliance with Regulations
Netdata is committed to ensuring the security, privacy, and integrity of user data. It complies with both the General
Data Protection Regulation (GDPR), a regulation in EU law on data protection and privacy, and the California Consumer
Privacy Act (CCPA), a state statute intended to enhance privacy rights and consumer protection for residents of
California.
### Compliance with GDPR and CCPA
Compliance with GDPR and CCPA are self-assessment processes, and Netdata has undertaken thorough internal audits and
controls to ensure it meets all requirements.
On a per-request basis, any customer may enter into a data processing addendum (DPA) with Netdata, governing the
customer's ability to load and permit Netdata to process any personal data or information regulated under applicable
data protection laws, including the GDPR and CCPA.
### Data Transfers
While Netdata Agent itself does not engage in any cross-border data transfers, certain personal and infrastructure data
is transferred to Netdata Cloud for the purpose of providing its services. The metric data collected and processed by
Netdata Agents, however, stays strictly within the user's infrastructure, eliminating any concerns about cross-border
data transfer issues.
When users utilize Netdata Cloud, the metric data is streamed directly from the Netdata Agent to the user's web browser
via Netdata Cloud, without being stored on Netdata Cloud's servers. However, user identification data (such as email
addresses) and infrastructure metadata necessary for Netdata Cloud's operation are stored in data centers in the United
States, using compliant infrastructure providers such as Google Cloud and Amazon Web Services. These transfers and
storage are carried out in full compliance with applicable data protection laws, including GDPR and CCPA.
### Privacy Rights
Netdata ensures user privacy rights as mandated by the GDPR and CCPA. This includes the right to access, correct, and
delete personal data. These functions are all available online via the Netdata Cloud User Interface (UI). In case a user
wants to remove all personal information (email and activities), they can delete their cloud account by logging
into <https://app.netdata.cloud> and accessing their profile, at the bottom left of the screen.
### Regular Review and Updates
Netdata is dedicated to keeping its practices up-to-date with the latest developments in data protection regulations.
Therefore, as soon as updates or changes are made to these regulations, Netdata reviews and updates its policies and
practices accordingly to ensure continual compliance.
While Netdata is confident in its compliance with GDPR and CCPA, users are encouraged to review Netdata's privacy policy
and reach out with any questions or concerns they may have about data protection and privacy.
## Anonymous Statistics
The anonymous statistics collected by the Netdata Agent are related to the installations and not to individual users.
This data includes community size, types of plugins used, possible crashes, operating systems installed, and the use of
the registry feature. No IP addresses are collected, but each Netdata installation has a unique ID.
Netdata also collects anonymous telemetry events, which provide information on the usage of various features, errors,
and performance metrics. This data is used to understand how the software is being used and to identify areas for
improvement.
The purpose of collecting these statistics and telemetry data is to guide the development of the open-source agent,
focusing on areas that are most beneficial to users.
Users have the option to opt out of this data collection during the installation of the agent, or at any time by
removing a specific file from their system.
Netdata retains this data indefinitely in order to track changes and trends within the community over time.
Netdata does not share these anonymous statistics or telemetry data with any third parties.
By collecting this data, Netdata is able to continuously improve its service and identify any issues or areas for
improvement, while respecting user privacy and maintaining transparency.
## Internal Security Measures
Internal Security Measures at Netdata are designed with an emphasis on data privacy and protection. The measures
include:
1. **Infrastructure as Code (IaC)** :
Netdata Cloud follows the IaC model, which means it is a microservices environment that is completely isolated. All
changes are managed through Terraform, an open-source IaC software tool that provides a consistent CLI workflow for
managing cloud services.
2. **TLS Termination and IAM Service** :
At the edge of Netdata Cloud, there is a TLS termination, which provides the decryption point for incoming TLS
connections. Additionally, an Identity Access Management (IAM) service validates JWT tokens included in request
cookies or denies access to them.
3. **Session Identification** :
Once inside the microservices environment, all requests are associated with session IDs that identify the user making
the request. This approach provides additional layers of security and traceability.
4. **Data Storage** :
Data is stored in various NoSQL and SQL databases and message brokers. The entire environment is fully isolated,
providing a secure space for data management.
5. **Authentication** :
Netdata Cloud does not store credentials. It offers three types of authentication: GitHub Single Sign-On (SSO),
Google SSO, and email validation.
6. **DDoS Protection** :
Netdata Cloud has multiple protection mechanisms against Distributed Denial of Service (DDoS) attacks, including
rate-limiting and automated blacklisting.
7. **Security-Focused Development Process** :
To ensure a secure environment, Netdata employs a security-focused development process. This includes the use of
static code analyzers to identify potential security vulnerabilities in the codebase.
8. **High Security Standards** :
Netdata Cloud maintains high security standards and can provide additional customization on a per contract basis.
9. **Employee Security Practices** :
Netdata ensures its employees follow security best practices, including role-based access, periodic access review,
and multi-factor authentication. This helps to minimize the risk of unauthorized access to sensitive data.
10. **Experienced Developers** :
Netdata hires senior developers with vast experience in security-related matters. It enforces two code reviews for
every Pull Request (PR), ensuring that any potential issues are identified and addressed promptly.
11. **DevOps Methodologies** :
Netdata's DevOps methodologies use the highest standards in access control in all places, utilizing the best
practices available.
12. **Risk-Based Security Program** :
Netdata has a risk-based security program that continually assesses and mitigates risks associated with data
security. This program helps maintain a secure environment for user data.
These security measures ensure that Netdata Cloud is a secure environment for users to monitor and troubleshoot their
systems. The company remains committed to continuously improving its security practices to safeguard user data
effectively.
## PCI DSS
PCI DSS (Payment Card Industry Data Security Standard) is a set of security standards designed to ensure that all
companies that accept, process, store or transmit credit card information maintain a secure environment.
Netdata is committed to providing secure and privacy-respecting services, and it aligns its practices with many of the
key principles of the PCI DSS. However, it's important to clarify that Netdata is not officially certified as PCI
DSS-compliant. While Netdata follows practices that align with PCI DSS's key principles, the company itself has not
undergone the formal certification process for PCI DSS compliance.
PCI DSS compliance is not just about the technical controls but also involves a range of administrative and procedural
safeguards that go beyond the scope of Netdata's services. These include, among other things, maintaining a secure
network, implementing strong access control measures, regularly monitoring and testing networks, and maintaining an
information security policy.
Therefore, while Netdata can support entities with their data security needs in relation to PCI DSS, it is ultimately
the responsibility of the entity to ensure full PCI DSS compliance across all of their operations. Entities should
always consult with a legal expert or a PCI DSS compliance consultant to ensure that their use of any product, including
Netdata, aligns with PCI DSS regulations.
## HIPAA
HIPAA stands for the Health Insurance Portability and Accountability Act, which is a United States federal law enacted
in 1996. HIPAA is primarily focused on protecting the privacy and security of individuals' health information.
Netdata is committed to providing secure and privacy-respecting services, and it aligns its practices with many key
principles of HIPAA. However, it's important to clarify that Netdata is not officially certified as HIPAA-compliant.
While Netdata follows practices that align with HIPAA's key principles, the company itself has not undergone the formal
certification process for HIPAA compliance.
HIPAA compliance is not just about technical controls but also involves a range of administrative and procedural
safeguards that go beyond the scope of Netdata's services. These include, among other things, employee training,
physical security, and contingency planning.
Therefore, while Netdata can support HIPAA-regulated entities with their data security needs and is prepared to sign a
Business Associate Agreement (BAA), it is ultimately the responsibility of the healthcare entity to ensure full HIPAA
compliance across all of their operations. Entities should always consult with a legal expert or a HIPAA compliance
consultant to ensure that their use of any product, including Netdata, aligns with HIPAA regulations.
## Conclusion
In conclusion, Netdata Cloud's commitment to data security and user privacy is paramount. From the careful design of the
infrastructure and stringent internal security measures to compliance with international regulations and standards like
GDPR and CCPA, Netdata Cloud ensures a secure environment for users to monitor and troubleshoot their systems.
The use of advanced encryption techniques, role-based access control, and robust authentication methods further
strengthen the security of user data. Netdata Cloud also maintains transparency in its data handling practices, giving
users control over their data and the ability to easily access, retrieve, correct, and delete their personal data.
Netdata's approach to anonymous statistics collection respects user privacy while enabling the company to improve its
product based on real-world usage data. Even in such cases, users have the choice to opt-out, underlining Netdata's
respect for user autonomy.
In summary, Netdata Cloud offers a highly secure, user-centric environment for system monitoring and troubleshooting.
The company's emphasis on continuous security improvement and commitment to user privacy make it a trusted choice in the
data monitoring landscape.

View File

@ -0,0 +1,71 @@
# Netdata Agent Security and Privacy Design
## Security by Design
Netdata Agent is designed with a security-first approach. Its structure ensures data safety by only exposing chart
metadata and metric values, not the raw data collected. This design principle allows Netdata to be used in environments
requiring the highest level of data isolation, such as PCI Level 1. Even though Netdata plugins connect to a user's
database server or read application log files to collect raw data, only the processed metrics are stored in Netdata
databases, sent to upstream Netdata servers, or archived to external time-series databases.
## User Data Protection
The Netdata Agent is programmed to safeguard user data. When collecting data, the raw data does not leave the host. All
plugins, even those running with escalated capabilities or privileges, perform a hard-coded data collection job. They do
not accept commands from Netdata, and the original application data collected do not leave the process they are
collected in, are not saved, and are not transferred to the Netdata daemon. For the “Functions” feature, the data
collection plugins offer Functions, and the user interface merely calls them back as defined by the data collector. The
Netdata Agent main process does not require any escalated capabilities or privileges from the operating system, and
neither do most of the data collecting plugins.
## Communication and Data Encryption
Data collection plugins communicate with the main Netdata process via ephemeral, in-memory, pipes that are inaccessible
to any other process.
Streaming of metrics between Netdata agents requires an API key and can also be encrypted with TLS if the user
configures it.
The Netdata agent's web API can also use TLS if configured.
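As an illustration, streaming with TLS between two agents is configured in `stream.conf` on the sending side roughly like this (the hostname and API key below are placeholders; appending `:SSL` to the destination requests an encrypted connection):

```
[stream]
    enabled = yes
    destination = parent.example.com:19999:SSL
    api key = 11111111-2222-3333-4444-555555555555
```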
When Netdata agents are claimed to Netdata Cloud, the communication happens via MQTT over WebSockets over TLS, and
public/private keys are used for authorizing access. These keys are exchanged during the claiming process (usually
during the provisioning of each agent).
## Authentication
Direct user access to the agent is not authenticated, considering that users should either use Netdata Cloud, or they
are already on the same LAN, or they have configured proper firewall policies. However, Netdata agents can be hidden
behind an authenticating web proxy if required.
For other Netdata agents streaming metrics to an agent, authentication via API keys is required and TLS can be used if
configured.
For Netdata Cloud accessing Netdata agents, public/private key cryptography is used and TLS is mandatory.
## Security Vulnerability Response
If a security vulnerability is found in the Netdata Agent, the Netdata team acknowledges and analyzes each report within
three working days, kicking off a Security Release Process. Any vulnerability information shared with the Netdata team
stays within the Netdata project and is not disseminated to other projects unless necessary for fixing the issue. The
reporter is kept updated as the security issue moves from triage to identified fix, to release planning. More
information can be found [here](https://github.com/netdata/netdata/security/policy).
## Protection Against Common Security Threats
The Netdata agent is resilient against common security threats such as DDoS attacks and SQL injections. For DDoS, the
Netdata agent uses a fixed number of threads for processing requests, providing a cap on the resources that can be
consumed. It also automatically manages its memory to prevent overutilization. SQL injections are prevented because
nothing from the UI is passed back to the data collection plugins accessing databases.
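The fixed-threads idea can be sketched as follows (a generic illustration, not Netdata's actual C implementation): a bounded worker pool caps concurrency, so a flood of requests queues up instead of spawning unbounded threads.

```python
from concurrent.futures import ThreadPoolExecutor

def handle(request_id):
    # Stand-in for serving one API request.
    return f"served {request_id}"

# max_workers is a hard cap on concurrent request processing,
# regardless of how many requests arrive at once.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(handle, range(100)))

# All 100 requests are served, but never more than 4 at a time.
```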
Additionally, the Netdata agent runs as a normal, unprivileged operating system user (a few data collectors require
escalated privileges, but these privileges are isolated to just them). Every Netdata process runs by default with a
nice priority to protect production applications in case the system is starving for CPU resources, and Netdata agents
are configured by default to be the first processes to be killed by the operating system in case the operating system
starves for memory resources (OS OOM, Operating System Out Of Memory, events).
## User Customizable Security Settings
Netdata provides users with the flexibility to customize agent security settings. Users can configure TLS across the
system, and the agent provides extensive access control lists on all its interfaces to limit access to its endpoints
based on IP. Additionally, users can configure the CPU and Memory priority of Netdata agents.
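For example, per-endpoint access control lists live in the `[web]` section of `netdata.conf` (the addresses below are placeholders):

```
[web]
    allow connections from = localhost 10.*
    allow dashboard from = localhost 10.*
    allow management from = localhost
```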

View File

@ -0,0 +1,125 @@
# Netdata Cloud Security and Privacy Design
Netdata Cloud is designed with a security-first approach to ensure the highest level of protection for user data. When
using Netdata Cloud in environments that require compliance with standards like PCI DSS, SOC 2, or HIPAA, users can be
confident that all collected data is stored within their infrastructure. Data viewed on dashboards and alert
notifications travel over Netdata Cloud, but are not stored; instead, they are transformed in transit, aggregated from
multiple agents and parents (centralization points), to appear as one data source in the user's browser.
## User Identification and Authorization
Netdata Cloud requires only an email address to create an account and use the service. User identification and
authorization are conducted either via third-party integrations (Google, GitHub accounts) or through short-lived access
tokens sent to the user's email account. Email addresses are stored securely in our production database on AWS and are
also used for product and marketing communications. Netdata Cloud does not store user credentials.
## Data Storage and Transfer
Although Netdata Cloud does not store metric data, it does keep some metadata for each node connected to user spaces.
This metadata includes the hostname, information from the `/api/v1/info` endpoint, metric metadata
from `/api/v1/contexts`, and alerts configurations from `/api/v1/alarms`. This data is securely stored in our production
database on AWS and copied to Google BigQuery for analytics purposes.
All data visible on Netdata Cloud is transferred through the Agent-Cloud link (ACLK) mechanism, which securely connects
a Netdata Agent to Netdata Cloud. The ACLK is encrypted and safe, and is only established if the user connects/claims
their node. Data in transit between a user and Netdata Cloud is encrypted using TLS.
## Data Retention and Erasure
Netdata Cloud maintains backups of customer content for approximately 90 days following a deletion. Users have the
ability to access, retrieve, correct, and delete personal data stored in Netdata Cloud. In case a user is unable to
delete personal data via the self-service functionality, Netdata will delete personal data upon the customer's written
request, in accordance with applicable data protection law.
## Infrastructure and Authentication
Netdata Cloud operates on an Infrastructure as Code (IaC) model. Its microservices environment is completely isolated,
and all changes occur through Terraform. At the edge of Netdata Cloud, there is a TLS termination and an Identity and
Access Management (IAM) service that validates JWT tokens included in request cookies.
Netdata Cloud does not store user credentials.
## Security Features and Response
Netdata Cloud offers a variety of security features, including infrastructure-level dashboards, centralized alerts
notifications, auditing logs, and role-based access to different segments of the infrastructure. The cloud service
employs several protection mechanisms against DDoS attacks, such as rate-limiting and automated blacklisting. It also
uses static code analyzers to prevent other types of attacks.
In the event of potential security vulnerabilities or incidents, Netdata Cloud follows the same process as the Netdata
agent. Every report is acknowledged and analyzed by the Netdata team within three working days, and the team keeps the
reporter updated throughout the process.
## User Customization
Netdata Cloud uses the highest level of security. There is no user customization available out of the box. Its security
settings are designed to provide maximum protection for all users. We offer customization (such as custom SSO
integrations, custom data retention policies, advanced user access controls, tailored audit logs, and integration with
other security tools) on a per-contract basis.
## Deleting Personal Data
Users who wish to remove all personal data (including email and activities) can delete their cloud account by logging
into Netdata Cloud and accessing their profile.
## User Privacy and Data Protection
Netdata Cloud is built with an unwavering commitment to user privacy and data protection. We understand that our users'
data is both sensitive and valuable, and we have implemented stringent measures to ensure its safety.
### Data Collection
Netdata Cloud collects minimal personal information from its users. The only personal data required to create an account
and use the service is an email address. This email address is used for product and marketing communications.
Additionally, the IP address used to access Netdata Cloud is stored in web proxy access logs.
### Data Usage
The collected email addresses are stored in our production database on Amazon Web Services (AWS) and copied to Google
BigQuery, our data lake, for analytics purposes. These analytics are crucial for our product development process. If a
user accepts the use of analytical cookies, their email address and IP are stored in the systems we use to track
application usage (Google Analytics, PostHog, and Gainsight PX). Subscriptions and Payments data are handled by Stripe.
### Data Sharing
Netdata Cloud does not share any personal data with third parties, ensuring the privacy of our users' data, but Netdata
Cloud does use third parties for its services, including, but not limited to, Google Cloud and Amazon Web Services for
its infrastructure, Stripe for payment processing, and Google Analytics, PostHog, and Gainsight PX for analytics.
### Data Protection
We use state-of-the-art security measures to protect user data from unauthorized access, use, or disclosure. All
infrastructure data visible on Netdata Cloud passes through the Agent-Cloud Link (ACLK) mechanism, which securely
connects a Netdata Agent to Netdata Cloud. The ACLK is encrypted, safe, and is only established if the user connects
their node. All data in transit between a user and Netdata Cloud is encrypted using TLS.
### User Control over Data
Netdata provides its users with the ability to access, retrieve, correct, and delete their personal data stored in
Netdata Cloud. This ability may occasionally be limited due to temporary service outages for maintenance or other
updates to Netdata Cloud, or when it is technically not feasible. If a customer is unable to delete personal data via
the self-service functionality, Netdata deletes the data upon the customer's written request, within the timeframe
specified in the Data Protection Agreement (DPA), and in accordance with applicable data protection laws.
### Compliance with Data Protection Laws
Netdata Cloud is fully compliant with data protection laws like the General Data Protection Regulation (GDPR) and the
California Consumer Privacy Act (CCPA).
### Data Transfer
Data transfer within Netdata Cloud is secure and respects the privacy of the user data. The Netdata Agent establishes an
outgoing secure WebSocket (WSS) connection to Netdata Cloud, ensuring that the data is encrypted when in transit.
### Use of Tracking Technologies
Netdata Cloud uses analytical cookies if a user consents to their use. These cookies are used to track the usage of the
application and are stored in systems like Google Analytics, PostHog, and Gainsight PX.
### Data Breach Notification Process
In the event of a data breach, Netdata has a well-defined process in place for notifying users. The details of this
process align with the standard procedures and timelines defined in the Data Protection Agreement (DPA).
We continually review and update our privacy and data protection practices to ensure the highest level of data safety
and privacy for our users.

View File

@ -3,7 +3,7 @@ custom_edit_url: "https://github.com/netdata/netdata/edit/master/integrations/cl
meta_yaml: "https://github.com/netdata/netdata/edit/master/integrations/cloud-notifications/metadata.yaml"
sidebar_label: "Amazon SNS"
learn_status: "Published"
learn_rel_path: "Alerting/Notifications/Centralized Cloud Notifications"
learn_rel_path: "Alerts & Notifications/Notifications/Centralized Cloud Notifications"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE NOTIFICATION'S metadata.yaml FILE"
endmeta-->

View File

@ -3,7 +3,7 @@ custom_edit_url: "https://github.com/netdata/netdata/edit/master/integrations/cl
meta_yaml: "https://github.com/netdata/netdata/edit/master/integrations/cloud-notifications/metadata.yaml"
sidebar_label: "Discord"
learn_status: "Published"
learn_rel_path: "Alerting/Notifications/Centralized Cloud Notifications"
learn_rel_path: "Alerts & Notifications/Notifications/Centralized Cloud Notifications"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE NOTIFICATION'S metadata.yaml FILE"
endmeta-->

View File

@ -3,7 +3,7 @@ custom_edit_url: "https://github.com/netdata/netdata/edit/master/integrations/cl
meta_yaml: "https://github.com/netdata/netdata/edit/master/integrations/cloud-notifications/metadata.yaml"
sidebar_label: "Mattermost"
learn_status: "Published"
learn_rel_path: "Alerting/Notifications/Centralized Cloud Notifications"
learn_rel_path: "Alerts & Notifications/Notifications/Centralized Cloud Notifications"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE NOTIFICATION'S metadata.yaml FILE"
endmeta-->

View File

@@ -3,7 +3,7 @@ custom_edit_url: "https://github.com/netdata/netdata/edit/master/integrations/cl
meta_yaml: "https://github.com/netdata/netdata/edit/master/integrations/cloud-notifications/metadata.yaml"
sidebar_label: "Microsoft Teams"
learn_status: "Published"
learn_rel_path: "Alerting/Notifications/Centralized Cloud Notifications"
learn_rel_path: "Alerts & Notifications/Notifications/Centralized Cloud Notifications"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE NOTIFICATION'S metadata.yaml FILE"
endmeta-->

View File

@@ -3,7 +3,7 @@ custom_edit_url: "https://github.com/netdata/netdata/edit/master/integrations/cl
meta_yaml: "https://github.com/netdata/netdata/edit/master/integrations/cloud-notifications/metadata.yaml"
sidebar_label: "Netdata Mobile App"
learn_status: "Published"
learn_rel_path: "Alerting/Notifications/Centralized Cloud Notifications"
learn_rel_path: "Alerts & Notifications/Notifications/Centralized Cloud Notifications"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE NOTIFICATION'S metadata.yaml FILE"
endmeta-->

View File

@@ -3,7 +3,7 @@ custom_edit_url: "https://github.com/netdata/netdata/edit/master/integrations/cl
meta_yaml: "https://github.com/netdata/netdata/edit/master/integrations/cloud-notifications/metadata.yaml"
sidebar_label: "Opsgenie"
learn_status: "Published"
learn_rel_path: "Alerting/Notifications/Centralized Cloud Notifications"
learn_rel_path: "Alerts & Notifications/Notifications/Centralized Cloud Notifications"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE NOTIFICATION'S metadata.yaml FILE"
endmeta-->

View File

@@ -3,7 +3,7 @@ custom_edit_url: "https://github.com/netdata/netdata/edit/master/integrations/cl
meta_yaml: "https://github.com/netdata/netdata/edit/master/integrations/cloud-notifications/metadata.yaml"
sidebar_label: "PagerDuty"
learn_status: "Published"
learn_rel_path: "Alerting/Notifications/Centralized Cloud Notifications"
learn_rel_path: "Alerts & Notifications/Notifications/Centralized Cloud Notifications"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE NOTIFICATION'S metadata.yaml FILE"
endmeta-->

View File

@@ -3,7 +3,7 @@ custom_edit_url: "https://github.com/netdata/netdata/edit/master/integrations/cl
meta_yaml: "https://github.com/netdata/netdata/edit/master/integrations/cloud-notifications/metadata.yaml"
sidebar_label: "RocketChat"
learn_status: "Published"
learn_rel_path: "Alerting/Notifications/Centralized Cloud Notifications"
learn_rel_path: "Alerts & Notifications/Notifications/Centralized Cloud Notifications"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE NOTIFICATION'S metadata.yaml FILE"
endmeta-->

View File

@@ -3,7 +3,7 @@ custom_edit_url: "https://github.com/netdata/netdata/edit/master/integrations/cl
meta_yaml: "https://github.com/netdata/netdata/edit/master/integrations/cloud-notifications/metadata.yaml"
sidebar_label: "Slack"
learn_status: "Published"
learn_rel_path: "Alerting/Notifications/Centralized Cloud Notifications"
learn_rel_path: "Alerts & Notifications/Notifications/Centralized Cloud Notifications"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE NOTIFICATION'S metadata.yaml FILE"
endmeta-->

View File

@@ -3,7 +3,7 @@ custom_edit_url: "https://github.com/netdata/netdata/edit/master/integrations/cl
meta_yaml: "https://github.com/netdata/netdata/edit/master/integrations/cloud-notifications/metadata.yaml"
sidebar_label: "Splunk"
learn_status: "Published"
learn_rel_path: "Alerting/Notifications/Centralized Cloud Notifications"
learn_rel_path: "Alerts & Notifications/Notifications/Centralized Cloud Notifications"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE NOTIFICATION'S metadata.yaml FILE"
endmeta-->

View File

@@ -3,7 +3,7 @@ custom_edit_url: "https://github.com/netdata/netdata/edit/master/integrations/cl
meta_yaml: "https://github.com/netdata/netdata/edit/master/integrations/cloud-notifications/metadata.yaml"
sidebar_label: "Telegram"
learn_status: "Published"
learn_rel_path: "Alerting/Notifications/Centralized Cloud Notifications"
learn_rel_path: "Alerts & Notifications/Notifications/Centralized Cloud Notifications"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE NOTIFICATION'S metadata.yaml FILE"
endmeta-->

View File

@@ -3,7 +3,7 @@ custom_edit_url: "https://github.com/netdata/netdata/edit/master/integrations/cl
meta_yaml: "https://github.com/netdata/netdata/edit/master/integrations/cloud-notifications/metadata.yaml"
sidebar_label: "Webhook"
learn_status: "Published"
learn_rel_path: "Alerting/Notifications/Centralized Cloud Notifications"
learn_rel_path: "Alerts & Notifications/Notifications/Centralized Cloud Notifications"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE NOTIFICATION'S metadata.yaml FILE"
endmeta-->

View File

@@ -146,7 +146,7 @@ def build_readme_from_integration(integration, mode=''):
meta_yaml = integration['edit_link'].replace("blob", "edit")
sidebar_label = integration['meta']['monitored_instance']['name']
learn_rel_path = generate_category_from_name(
integration['meta']['monitored_instance']['categories'][0].split("."), categories)
integration['meta']['monitored_instance']['categories'][0].split("."), categories).replace("Data Collection", "Collecting Metrics")
most_popular = integration['meta']['most_popular']
# build the markdown string
@@ -198,7 +198,7 @@ endmeta-->
meta_yaml: "{meta_yaml}"
sidebar_label: "{sidebar_label}"
learn_status: "Published"
learn_rel_path: "Exporting"
learn_rel_path: "Exporting Metrics"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE EXPORTER'S metadata.yaml FILE"
endmeta-->
@@ -230,7 +230,7 @@ endmeta-->
meta_yaml: "{meta_yaml}"
sidebar_label: "{sidebar_label}"
learn_status: "Published"
learn_rel_path: "{learn_rel_path.replace("notifications", "Alerting/Notifications")}"
learn_rel_path: "{learn_rel_path.replace("notifications", "Alerts & Notifications/Notifications")}"
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE NOTIFICATION'S metadata.yaml FILE"
endmeta-->
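
The category renames this generator applies are plain string substitutions on the front matter it emits. The following standalone sketch mirrors the substitutions visible in the diff above; the helper function itself is illustrative, not part of the script's actual API.

```python
def rename_learn_categories(learn_rel_path: str) -> str:
    """Map old Learn category names to the restructured hierarchy.

    Mirrors the .replace() calls in the generator: "Data Collection"
    becomes "Collecting Metrics", and the "notifications" category is
    nested under "Alerts & Notifications".
    """
    return (learn_rel_path
            .replace("Data Collection", "Collecting Metrics")
            .replace("notifications", "Alerts & Notifications/Notifications"))

print(rename_learn_categories("Data Collection/Databases"))
# Collecting Metrics/Databases
print(rename_learn_categories("notifications"))
# Alerts & Notifications/Notifications
```

Because each input path contains at most one of the old category names, the order of the two `replace()` calls does not matter here.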

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -45,7 +45,7 @@ Configuration for this specific integration is located in the `[[ entry.setup.co
[% endif %]
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata

View File

@@ -215,7 +215,7 @@ There are three potential workarounds for this:
affect many projects other than just Netdata, and there are unfortunately a number of other services out there
that do not provide IPv6 connectivity, so taking this route is likely to save you time in the future as well.
2. If you are using a system that we publish native packages for (see our [platform support
policy](https://github.com/netdata/netdata/blob/master/packaging/PLATFORM_SUPPORT.md) for more details),
policy](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/versions-and-platforms.md) for more details),
you can manually set up our native package repositories as outlined in our [native package install
documentation](https://github.com/netdata/netdata/blob/master/packaging/installer/methods/packages.md). Our official
package repositories do provide service over IPv6, so they work without issue on hosts without IPv4 connectivity.

View File

@@ -21,7 +21,7 @@ You can install Netdata in one of the three following ways:
Each of these installation options requires [Homebrew](https://brew.sh/) for handling dependencies.
> The Netdata Homebrew package is community-created and -maintained.
> Community-maintained packages _may_ receive support from Netdata, but are only a best-effort affair. Learn more about [Netdata's platform support policy](https://github.com/netdata/netdata/blob/master/packaging/PLATFORM_SUPPORT.md).
> Community-maintained packages _may_ receive support from Netdata, but are only a best-effort affair. Learn more about [Netdata's platform support policy](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/versions-and-platforms.md).
## Install Netdata with our automatic one-line installation script

View File

@@ -12,7 +12,7 @@ sidebar_position: 20
For most common Linux distributions that use either DEB or RPM packages, Netdata provides pre-built native packages
for current releases in-line with
our [official platform support policy](https://github.com/netdata/netdata/blob/master/packaging/PLATFORM_SUPPORT.md).
our [official platform support policy](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/versions-and-platforms.md).
These packages will be used by default when attempting to install on a supported platform using our
[kickstart.sh installer script](https://github.com/netdata/netdata/blob/master/packaging/installer/methods/kickstart.md).

View File

@@ -14,5 +14,5 @@ If you have a standard environment that is not yet listed here, just use the
[one line installer kickstart.sh](https://github.com/netdata/netdata/blob/master/packaging/installer/methods/kickstart.md)
If your environment is somewhat old or unusual, check our
[platform support policy](https://github.com/netdata/netdata/blob/master/packaging/PLATFORM_SUPPORT.md).
[platform support policy](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/versions-and-platforms.md).

View File

@@ -45,260 +45,44 @@ If you don't see the app/service you'd like to monitor in this list:
<!-- AUTOGENERATED PART BY integrations/gen_doc_collector_page.py SCRIPT, DO NOT EDIT MANUALLY -->
### APM
- [Alamos FE2 server](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/alamos_fe2_server.md)
- [Apache Airflow](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/apache_airflow.md)
- [Apache Flink](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/apache_flink.md)
- [Audisto](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/audisto.md)
- [Dependency-Track](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/dependency-track.md)
- [Go applications (EXPVAR)](https://github.com/netdata/netdata/blob/master/src/collectors/python.d.plugin/go_expvar/integrations/go_applications_expvar.md)
- [Google Pagespeed](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/google_pagespeed.md)
- [IBM AIX systems Njmon](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/ibm_aix_systems_njmon.md)
- [JMX](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/jmx.md)
- [Java Spring-boot 2 applications](https://github.com/netdata/go.d.plugin/blob/master/modules/springboot2/integrations/java_spring-boot_2_applications.md)
- [NRPE daemon](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/nrpe_daemon.md)
- [Sentry](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/sentry.md)
- [Sysload](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/sysload.md)
- [VSCode](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/vscode.md)
- [YOURLS URL Shortener](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/yourls_url_shortener.md)
- [bpftrace variables](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/bpftrace_variables.md)
- [gpsd](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/gpsd.md)
- [jolokia](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/jolokia.md)
- [phpDaemon](https://github.com/netdata/go.d.plugin/blob/master/modules/phpdaemon/integrations/phpdaemon.md)
### Authentication and Authorization
- [Fail2ban](https://github.com/netdata/netdata/blob/master/src/collectors/python.d.plugin/fail2ban/integrations/fail2ban.md)
- [FreeRADIUS](https://github.com/netdata/go.d.plugin/blob/master/modules/freeradius/integrations/freeradius.md)
- [HashiCorp Vault secrets](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/hashicorp_vault_secrets.md)
- [LDAP](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/ldap.md)
- [OpenLDAP (community)](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/openldap_community.md)
- [OpenLDAP](https://github.com/netdata/netdata/blob/master/src/collectors/python.d.plugin/openldap/integrations/openldap.md)
- [RADIUS](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/radius.md)
- [SSH](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/ssh.md)
- [TACACS](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/tacacs.md)
### Blockchain Servers
- [Chia](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/chia.md)
- [Crypto exchanges](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/crypto_exchanges.md)
- [Cryptowatch](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/cryptowatch.md)
- [Energi Core Wallet](https://github.com/netdata/go.d.plugin/blob/master/modules/energid/integrations/energi_core_wallet.md)
- [Go-ethereum](https://github.com/netdata/go.d.plugin/blob/master/modules/geth/integrations/go-ethereum.md)
- [Helium miner (validator)](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/helium_miner_validator.md)
- [IOTA full node](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/iota_full_node.md)
- [Sia](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/sia.md)
### CICD Platforms
- [Concourse](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/concourse.md)
- [GitLab Runner](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/gitlab_runner.md)
- [Jenkins](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/jenkins.md)
- [Puppet](https://github.com/netdata/netdata/blob/master/src/collectors/python.d.plugin/puppet/integrations/puppet.md)
### Cloud Provider Managed
- [AWS EC2 Compute instances](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/aws_ec2_compute_instances.md)
- [AWS EC2 Spot Instance](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/aws_ec2_spot_instance.md)
- [AWS ECS](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/aws_ecs.md)
- [AWS Health events](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/aws_health_events.md)
- [AWS Quota](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/aws_quota.md)
- [AWS S3 buckets](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/aws_s3_buckets.md)
- [AWS SQS](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/aws_sqs.md)
- [AWS instance health](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/aws_instance_health.md)
- [Akamai Global Traffic Management](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/akamai_global_traffic_management.md)
- [Akami Cloudmonitor](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/akami_cloudmonitor.md)
- [Alibaba Cloud](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/alibaba_cloud.md)
- [ArvanCloud CDN](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/arvancloud_cdn.md)
- [Azure AD App passwords](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/azure_ad_app_passwords.md)
- [Azure Elastic Pool SQL](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/azure_elastic_pool_sql.md)
- [Azure Resources](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/azure_resources.md)
- [Azure SQL](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/azure_sql.md)
- [Azure Service Bus](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/azure_service_bus.md)
- [Azure application](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/azure_application.md)
- [BigQuery](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/bigquery.md)
- [CloudWatch](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/cloudwatch.md)
- [Dell EMC ECS cluster](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/dell_emc_ecs_cluster.md)
- [DigitalOcean](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/digitalocean.md)
- [GCP GCE](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/gcp_gce.md)
- [GCP Quota](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/gcp_quota.md)
- [Google Cloud Platform](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/google_cloud_platform.md)
- [Google Stackdriver](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/google_stackdriver.md)
- [Linode](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/linode.md)
- [Lustre metadata](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/lustre_metadata.md)
- [Nextcloud servers](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/nextcloud_servers.md)
- [OpenStack](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/openstack.md)
- [Zerto](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/zerto.md)
### Containers and VMs
- [Containers](https://github.com/netdata/netdata/blob/master/src/collectors/cgroups.plugin/integrations/containers.md)
- [Docker Engine](https://github.com/netdata/go.d.plugin/blob/master/modules/docker_engine/integrations/docker_engine.md)
- [Docker Hub repository](https://github.com/netdata/go.d.plugin/blob/master/modules/dockerhub/integrations/docker_hub_repository.md)
- [Docker](https://github.com/netdata/go.d.plugin/blob/master/modules/docker/integrations/docker.md)
- [LXC Containers](https://github.com/netdata/netdata/blob/master/src/collectors/cgroups.plugin/integrations/lxc_containers.md)
- [Libvirt Containers](https://github.com/netdata/netdata/blob/master/src/collectors/cgroups.plugin/integrations/libvirt_containers.md)
- [NSX-T](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/nsx-t.md)
- [Podman](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/podman.md)
- [Proxmox Containers](https://github.com/netdata/netdata/blob/master/src/collectors/cgroups.plugin/integrations/proxmox_containers.md)
- [Proxmox VE](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/proxmox_ve.md)
- [VMware vCenter Server](https://github.com/netdata/go.d.plugin/blob/master/modules/vsphere/integrations/vmware_vcenter_server.md)
- [Virtual Machines](https://github.com/netdata/netdata/blob/master/src/collectors/cgroups.plugin/integrations/virtual_machines.md)
- [Xen XCP-ng](https://github.com/netdata/netdata/blob/master/src/collectors/xenstat.plugin/integrations/xen_xcp-ng.md)
- [cAdvisor](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/cadvisor.md)
- [oVirt Containers](https://github.com/netdata/netdata/blob/master/src/collectors/cgroups.plugin/integrations/ovirt_containers.md)
- [vCenter Server Appliance](https://github.com/netdata/go.d.plugin/blob/master/modules/vcsa/integrations/vcenter_server_appliance.md)
### Databases
- [4D Server](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/4d_server.md)
- [AWS RDS](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/aws_rds.md)
- [Cassandra](https://github.com/netdata/go.d.plugin/blob/master/modules/cassandra/integrations/cassandra.md)
- [ClickHouse](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/clickhouse.md)
- [ClusterControl CMON](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/clustercontrol_cmon.md)
- [CockroachDB](https://github.com/netdata/go.d.plugin/blob/master/modules/cockroachdb/integrations/cockroachdb.md)
- [CouchDB](https://github.com/netdata/go.d.plugin/blob/master/modules/couchdb/integrations/couchdb.md)
- [Couchbase](https://github.com/netdata/go.d.plugin/blob/master/modules/couchbase/integrations/couchbase.md)
- [HANA](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/hana.md)
- [Hasura GraphQL Server](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/hasura_graphql_server.md)
- [InfluxDB](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/influxdb.md)
- [Machbase](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/machbase.md)
- [MariaDB](https://github.com/netdata/go.d.plugin/blob/master/modules/mysql/integrations/mariadb.md)
- [Memcached (community)](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/memcached_community.md)
- [Memcached](https://github.com/netdata/netdata/blob/master/src/collectors/python.d.plugin/memcached/integrations/memcached.md)
- [MongoDB](https://github.com/netdata/go.d.plugin/blob/master/modules/mongodb/integrations/mongodb.md)
- [MySQL](https://github.com/netdata/go.d.plugin/blob/master/modules/mysql/integrations/mysql.md)
- [ODBC](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/odbc.md)
- [Oracle DB (community)](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/oracle_db_community.md)
- [Oracle DB](https://github.com/netdata/netdata/blob/master/src/collectors/python.d.plugin/oracledb/integrations/oracle_db.md)
- [Patroni](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/patroni.md)
- [Percona MySQL](https://github.com/netdata/go.d.plugin/blob/master/modules/mysql/integrations/percona_mysql.md)
- [PgBouncer](https://github.com/netdata/go.d.plugin/blob/master/modules/pgbouncer/integrations/pgbouncer.md)
- [Pgpool-II](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/pgpool-ii.md)
- [Pika](https://github.com/netdata/go.d.plugin/blob/master/modules/pika/integrations/pika.md)
- [PostgreSQL](https://github.com/netdata/go.d.plugin/blob/master/modules/postgres/integrations/postgresql.md)
- [ProxySQL](https://github.com/netdata/go.d.plugin/blob/master/modules/proxysql/integrations/proxysql.md)
- [Redis](https://github.com/netdata/go.d.plugin/blob/master/modules/redis/integrations/redis.md)
- [RethinkDB](https://github.com/netdata/netdata/blob/master/src/collectors/python.d.plugin/rethinkdbs/integrations/rethinkdb.md)
- [RiakKV](https://github.com/netdata/netdata/blob/master/src/collectors/python.d.plugin/riakkv/integrations/riakkv.md)
- [SQL Database agnostic](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/sql_database_agnostic.md)
- [Vertica](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/vertica.md)
- [Warp10](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/warp10.md)
- [pgBackRest](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/pgbackrest.md)
### Distributed Computing Systems
- [BOINC](https://github.com/netdata/netdata/blob/master/src/collectors/python.d.plugin/boinc/integrations/boinc.md)
@@ -307,36 +91,10 @@ If you don't see the app/service you'd like to monitor in this list:
### DNS and DHCP Servers
- [Akamai Edge DNS Traffic](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/akamai_edge_dns_traffic.md)
- [CoreDNS](https://github.com/netdata/go.d.plugin/blob/master/modules/coredns/integrations/coredns.md)
- [DNS query](https://github.com/netdata/go.d.plugin/blob/master/modules/dnsquery/integrations/dns_query.md)
- [DNSBL](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/dnsbl.md)
- [DNSdist](https://github.com/netdata/go.d.plugin/blob/master/modules/dnsdist/integrations/dnsdist.md)
- [Dnsmasq DHCP](https://github.com/netdata/go.d.plugin/blob/master/modules/dnsmasq_dhcp/integrations/dnsmasq_dhcp.md)
- [Dnsmasq](https://github.com/netdata/go.d.plugin/blob/master/modules/dnsmasq/integrations/dnsmasq.md)
- [ISC Bind (RNDC)](https://github.com/netdata/netdata/blob/master/src/collectors/python.d.plugin/bind_rndc/integrations/isc_bind_rndc.md)
- [ISC DHCP](https://github.com/netdata/go.d.plugin/blob/master/modules/isc_dhcpd/integrations/isc_dhcp.md)
- [Name Server Daemon](https://github.com/netdata/netdata/blob/master/src/collectors/python.d.plugin/nsd/integrations/name_server_daemon.md)
- [NextDNS](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/nextdns.md)
- [Pi-hole](https://github.com/netdata/go.d.plugin/blob/master/modules/pihole/integrations/pi-hole.md)
- [PowerDNS Authoritative Server](https://github.com/netdata/go.d.plugin/blob/master/modules/powerdns/integrations/powerdns_authoritative_server.md)
- [PowerDNS Recursor](https://github.com/netdata/go.d.plugin/blob/master/modules/powerdns_recursor/integrations/powerdns_recursor.md)
- [Unbound](https://github.com/netdata/go.d.plugin/blob/master/modules/unbound/integrations/unbound.md)
### eBPF
- [eBPF Cachestat](https://github.com/netdata/netdata/blob/master/src/collectors/ebpf.plugin/integrations/ebpf_cachestat.md)
@@ -375,10 +133,6 @@ If you don't see the app/service you'd like to monitor in this list:
### FreeBSD
- [FreeBSD NFS](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/freebsd_nfs.md)
- [FreeBSD RCTL-RACCT](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/freebsd_rctl-racct.md)
- [dev.cpu.0.freq](https://github.com/netdata/netdata/blob/master/src/collectors/freebsd.plugin/integrations/dev.cpu.0.freq.md)
- [dev.cpu.temperature](https://github.com/netdata/netdata/blob/master/src/collectors/freebsd.plugin/integrations/dev.cpu.temperature.md)
@@ -439,200 +193,44 @@ If you don't see the app/service you'd like to monitor in this list:
- [zfs](https://github.com/netdata/netdata/blob/master/src/collectors/freebsd.plugin/integrations/zfs.md)
### FTP Servers
- [ProFTPD](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/proftpd.md)
### Gaming
- [BungeeCord](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/bungeecord.md)
- [CS:GO](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/cs:go.md)
- [Minecraft](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/minecraft.md)
- [OpenRCT2](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/openrct2.md)
- [SpigotMC](https://github.com/netdata/netdata/blob/master/src/collectors/python.d.plugin/spigotmc/integrations/spigotmc.md)
- [Steam](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/steam.md)
### Generic Data Collection
- [Custom Exporter](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/custom_exporter.md)
- [Excel spreadsheet](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/excel_spreadsheet.md)
- [Generic Command Line Output](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/generic_command_line_output.md)
- [JetBrains Floating License Server](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/jetbrains_floating_license_server.md)
- [OpenWeatherMap](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/openweathermap.md)
- [Pandas](https://github.com/netdata/netdata/blob/master/src/collectors/python.d.plugin/pandas/integrations/pandas.md)
- [Prometheus endpoint](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/prometheus_endpoint.md)
- [SNMP devices](https://github.com/netdata/go.d.plugin/blob/master/modules/snmp/integrations/snmp_devices.md)
- [Shell command](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/shell_command.md)
- [Tankerkoenig API](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/tankerkoenig_api.md)
- [TwinCAT ADS Web Service](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/twincat_ads_web_service.md)
### Hardware Devices and Sensors
- [1-Wire Sensors](https://github.com/netdata/netdata/blob/master/src/collectors/python.d.plugin/w1sensor/integrations/1-wire_sensors.md)
- [AM2320](https://github.com/netdata/netdata/blob/master/src/collectors/python.d.plugin/am2320/integrations/am2320.md)
- [AMD CPU & GPU](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/amd_cpu_&_gpu.md)
- [AMD GPU](https://github.com/netdata/netdata/blob/master/src/collectors/proc.plugin/integrations/amd_gpu.md)
- [ARM HWCPipe](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/arm_hwcpipe.md)
- [CUPS](https://github.com/netdata/netdata/blob/master/src/collectors/cups.plugin/integrations/cups.md)
- [HDD temperature](https://github.com/netdata/netdata/blob/master/src/collectors/python.d.plugin/hddtemp/integrations/hdd_temperature.md)
- [HP iLO](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/hp_ilo.md)
- [IBM CryptoExpress (CEX) cards](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/ibm_cryptoexpress_cex_cards.md)
- [IBM Z Hardware Management Console](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/ibm_z_hardware_management_console.md)
- [IPMI (By SoundCloud)](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/ipmi_by_soundcloud.md)
- [Intelligent Platform Management Interface (IPMI)](https://github.com/netdata/netdata/blob/master/src/collectors/freeipmi.plugin/integrations/intelligent_platform_management_interface_ipmi.md)
- [Linux Sensors (lm-sensors)](https://github.com/netdata/netdata/blob/master/src/collectors/python.d.plugin/sensors/integrations/linux_sensors_lm-sensors.md)
- [Linux Sensors (sysfs)](https://github.com/netdata/netdata/blob/master/src/collectors/charts.d.plugin/sensors/integrations/linux_sensors_sysfs.md)
- [NVML](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/nvml.md)
- [Nvidia GPU](https://github.com/netdata/go.d.plugin/blob/master/modules/nvidia_smi/integrations/nvidia_gpu.md)
- [Raritan PDU](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/raritan_pdu.md)
- [S.M.A.R.T.](https://github.com/netdata/netdata/blob/master/src/collectors/python.d.plugin/smartd_log/integrations/s.m.a.r.t..md)
- [ServerTech](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/servertech.md)
- [Siemens S7 PLC](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/siemens_s7_plc.md)
- [T-Rex NVIDIA GPU Miner](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/t-rex_nvidia_gpu_miner.md)
### IoT Devices
- [Airthings Waveplus air sensor](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/airthings_waveplus_air_sensor.md)
- [Bobcat Miner 300](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/bobcat_miner_300.md)
- [Christ Elektronik CLM5IP power panel](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/christ_elektronik_clm5ip_power_panel.md)
- [CraftBeerPi](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/craftbeerpi.md)
- [Dutch Electricity Smart Meter](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/dutch_electricity_smart_meter.md)
- [Elgato Key Light devices](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/elgato_key_light_devices..md)
- [Energomera smart power meters](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/energomera_smart_power_meters.md)
- [Helium hotspot](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/helium_hotspot.md)
- [Homebridge](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/homebridge.md)
- [Homey](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/homey.md)
- [Jarvis Standing Desk](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/jarvis_standing_desk.md)
- [MP707 USB thermometer](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/mp707_usb_thermometer.md)
- [Modbus protocol](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/modbus_protocol.md)
- [Monnit Sensors MQTT](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/monnit_sensors_mqtt.md)
- [Nature Remo E lite devices](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/nature_remo_e_lite_devices.md)
- [Netatmo sensors](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/netatmo_sensors.md)
- [OpenHAB](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/openhab.md)
- [Personal Weather Station](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/personal_weather_station.md)
- [Philips Hue](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/philips_hue.md)
- [Pimoroni Enviro+](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/pimoroni_enviro+.md)
- [Powerpal devices](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/powerpal_devices.md)
- [Radio Thermostat](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/radio_thermostat.md)
- [SMA Inverters](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/sma_inverters.md)
- [Salicru EQX inverter](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/salicru_eqx_inverter.md)
- [Sense Energy](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/sense_energy.md)
- [Shelly humidity sensor](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/shelly_humidity_sensor.md)
- [Smart meters SML](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/smart_meters_sml.md)
- [Solar logging stick](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/solar_logging_stick.md)
- [SolarEdge inverters](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/solaredge_inverters.md)
- [Solis Ginlong 5G inverters](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/solis_ginlong_5g_inverters.md)
- [Sunspec Solar Energy](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/sunspec_solar_energy.md)
- [TP-Link P110](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/tp-link_p110.md)
- [Tado smart heating solution](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/tado_smart_heating_solution.md)
- [Tesla Powerwall](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/tesla_powerwall.md)
- [Tesla Wall Connector](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/tesla_wall_connector.md)
- [Tesla vehicle](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/tesla_vehicle.md)
- [Xiaomi Mi Flora](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/xiaomi_mi_flora.md)
- [iqAir AirVisual air quality monitors](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/iqair_airvisual_air_quality_monitors.md)
### Kubernetes
- [Cilium Agent](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/cilium_agent.md)
- [Cilium Operator](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/cilium_operator.md)
- [Cilium Proxy](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/cilium_proxy.md)
- [Kubelet](https://github.com/netdata/go.d.plugin/blob/master/modules/k8s_kubelet/integrations/kubelet.md)
- [Kubeproxy](https://github.com/netdata/go.d.plugin/blob/master/modules/k8s_kubeproxy/integrations/kubeproxy.md)
- [Kubernetes Cluster Cloud Cost](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/kubernetes_cluster_cloud_cost.md)
- [Kubernetes Cluster State](https://github.com/netdata/go.d.plugin/blob/master/modules/k8s_state/integrations/kubernetes_cluster_state.md)
- [Kubernetes Containers](https://github.com/netdata/netdata/blob/master/src/collectors/cgroups.plugin/integrations/kubernetes_containers.md)
- [Rancher](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/rancher.md)
### Linux Systems
- [CPU performance](https://github.com/netdata/netdata/blob/master/src/collectors/perf.plugin/integrations/cpu_performance.md)
- [Disk space](https://github.com/netdata/netdata/blob/master/src/collectors/diskspace.plugin/integrations/disk_space.md)
- [Files and directories](https://github.com/netdata/go.d.plugin/blob/master/modules/filecheck/integrations/files_and_directories.md)
- [OpenRC](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/openrc.md)
#### CPU
- [Interrupts](https://github.com/netdata/netdata/blob/master/src/collectors/proc.plugin/integrations/interrupts.md)
@@ -669,8 +267,6 @@ If you don't see the app/service you'd like to monitor in this list:
- [Synproxy](https://github.com/netdata/netdata/blob/master/src/collectors/proc.plugin/integrations/synproxy.md)
- [nftables](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/nftables.md)
#### IPC
- [Inter Process Communication](https://github.com/netdata/netdata/blob/master/src/collectors/proc.plugin/integrations/inter_process_communication.md)
@@ -743,182 +339,32 @@ If you don't see the app/service you'd like to monitor in this list:
- [System statistics](https://github.com/netdata/netdata/blob/master/src/collectors/proc.plugin/integrations/system_statistics.md)
### Logs Servers
- [AuthLog](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/authlog.md)
- [Fluentd](https://github.com/netdata/go.d.plugin/blob/master/modules/fluentd/integrations/fluentd.md)
- [Graylog Server](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/graylog_server.md)
- [Logstash](https://github.com/netdata/go.d.plugin/blob/master/modules/logstash/integrations/logstash.md)
- [journald](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/journald.md)
- [loki](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/loki.md)
- [mtail](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/mtail.md)
### macOS Systems
- [Apple Time Machine](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/apple_time_machine.md)
- [macOS](https://github.com/netdata/netdata/blob/master/src/collectors/macos.plugin/integrations/macos.md)
### Mail Servers
- [DMARC](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/dmarc.md)
- [Dovecot](https://github.com/netdata/netdata/blob/master/src/collectors/python.d.plugin/dovecot/integrations/dovecot.md)
- [Exim](https://github.com/netdata/netdata/blob/master/src/collectors/python.d.plugin/exim/integrations/exim.md)
- [Halon](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/halon.md)
- [Maildir](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/maildir.md)
- [Postfix](https://github.com/netdata/netdata/blob/master/src/collectors/python.d.plugin/postfix/integrations/postfix.md)
### Media Services
- [Discourse](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/discourse.md)
- [Icecast](https://github.com/netdata/netdata/blob/master/src/collectors/python.d.plugin/icecast/integrations/icecast.md)
- [OBS Studio](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/obs_studio.md)
- [RetroShare](https://github.com/netdata/netdata/blob/master/src/collectors/python.d.plugin/retroshare/integrations/retroshare.md)
- [SABnzbd](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/sabnzbd.md)
- [Stream](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/stream.md)
- [Twitch](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/twitch.md)
- [Zulip](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/zulip.md)
### Message Brokers
- [ActiveMQ](https://github.com/netdata/go.d.plugin/blob/master/modules/activemq/integrations/activemq.md)
- [Apache Pulsar](https://github.com/netdata/go.d.plugin/blob/master/modules/pulsar/integrations/apache_pulsar.md)
- [Beanstalk](https://github.com/netdata/netdata/blob/master/src/collectors/python.d.plugin/beanstalk/integrations/beanstalk.md)
- [IBM MQ](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/ibm_mq.md)
- [Kafka Connect](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/kafka_connect.md)
- [Kafka ZooKeeper](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/kafka_zookeeper.md)
- [Kafka](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/kafka.md)
- [MQTT Blackbox](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/mqtt_blackbox.md)
- [RabbitMQ](https://github.com/netdata/go.d.plugin/blob/master/modules/rabbitmq/integrations/rabbitmq.md)
- [Redis Queue](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/redis_queue.md)
- [VerneMQ](https://github.com/netdata/go.d.plugin/blob/master/modules/vernemq/integrations/vernemq.md)
- [XMPP Server](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/xmpp_server.md)
- [mosquitto](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/mosquitto.md)
### Networking Stack and Network Interfaces
- [8430FT modem](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/8430ft_modem.md)
- [A10 ACOS network devices](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/a10_acos_network_devices.md)
- [Andrews & Arnold line status](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/andrews_&_arnold_line_status.md)
- [Aruba devices](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/aruba_devices.md)
- [Bird Routing Daemon](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/bird_routing_daemon.md)
- [Checkpoint device](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/checkpoint_device.md)
- [Cisco ACI](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/cisco_aci.md)
- [Citrix NetScaler](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/citrix_netscaler.md)
- [DDWRT Routers](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/ddwrt_routers.md)
- [FRRouting](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/frrouting.md)
- [Fortigate firewall](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/fortigate_firewall.md)
- [Freifunk network](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/freifunk_network.md)
- [Fritzbox network devices](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/fritzbox_network_devices.md)
- [Hitron CGN series CPE](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/hitron_cgn_series_cpe.md)
- [Hitron CODA Cable Modem](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/hitron_coda_cable_modem.md)
- [Huawei devices](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/huawei_devices.md)
- [Keepalived](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/keepalived.md)
- [Meraki dashboard](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/meraki_dashboard.md)
- [MikroTik devices](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/mikrotik_devices.md)
- [Mikrotik RouterOS devices](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/mikrotik_routeros_devices.md)
- [NetFlow](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/netflow.md)
- [NetMeter](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/netmeter.md)
- [Open vSwitch](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/open_vswitch.md)
- [OpenROADM devices](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/openroadm_devices.md)
- [RIPE Atlas](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/ripe_atlas.md)
- [SONiC NOS](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/sonic_nos.md)
- [SmartRG 808AC Cable Modem](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/smartrg_808ac_cable_modem.md)
- [Starlink (SpaceX)](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/starlink_spacex.md)
- [Traceroute](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/traceroute.md)
- [Ubiquiti UFiber OLT](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/ubiquiti_ufiber_olt.md)
- [Zyxel GS1200-8](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/zyxel_gs1200-8.md)
### Incident Management
- [OTRS](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/otrs.md)
- [StatusPage](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/statuspage.md)
### Observability
- [Collectd](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/collectd.md)
- [Dynatrace](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/dynatrace.md)
- [Grafana](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/grafana.md)
- [Hubble](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/hubble.md)
- [Naemon](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/naemon.md)
- [Nagios](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/nagios.md)
- [New Relic](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/new_relic.md)
### Other
- [Example collector](https://github.com/netdata/netdata/blob/master/src/collectors/python.d.plugin/example/integrations/example_collector.md)
- [GitHub API rate limit](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/github_api_rate_limit.md)
- [GitHub repository](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/github_repository.md)
- [Netdata Agent alarms](https://github.com/netdata/netdata/blob/master/src/collectors/python.d.plugin/alarms/integrations/netdata_agent_alarms.md)
- [python.d changefinder](https://github.com/netdata/netdata/blob/master/src/collectors/python.d.plugin/changefinder/integrations/python.d_changefinder.md)
@@ -929,266 +375,62 @@ If you don't see the app/service you'd like to monitor in this list:
- [Applications](https://github.com/netdata/netdata/blob/master/src/collectors/apps.plugin/integrations/applications.md)
- [Supervisor](https://github.com/netdata/go.d.plugin/blob/master/modules/supervisord/integrations/supervisor.md)
- [User Groups](https://github.com/netdata/netdata/blob/master/src/collectors/apps.plugin/integrations/user_groups.md)
- [Users](https://github.com/netdata/netdata/blob/master/src/collectors/apps.plugin/integrations/users.md)
### Provisioning Systems
- [BOSH](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/bosh.md)
- [Cloud Foundry Firehose](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/cloud_foundry_firehose.md)
- [Cloud Foundry](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/cloud_foundry.md)
- [Spacelift](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/spacelift.md)
### Search Engines
- [Elasticsearch](https://github.com/netdata/go.d.plugin/blob/master/modules/elasticsearch/integrations/elasticsearch.md)
- [Meilisearch](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/meilisearch.md)
- [OpenSearch](https://github.com/netdata/go.d.plugin/blob/master/modules/elasticsearch/integrations/opensearch.md)
- [Solr](https://github.com/netdata/go.d.plugin/blob/master/modules/solr/integrations/solr.md)
- [Sphinx](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/sphinx.md)
### Security Systems
- [Certificate Transparency](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/certificate_transparency.md)
- [ClamAV daemon](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/clamav_daemon.md)
- [Clamscan results](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/clamscan_results.md)
- [Crowdsec](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/crowdsec.md)
- [Honeypot](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/honeypot.md)
- [Lynis audit reports](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/lynis_audit_reports.md)
- [OpenVAS](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/openvas.md)
- [SSL Certificate](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/ssl_certificate.md)
- [Suricata](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/suricata.md)
- [Vault PKI](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/vault_pki.md)
### Service Discovery / Registry
- [Consul](https://github.com/netdata/go.d.plugin/blob/master/modules/consul/integrations/consul.md)
- [Kafka Consumer Lag](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/kafka_consumer_lag.md)
- [ZooKeeper](https://github.com/netdata/go.d.plugin/blob/master/modules/zookeeper/integrations/zookeeper.md)
- [etcd](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/etcd.md)
### Storage, Mount Points and Filesystems
- [AdaptecRAID](https://github.com/netdata/netdata/blob/master/src/collectors/python.d.plugin/adaptec_raid/integrations/adaptecraid.md)
- [Altaro Backup](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/altaro_backup.md)
- [Borg backup](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/borg_backup.md)
- [CVMFS clients](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/cvmfs_clients.md)
- [Ceph](https://github.com/netdata/netdata/blob/master/src/collectors/python.d.plugin/ceph/integrations/ceph.md)
- [Dell EMC Isilon cluster](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/dell_emc_isilon_cluster.md)
- [Dell EMC ScaleIO](https://github.com/netdata/go.d.plugin/blob/master/modules/scaleio/integrations/dell_emc_scaleio.md)
- [Dell EMC XtremIO cluster](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/dell_emc_xtremio_cluster.md)
- [Dell PowerMax](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/dell_powermax.md)
- [EOS](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/eos.md)
- [Generic storage enclosure tool](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/generic_storage_enclosure_tool.md)
- [HDSentinel](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/hdsentinel.md)
- [HP Smart Storage Arrays](https://github.com/netdata/netdata/blob/master/src/collectors/python.d.plugin/hpssa/integrations/hp_smart_storage_arrays.md)
- [Hadoop Distributed File System (HDFS)](https://github.com/netdata/go.d.plugin/blob/master/modules/hdfs/integrations/hadoop_distributed_file_system_hdfs.md)
- [IBM Spectrum Virtualize](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/ibm_spectrum_virtualize.md)
- [IBM Spectrum](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/ibm_spectrum.md)
- [IPFS](https://github.com/netdata/netdata/blob/master/src/collectors/python.d.plugin/ipfs/integrations/ipfs.md)
- [Lagerist Disk latency](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/lagerist_disk_latency.md)
- [MegaCLI](https://github.com/netdata/netdata/blob/master/src/collectors/python.d.plugin/megacli/integrations/megacli.md)
- [MogileFS](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/mogilefs.md)
- [NVMe devices](https://github.com/netdata/go.d.plugin/blob/master/modules/nvme/integrations/nvme_devices.md)
- [NetApp Solidfire](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/netapp_solidfire.md)
- [Netapp ONTAP API](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/netapp_ontap_api.md)
- [Samba](https://github.com/netdata/netdata/blob/master/src/collectors/python.d.plugin/samba/integrations/samba.md)
- [Starwind VSAN VSphere Edition](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/starwind_vsan_vsphere_edition.md)
- [Storidge](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/storidge.md)
- [Synology ActiveBackup](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/synology_activebackup.md)
### Synthetic Checks
- [Blackbox](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/blackbox.md)
- [Domain expiration date](https://github.com/netdata/go.d.plugin/blob/master/modules/whoisquery/integrations/domain_expiration_date.md)
- [HTTP Endpoints](https://github.com/netdata/go.d.plugin/blob/master/modules/httpcheck/integrations/http_endpoints.md)
- [IOPing](https://github.com/netdata/netdata/blob/master/src/collectors/ioping.plugin/integrations/ioping.md)
- [Idle OS Jitter](https://github.com/netdata/netdata/blob/master/src/collectors/idlejitter.plugin/integrations/idle_os_jitter.md)
- [Monit](https://github.com/netdata/netdata/blob/master/src/collectors/python.d.plugin/monit/integrations/monit.md)
- [Ping](https://github.com/netdata/go.d.plugin/blob/master/modules/ping/integrations/ping.md)
- [Pingdom](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/pingdom.md)
- [Site 24x7](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/site_24x7.md)
- [TCP Endpoints](https://github.com/netdata/go.d.plugin/blob/master/modules/portcheck/integrations/tcp_endpoints.md)
- [Uptimerobot](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/uptimerobot.md)
- [X.509 certificate](https://github.com/netdata/go.d.plugin/blob/master/modules/x509check/integrations/x.509_certificate.md)
### System Clock and NTP
- [Chrony](https://github.com/netdata/go.d.plugin/blob/master/modules/chrony/integrations/chrony.md)
- [NTPd](https://github.com/netdata/go.d.plugin/blob/master/modules/ntpd/integrations/ntpd.md)
- [Timex](https://github.com/netdata/netdata/blob/master/src/collectors/timex.plugin/integrations/timex.md)
### Systemd
- [Systemd Services](https://github.com/netdata/netdata/blob/master/src/collectors/cgroups.plugin/integrations/systemd_services.md)
- [Systemd Units](https://github.com/netdata/go.d.plugin/blob/master/modules/systemdunits/integrations/systemd_units.md)
- [systemd-logind users](https://github.com/netdata/go.d.plugin/blob/master/modules/logind/integrations/systemd-logind_users.md)
### Task Queues
- [Celery](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/celery.md)
- [Mesos](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/mesos.md)
- [Slurm](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/slurm.md)
### Telephony Servers
- [GTP](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/gtp.md)
- [Kannel](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/kannel.md)
- [OpenSIPS](https://github.com/netdata/netdata/blob/master/src/collectors/charts.d.plugin/opensips/integrations/opensips.md)
### UPS
- [APC UPS](https://github.com/netdata/netdata/blob/master/src/collectors/charts.d.plugin/apcupsd/integrations/apc_ups.md)
- [Eaton UPS](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/eaton_ups.md)
- [UPS (NUT)](https://github.com/netdata/go.d.plugin/blob/master/modules/upsd/integrations/ups_nut.md)
### VPNs
- [Fastd](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/fastd.md)
- [Libreswan](https://github.com/netdata/netdata/blob/master/src/collectors/charts.d.plugin/libreswan/integrations/libreswan.md)
- [OpenVPN status log](https://github.com/netdata/go.d.plugin/blob/master/modules/openvpn_status_log/integrations/openvpn_status_log.md)
- [OpenVPN](https://github.com/netdata/go.d.plugin/blob/master/modules/openvpn/integrations/openvpn.md)
- [SoftEther VPN Server](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/softether_vpn_server.md)
- [Speedify CLI](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/speedify_cli.md)
- [Tor](https://github.com/netdata/netdata/blob/master/src/collectors/python.d.plugin/tor/integrations/tor.md)
- [WireGuard](https://github.com/netdata/go.d.plugin/blob/master/modules/wireguard/integrations/wireguard.md)
- [strongSwan](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/strongswan.md)
### Web Servers and Web Proxies
- [APIcast](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/apicast.md)
- [Apache](https://github.com/netdata/go.d.plugin/blob/master/modules/apache/integrations/apache.md)
- [Clash](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/clash.md)
- [Cloudflare PCAP](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/cloudflare_pcap.md)
- [Envoy](https://github.com/netdata/go.d.plugin/blob/master/modules/envoy/integrations/envoy.md)
- [Gobetween](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/gobetween.md)
- [HAProxy](https://github.com/netdata/go.d.plugin/blob/master/modules/haproxy/integrations/haproxy.md)
- [HHVM](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/integrations/hhvm.md)
- [HTTPD](https://github.com/netdata/go.d.plugin/blob/master/modules/apache/integrations/httpd.md)
- [Lighttpd](https://github.com/netdata/go.d.plugin/blob/master/modules/lighttpd/integrations/lighttpd.md)
- [Litespeed](https://github.com/netdata/netdata/blob/master/src/collectors/python.d.plugin/litespeed/integrations/litespeed.md)
- [NGINX Plus](https://github.com/netdata/go.d.plugin/blob/master/modules/nginxplus/integrations/nginx_plus.md)
- [NGINX VTS](https://github.com/netdata/go.d.plugin/blob/master/modules/nginxvts/integrations/nginx_vts.md)
- [NGINX](https://github.com/netdata/go.d.plugin/blob/master/modules/nginx/integrations/nginx.md)
- [PHP-FPM](https://github.com/netdata/go.d.plugin/blob/master/modules/phpfpm/integrations/php-fpm.md)
- [Squid log files](https://github.com/netdata/go.d.plugin/blob/master/modules/squidlog/integrations/squid_log_files.md)
- [Squid](https://github.com/netdata/netdata/blob/master/src/collectors/python.d.plugin/squid/integrations/squid.md)
- [Tengine](https://github.com/netdata/go.d.plugin/blob/master/modules/tengine/integrations/tengine.md)
- [Tomcat](https://github.com/netdata/netdata/blob/master/src/collectors/python.d.plugin/tomcat/integrations/tomcat.md)
- [Traefik](https://github.com/netdata/go.d.plugin/blob/master/modules/traefik/integrations/traefik.md)
- [Varnish](https://github.com/netdata/netdata/blob/master/src/collectors/python.d.plugin/varnish/integrations/varnish.md)
- [Web server log files](https://github.com/netdata/go.d.plugin/blob/master/modules/weblog/integrations/web_server_log_files.md)
- [uWSGI](https://github.com/netdata/netdata/blob/master/src/collectors/python.d.plugin/uwsgi/integrations/uwsgi.md)
### Windows Systems
- [Active Directory](https://github.com/netdata/go.d.plugin/blob/master/modules/windows/integrations/active_directory.md)
- [HyperV](https://github.com/netdata/go.d.plugin/blob/master/modules/windows/integrations/hyperv.md)
- [MS Exchange](https://github.com/netdata/go.d.plugin/blob/master/modules/windows/integrations/ms_exchange.md)
- [MS SQL Server](https://github.com/netdata/go.d.plugin/blob/master/modules/windows/integrations/ms_sql_server.md)
- [NET Framework](https://github.com/netdata/go.d.plugin/blob/master/modules/windows/integrations/net_framework.md)
- [Windows](https://github.com/netdata/go.d.plugin/blob/master/modules/windows/integrations/windows.md)

View File

@@ -67,7 +67,7 @@ You can enable/disable of the collection modules supported by `go.d`, `python.d`
configuration file of that orchestrator. For example, you can change the behavior of the Go orchestrator, or any of its
collectors, by editing `go.d.conf`.
Use `edit-config` from your [Netdata config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory)
Use `edit-config` from your [Netdata config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration.md#the-netdata-config-directory)
to open the orchestrator primary configuration file:
```bash
@@ -105,7 +105,7 @@ and open its documentation. Some software has collectors written in multiple lan
pick the collector written in Go.
Use `edit-config` from your
[Netdata config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory)
[Netdata config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration.md#the-netdata-config-directory)
to open a collector's configuration file. For example, edit the Nginx collector with the following:
```bash

View File

@@ -3,7 +3,7 @@ custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/
meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/apps.plugin/metadata.yaml"
sidebar_label: "Applications"
learn_status: "Published"
learn_rel_path: "Data Collection/Processes and System Services"
learn_rel_path: "Collecting Metrics/Processes and System Services"
most_popular: False
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->

View File

@@ -3,7 +3,7 @@ custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/
meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/apps.plugin/metadata.yaml"
sidebar_label: "User Groups"
learn_status: "Published"
learn_rel_path: "Data Collection/Processes and System Services"
learn_rel_path: "Collecting Metrics/Processes and System Services"
most_popular: False
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->

View File

@@ -3,7 +3,7 @@ custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/
meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/apps.plugin/metadata.yaml"
sidebar_label: "Users"
learn_status: "Published"
learn_rel_path: "Data Collection/Processes and System Services"
learn_rel_path: "Collecting Metrics/Processes and System Services"
most_popular: False
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->

View File

@@ -3,7 +3,7 @@ custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/
meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/cgroups.plugin/metadata.yaml"
sidebar_label: "Containers"
learn_status: "Published"
learn_rel_path: "Data Collection/Containers and VMs"
learn_rel_path: "Collecting Metrics/Containers and VMs"
most_popular: True
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->

View File

@@ -3,7 +3,7 @@ custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/
meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/cgroups.plugin/metadata.yaml"
sidebar_label: "Kubernetes Containers"
learn_status: "Published"
learn_rel_path: "Data Collection/Kubernetes"
learn_rel_path: "Collecting Metrics/Kubernetes"
most_popular: True
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->

View File

@@ -3,7 +3,7 @@ custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/
meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/cgroups.plugin/metadata.yaml"
sidebar_label: "Libvirt Containers"
learn_status: "Published"
learn_rel_path: "Data Collection/Containers and VMs"
learn_rel_path: "Collecting Metrics/Containers and VMs"
most_popular: True
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->

View File

@@ -3,7 +3,7 @@ custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/
meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/cgroups.plugin/metadata.yaml"
sidebar_label: "LXC Containers"
learn_status: "Published"
learn_rel_path: "Data Collection/Containers and VMs"
learn_rel_path: "Collecting Metrics/Containers and VMs"
most_popular: True
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->

View File

@@ -3,7 +3,7 @@ custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/
meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/cgroups.plugin/metadata.yaml"
sidebar_label: "oVirt Containers"
learn_status: "Published"
learn_rel_path: "Data Collection/Containers and VMs"
learn_rel_path: "Collecting Metrics/Containers and VMs"
most_popular: True
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->

View File

@@ -3,7 +3,7 @@ custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/
meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/cgroups.plugin/metadata.yaml"
sidebar_label: "Proxmox Containers"
learn_status: "Published"
learn_rel_path: "Data Collection/Containers and VMs"
learn_rel_path: "Collecting Metrics/Containers and VMs"
most_popular: True
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->

View File

@@ -3,7 +3,7 @@ custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/
meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/cgroups.plugin/metadata.yaml"
sidebar_label: "Systemd Services"
learn_status: "Published"
learn_rel_path: "Data Collection/Systemd"
learn_rel_path: "Collecting Metrics/Systemd"
most_popular: True
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->

View File

@@ -3,7 +3,7 @@ custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/
meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/cgroups.plugin/metadata.yaml"
sidebar_label: "Virtual Machines"
learn_status: "Published"
learn_rel_path: "Data Collection/Containers and VMs"
learn_rel_path: "Collecting Metrics/Containers and VMs"
most_popular: True
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->

View File

@@ -3,7 +3,7 @@ custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/
meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/charts.d.plugin/ap/metadata.yaml"
sidebar_label: "Access Points"
learn_status: "Published"
learn_rel_path: "Data Collection/Linux Systems/Network"
learn_rel_path: "Collecting Metrics/Linux Systems/Network"
most_popular: False
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
@@ -101,7 +101,7 @@ The configuration file name for this integration is `charts.d/ap.conf`.
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata

View File

@@ -3,7 +3,7 @@ custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/
meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/charts.d.plugin/apcupsd/metadata.yaml"
sidebar_label: "APC UPS"
learn_status: "Published"
learn_rel_path: "Data Collection/UPS"
learn_rel_path: "Collecting Metrics/UPS"
most_popular: False
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
@@ -119,7 +119,7 @@ The configuration file name for this integration is `charts.d/apcupsd.conf`.
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata

View File

@@ -3,7 +3,7 @@ custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/
meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/charts.d.plugin/libreswan/metadata.yaml"
sidebar_label: "Libreswan"
learn_status: "Published"
learn_rel_path: "Data Collection/VPNs"
learn_rel_path: "Collecting Metrics/VPNs"
most_popular: False
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
@@ -116,7 +116,7 @@ The configuration file name for this integration is `charts.d/libreswan.conf`.
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata

View File

@@ -3,7 +3,7 @@ custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/
meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/charts.d.plugin/opensips/metadata.yaml"
sidebar_label: "OpenSIPS"
learn_status: "Published"
learn_rel_path: "Data Collection/Telephony Servers"
learn_rel_path: "Collecting Metrics/Telephony Servers"
most_popular: False
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
@@ -112,7 +112,7 @@ The configuration file name for this integration is `charts.d/opensips.conf`.
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata

View File

@@ -3,7 +3,7 @@ custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/
meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/charts.d.plugin/sensors/metadata.yaml"
sidebar_label: "Linux Sensors (sysfs)"
learn_status: "Published"
learn_rel_path: "Data Collection/Hardware Devices and Sensors"
learn_rel_path: "Collecting Metrics/Hardware Devices and Sensors"
most_popular: False
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
@@ -95,7 +95,7 @@ If [using our official native DEB/RPM packages](https://github.com/netdata/netda
#### Enable the sensors collector
The `sensors` collector is disabled by default. To enable it, use `edit-config` from the Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is typically at `/etc/netdata`, to edit the `charts.d.conf` file.
The `sensors` collector is disabled by default. To enable it, use `edit-config` from the Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration.md), which is typically at `/etc/netdata`, to edit the `charts.d.conf` file.
```bash
cd /etc/netdata # Replace this path with your Netdata config directory, if different
@@ -114,7 +114,7 @@ The configuration file name for this integration is `charts.d/sensors.conf`.
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata

View File

@@ -50,7 +50,7 @@ modules:
If [using our official native DEB/RPM packages](https://github.com/netdata/netdata/blob/master/packaging/installer/UPDATE.md#determine-which-installation-method-you-used), make sure `netdata-plugin-chartsd` is installed.
- title: "Enable the sensors collector"
description: |
The `sensors` collector is disabled by default. To enable it, use `edit-config` from the Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is typically at `/etc/netdata`, to edit the `charts.d.conf` file.
The `sensors` collector is disabled by default. To enable it, use `edit-config` from the Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration.md), which is typically at `/etc/netdata`, to edit the `charts.d.conf` file.
```bash
cd /etc/netdata # Replace this path with your Netdata config directory, if different

View File

@@ -3,7 +3,7 @@ custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/
meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/cups.plugin/metadata.yaml"
sidebar_label: "CUPS"
learn_status: "Published"
learn_rel_path: "Data Collection/Hardware Devices and Sensors"
learn_rel_path: "Collecting Metrics/Hardware Devices and Sensors"
most_popular: False
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
@@ -116,7 +116,7 @@ The file format is a modified INI syntax. The general structure is:
option3 = some third value
```
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata

View File

@@ -3,7 +3,7 @@ custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/
meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/debugfs.plugin/metadata.yaml"
sidebar_label: "Linux ZSwap"
learn_status: "Published"
learn_rel_path: "Data Collection/Linux Systems/Memory"
learn_rel_path: "Collecting Metrics/Linux Systems/Memory"
most_popular: False
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
@@ -113,7 +113,7 @@ The file format is a modified INI syntax. The general structure is:
option3 = some third value
```
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata

View File

@@ -3,7 +3,7 @@ custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/
meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/debugfs.plugin/metadata.yaml"
sidebar_label: "Power Capping"
learn_status: "Published"
learn_rel_path: "Data Collection/Linux Systems/Kernel"
learn_rel_path: "Collecting Metrics/Linux Systems/Kernel"
most_popular: False
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
@@ -107,7 +107,7 @@ The file format is a modified INI syntax. The general structure is:
option3 = some third value
```
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata

View File

@@ -3,7 +3,7 @@ custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/
meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/debugfs.plugin/metadata.yaml"
sidebar_label: "System Memory Fragmentation"
learn_status: "Published"
learn_rel_path: "Data Collection/Linux Systems/Memory"
learn_rel_path: "Collecting Metrics/Linux Systems/Memory"
most_popular: False
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
@@ -111,7 +111,7 @@ The file format is a modified INI syntax. The general structure is:
option3 = some third value
```
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata

View File

@@ -3,7 +3,7 @@ custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/
meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/diskspace.plugin/metadata.yaml"
sidebar_label: "Disk space"
learn_status: "Published"
learn_rel_path: "Data Collection/Linux Systems"
learn_rel_path: "Collecting Metrics/Linux Systems"
most_popular: False
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
@@ -109,7 +109,7 @@ The file format is a modified INI syntax. The general structure is:
option3 = some third value
```
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata

View File

@@ -3,7 +3,7 @@ custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/
meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/ebpf.plugin/metadata.yaml"
sidebar_label: "eBPF Cachestat"
learn_status: "Published"
learn_rel_path: "Data Collection/eBPF"
learn_rel_path: "Collecting Metrics/eBPF"
most_popular: False
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
@@ -146,7 +146,7 @@ The configuration file name for this integration is `ebpf.d/cachestat.conf`.
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata

View File

@@ -3,7 +3,7 @@ custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/
meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/ebpf.plugin/metadata.yaml"
sidebar_label: "eBPF DCstat"
learn_status: "Published"
learn_rel_path: "Data Collection/eBPF"
learn_rel_path: "Collecting Metrics/eBPF"
most_popular: False
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
@@ -144,7 +144,7 @@ The configuration file name for this integration is `ebpf.d/dcstat.conf`.
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata

View File

@@ -3,7 +3,7 @@ custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/
meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/ebpf.plugin/metadata.yaml"
sidebar_label: "eBPF Disk"
learn_status: "Published"
learn_rel_path: "Data Collection/eBPF"
learn_rel_path: "Collecting Metrics/eBPF"
most_popular: False
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
@@ -110,7 +110,7 @@ The configuration file name for this integration is `ebpf.d/disk.conf`.
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata

View File

@@ -3,7 +3,7 @@ custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/
meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/ebpf.plugin/metadata.yaml"
sidebar_label: "eBPF Filedescriptor"
learn_status: "Published"
learn_rel_path: "Data Collection/eBPF"
learn_rel_path: "Collecting Metrics/eBPF"
most_popular: False
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
@@ -144,7 +144,7 @@ The configuration file name for this integration is `ebpf.d/fd.conf`.
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata

View File

@@ -3,7 +3,7 @@ custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/
meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/ebpf.plugin/metadata.yaml"
sidebar_label: "eBPF Filesystem"
learn_status: "Published"
learn_rel_path: "Data Collection/eBPF"
learn_rel_path: "Collecting Metrics/eBPF"
most_popular: False
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
@@ -131,7 +131,7 @@ The configuration file name for this integration is `ebpf.d/filesystem.conf`.
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata

View File

@@ -3,7 +3,7 @@ custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/
meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/ebpf.plugin/metadata.yaml"
sidebar_label: "eBPF Hardirq"
learn_status: "Published"
learn_rel_path: "Data Collection/eBPF"
learn_rel_path: "Collecting Metrics/eBPF"
most_popular: False
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
@@ -110,7 +110,7 @@ The configuration file name for this integration is `ebpf.d/hardirq.conf`.
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata

View File

@@ -3,7 +3,7 @@ custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/
meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/ebpf.plugin/metadata.yaml"
sidebar_label: "eBPF MDflush"
learn_status: "Published"
learn_rel_path: "Data Collection/eBPF"
learn_rel_path: "Collecting Metrics/eBPF"
most_popular: False
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
@@ -105,7 +105,7 @@ The configuration file name for this integration is `ebpf.d/mdflush.conf`.
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata

View File

@@ -3,7 +3,7 @@ custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/
meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/ebpf.plugin/metadata.yaml"
sidebar_label: "eBPF Mount"
learn_status: "Published"
learn_rel_path: "Data Collection/eBPF"
learn_rel_path: "Collecting Metrics/eBPF"
most_popular: False
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
@@ -111,7 +111,7 @@ The configuration file name for this integration is `ebpf.d/mount.conf`.
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata

View File

@@ -3,7 +3,7 @@ custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/
meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/ebpf.plugin/metadata.yaml"
sidebar_label: "eBPF OOMkill"
learn_status: "Published"
learn_rel_path: "Data Collection/eBPF"
learn_rel_path: "Collecting Metrics/eBPF"
most_popular: False
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
@@ -127,7 +127,7 @@ The configuration file name for this integration is `ebpf.d/oomkill.conf`.
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata

View File

@@ -3,7 +3,7 @@ custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/
meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/ebpf.plugin/metadata.yaml"
sidebar_label: "eBPF Process"
learn_status: "Published"
learn_rel_path: "Data Collection/eBPF"
learn_rel_path: "Collecting Metrics/eBPF"
most_popular: False
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->

View File

@@ -3,7 +3,7 @@ custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/
meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/ebpf.plugin/metadata.yaml"
sidebar_label: "eBPF Processes"
learn_status: "Published"
learn_rel_path: "Data Collection/eBPF"
learn_rel_path: "Collecting Metrics/eBPF"
most_popular: False
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
@@ -154,7 +154,7 @@ The configuration file name for this integration is `ebpf.d/process.conf`.
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata

View File

@@ -3,7 +3,7 @@ custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/
meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/ebpf.plugin/metadata.yaml"
sidebar_label: "eBPF SHM"
learn_status: "Published"
learn_rel_path: "Data Collection/eBPF"
learn_rel_path: "Collecting Metrics/eBPF"
most_popular: False
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
@@ -148,7 +148,7 @@ The configuration file name for this integration is `ebpf.d/shm.conf`.
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata

View File

@@ -3,7 +3,7 @@ custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/
meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/ebpf.plugin/metadata.yaml"
sidebar_label: "eBPF Socket"
learn_status: "Published"
learn_rel_path: "Data Collection/eBPF"
learn_rel_path: "Collecting Metrics/eBPF"
most_popular: False
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
@@ -165,7 +165,7 @@ The configuration file name for this integration is `ebpf.d/network.conf`.
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata

View File

@@ -3,7 +3,7 @@ custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/
meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/ebpf.plugin/metadata.yaml"
sidebar_label: "eBPF SoftIRQ"
learn_status: "Published"
learn_rel_path: "Data Collection/eBPF"
learn_rel_path: "Collecting Metrics/eBPF"
most_popular: False
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
@@ -110,7 +110,7 @@ The configuration file name for this integration is `ebpf.d/softirq.conf`.
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata


@@ -3,7 +3,7 @@ custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/
meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/ebpf.plugin/metadata.yaml"
sidebar_label: "eBPF SWAP"
learn_status: "Published"
-learn_rel_path: "Data Collection/eBPF"
+learn_rel_path: "Collecting Metrics/eBPF"
most_popular: False
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
@@ -137,7 +137,7 @@ The configuration file name for this integration is `ebpf.d/swap.conf`.
You can edit the configuration file using the `edit-config` script from the
-Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata


@@ -3,7 +3,7 @@ custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/
meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/ebpf.plugin/metadata.yaml"
sidebar_label: "eBPF Sync"
learn_status: "Published"
-learn_rel_path: "Data Collection/eBPF"
+learn_rel_path: "Collecting Metrics/eBPF"
most_popular: False
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->
@@ -118,7 +118,7 @@ The configuration file name for this integration is `ebpf.d/sync.conf`.
You can edit the configuration file using the `edit-config` script from the
-Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata

Some files were not shown because too many files have changed in this diff.
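Every hunk in this excerpt applies the same two mechanical substitutions. As a rough illustration only (a hypothetical sketch, not the actual integrations pipeline, which regenerates these documents from each collector's `metadata.yaml`), the rewrite amounts to:

```python
# Hypothetical sketch of the two substitutions repeated across the hunks
# above; the real files are regenerated by the integrations pipeline from
# each collector's metadata.yaml, not edited with a script like this.
REWRITES = [
    # old config-directory link target -> new one
    ("docs/configure/nodes.md#the-netdata-config-directory",
     "docs/netdata-agent/configuration.md#the-netdata-config-directory"),
    # old Learn category -> new category name
    ('learn_rel_path: "Data Collection', 'learn_rel_path: "Collecting Metrics'),
]

def rewrite(text: str) -> str:
    """Apply every substitution pair to the given document text."""
    for old, new in REWRITES:
        text = text.replace(old, new)
    return text

print(rewrite('learn_rel_path: "Data Collection/eBPF"'))
# -> learn_rel_path: "Collecting Metrics/eBPF"
```

The same pattern covers all four integration docs shown here (Socket, SoftIRQ, SWAP, Sync), since only the category name and link target change.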