diff --git a/README.md b/README.md index 99064f5069..a25438dbdb 100644 --- a/README.md +++ b/README.md @@ -22,7 +22,7 @@ It gives you the ability to automatically identify processes, collect and store [Netdata Cloud](https://www.netdata.cloud) is a hosted web interface that gives you **Free**, real-time visibility into your **Entire Infrastructure** with secure access to your Netdata Agents. It provides an ability to automatically route your requests to the most relevant agents to display your metrics, based on the stored metadata (Agents topology, what metrics are collected on specific Agents as well as the retention information for each metric). -It gives you some extra features, like [Metric Correlations](https://learn.netdata.cloud/docs/cloud/insights/metric-correlations), [Anomaly Advisor](https://learn.netdata.cloud/docs/cloud/insights/anomaly-advisor), [anomaly rates on every chart](https://blog.netdata.cloud/anomaly-rate-in-every-chart/) and much more. +It gives you some extra features, like [Metric Correlations](https://github.com/netdata/netdata/blob/master/docs/cloud/insights/metric-correlations.md), [Anomaly Advisor](https://github.com/netdata/netdata/blob/master/docs/cloud/insights/anomaly-advisor.mdx), [anomaly rates on every chart](https://blog.netdata.cloud/anomaly-rate-in-every-chart/) and much more. Try it for yourself right now by checking out the Netdata Cloud [demo space](https://app.netdata.cloud/spaces/netdata-demo/rooms/all-nodes/overview) (No sign up or login needed). @@ -77,7 +77,7 @@ Here's what you can expect from Netdata: synchronize charts as you pan through time, zoom in on anomalies, and more. - **Visual anomaly detection**: Our UI/UX emphasizes the relationships between charts to help you detect the root cause of anomalies. -- **Machine learning (ML) features out of the box**: Unsupervised ML-based [anomaly detection](https://learn.netdata.cloud/docs/cloud/insights/anomaly-advisor), every second, every metric, zero-config! [Metric correlations](https://learn.netdata.cloud/docs/cloud/insights/metric-correlations) to help with short-term change detection. And other [additional](https://learn.netdata.cloud/guides/monitor/anomaly-detection) ML-based features to help make your life easier. +- **Machine learning (ML) features out of the box**: Unsupervised ML-based [anomaly detection](https://github.com/netdata/netdata/blob/master/docs/cloud/insights/anomaly-advisor.mdx), every second, every metric, zero-config! [Metric correlations](https://github.com/netdata/netdata/blob/master/docs/cloud/insights/metric-correlations.md) to help with short-term change detection. And other [additional](https://github.com/netdata/netdata/blob/master/docs/guides/monitor/anomaly-detection.md) ML-based features to help make your life easier. - **Scales to infinity**: You can install it on all your servers, containers, VMs, and IoT devices. Metrics are not centralized by default, so there is no limit. - **Several operating modes**: Autonomous host monitoring (the default), headless data collector, forwarding proxy, @@ -88,17 +88,17 @@ Netdata works with tons of applications, notifications platforms, and other time - **300+ system, container, and application endpoints**: Collectors autodetect metrics from default endpoints and immediately visualize them into meaningful charts designed for troubleshooting. See [everything we - support](https://learn.netdata.cloud/docs/agent/collectors/collectors). + support](https://github.com/netdata/netdata/blob/master/collectors/COLLECTORS.md). 
- **20+ notification platforms**: Netdata's health watchdog sends warning and critical alarms to your [favorite -  platform](https://learn.netdata.cloud/docs/monitor/enable-notifications) to inform you of anomalies just seconds +  platform](https://github.com/netdata/netdata/blob/master/docs/monitor/enable-notifications.md) to inform you of anomalies just seconds   after they affect your node. - **30+ external time-series databases**: Export resampled metrics as they're collected to other [local- and -  Cloud-based databases](https://learn.netdata.cloud/docs/export/external-databases) for best-in-class +  Cloud-based databases](https://github.com/netdata/netdata/blob/master/docs/export/external-databases.md) for best-in-class   interoperability. > 💡 **Want to leverage the monitoring power of Netdata across entire infrastructure**? View metrics from > any number of distributed nodes in a single interface and unlock even more -> [features](https://learn.netdata.cloud/docs/overview/why-netdata) with [Netdata +> [features](https://github.com/netdata/netdata/blob/master/docs/overview/why-netdata.md) with [Netdata > Cloud](https://learn.netdata.cloud/docs/overview/what-is-netdata#netdata-cloud). ## Get Netdata @@ -117,7 +117,7 @@ Netdata works with tons of applications, notifications platforms, and other time ### Infrastructure view -Due to the distributed nature of the Netdata ecosystem, it is recommended to setup not only one Netdata Agent on your production system, but also an additional Netdata Agent acting as a [Parent](https://learn.netdata.cloud/docs/agent/streaming). A local Netdata Agent (child), without any database or alarms, collects metrics and sends them to another Netdata Agent (parent). The same parent can collect data for any number of child nodes and serves as a centralized health check engine for each child by triggering alerts on their behalf. +Due to the distributed nature of the Netdata ecosystem, it is recommended to set up not only one Netdata Agent on your production system, but also an additional Netdata Agent acting as a [Parent](https://github.com/netdata/netdata/blob/master/streaming/README.md). A local Netdata Agent (child), without any database or alarms, collects metrics and sends them to another Netdata Agent (parent). The same parent can collect data for any number of child nodes and serves as a centralized health check engine for each child by triggering alerts on their behalf. ![Netdata Cloud](https://user-images.githubusercontent.com/423236/205926887-43024984-6d38-46ad-96cb-d0c388117c6d.png) @@ -127,7 +127,7 @@ Community version is free to use forever. No restriction on number of nodes, clu #### Claiming existing Agents -You can easily [connect (claim)](https://learn.netdata.cloud/docs/agent/claim) your existing Agents to the Cloud to unlock features for free and to find weaknesses before they turn into outages. +You can easily [connect (claim)](https://github.com/netdata/netdata/blob/master/claim/README.md) your existing Agents to the Cloud to unlock features for free and to find weaknesses before they turn into outages. 
### Single Node view @@ -138,7 +138,7 @@ installation script](https://learn.netdata.cloud/docs/agent/packaging/installer/ and builds all dependencies, including those required to connect to [Netdata Cloud](https://netdata.cloud/cloud) if you choose, and enables [automatic nightly updates](https://learn.netdata.cloud/docs/agent/packaging/installer#nightly-vs-stable-releases) and [anonymous -statistics](https://learn.netdata.cloud/docs/agent/anonymous-statistics). +statistics](https://github.com/netdata/netdata/blob/master/docs/anonymous-statistics.md). ```bash wget -O /tmp/netdata-kickstart.sh https://my-netdata.io/kickstart.sh && sh /tmp/netdata-kickstart.sh @@ -149,7 +149,7 @@ To view the Netdata dashboard, navigate to `http://localhost:19999`, or `http:// ### Docker You can also try out Netdata's capabilities in a [Docker -container](https://learn.netdata.cloud/docs/agent/packaging/docker/): +container](https://github.com/netdata/netdata/blob/master/packaging/docker/README.md): ```bash docker run -d --name=netdata \ @@ -173,16 +173,16 @@ To view the Netdata dashboard, navigate to `http://localhost:19999`, or `http:// ### Other operating systems See our documentation for [additional operating -systems](/packaging/installer/README.md#have-a-different-operating-system-or-want-to-try-another-method), including -[Kubernetes](/packaging/installer/methods/kubernetes.md), [`.deb`/`.rpm` -packages](/packaging/installer/methods/kickstart.md#native-packages), and more. +systems](https://github.com/netdata/netdata/blob/master/packaging/installer/README.md#have-a-different-operating-system-or-want-to-try-another-method), including +[Kubernetes](https://github.com/netdata/netdata/blob/master/packaging/installer/methods/kubernetes.md), [`.deb`/`.rpm` +packages](https://github.com/netdata/netdata/blob/master/packaging/installer/methods/kickstart.md#native-packages), and more. ### Post-installation -When you're finished with installation, check out our [single-node](/docs/quickstart/single-node.md) or -[infrastructure](/docs/quickstart/infrastructure.md) monitoring quickstart guides based on your use case. +When you're finished with installation, check out our [single-node](https://github.com/netdata/netdata/blob/master/docs/quickstart/single-node.md) or +[infrastructure](https://github.com/netdata/netdata/blob/master/docs/quickstart/infrastructure.md) monitoring quickstart guides based on your use case. -Or, skip straight to [configuring the Netdata Agent](/docs/configure/nodes.md). +Or, skip straight to [configuring the Netdata Agent](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md). Read through Netdata's [documentation](https://learn.netdata.cloud/docs), which is structured based on actions and solutions, to enable features like health monitoring, alarm notifications, long-term metrics storage, exporting to @@ -215,7 +215,7 @@ to collect metrics, troubleshoot via charts, export to external databases, and m ## Community -Netdata is an inclusive open-source project and community. Please read our [Code of Conduct](https://learn.netdata.cloud/contribute/code-of-conduct). +Netdata is an inclusive open-source project and community. Please read our [Code of Conduct](https://github.com/netdata/.github/blob/main/CODE_OF_CONDUCT.md). Find most of the Netdata team in our [community forums](https://community.netdata.cloud). It's the best place to ask questions, find resources, and engage with passionate professionals. 
The team is also available and active in our [Discord](https://discord.com/invite/mPZ6WZKKG2) too. @@ -235,18 +235,18 @@ You can also find Netdata on: Contributions are the lifeblood of open-source projects. While we continue to invest in and improve Netdata, we need help to democratize monitoring! -- Read our [Contributing Guide](https://learn.netdata.cloud/contribute/handbook), which contains all the information you need to contribute to Netdata, such as improving our documentation, engaging in the community, and developing new features. We've made it as frictionless as possible, but if you need help, just ping us on our community forums! +- Read our [Contributing Guide](https://github.com/netdata/.github/blob/main/CONTRIBUTING.md), which contains all the information you need to contribute to Netdata, such as improving our documentation, engaging in the community, and developing new features. We've made it as frictionless as possible, but if you need help, just ping us on our community forums! - We have a whole category dedicated to contributing and extending Netdata on our [community forums](https://community.netdata.cloud/c/agent-development/9) - Found a bug? Open a [GitHub issue](https://github.com/netdata/netdata/issues/new?assignees=&labels=bug%2Cneeds+triage&template=BUG_REPORT.yml&title=%5BBug%5D%3A+). - View our [Security Policy](https://github.com/netdata/netdata/security/policy). -Package maintainers should read the guide on [building Netdata from source](/packaging/installer/methods/source.md) for +Package maintainers should read the guide on [building Netdata from source](https://github.com/netdata/netdata/blob/master/packaging/installer/methods/source.md) for instructions on building each Netdata component from source and preparing a package. ## License -The Netdata Agent is [GPLv3+](/LICENSE). Netdata re-distributes other open-source tools and libraries. Please check the -[third party licenses](/REDISTRIBUTED.md). +The Netdata Agent is [GPLv3+](https://github.com/netdata/netdata/blob/master/LICENSE). Netdata re-distributes other open-source tools and libraries. Please check the +[third party licenses](https://github.com/netdata/netdata/blob/master/REDISTRIBUTED.md). ## Is it any good? diff --git a/aclk/README.md b/aclk/README.md index 9d6615ffe7..5b338dc2eb 100644 --- a/aclk/README.md +++ b/aclk/README.md @@ -29,8 +29,8 @@ this is not an option in your case always verify the current domain resolution ( ::: For a guide to connecting a node using the ACLK, plus additional troubleshooting and reference information, read our [get -started with Cloud](https://learn.netdata.cloud/docs/cloud/get-started) guide or the full [connect to Cloud -documentation](/claim/README.md). +started with Cloud](https://github.com/netdata/netdata/blob/master/docs/cloud/get-started.mdx) guide or the full [connect to Cloud +documentation](https://github.com/netdata/netdata/blob/master/claim/README.md). ## Data privacy [Data privacy](https://netdata.cloud/privacy/) is very important to us. We firmly believe that your data belongs to @@ -41,7 +41,7 @@ The data passes through our systems, but it isn't stored. However, to be able to offer the stunning visualizations and advanced functionality of Netdata Cloud, it does store a limited number of _metadata_. -Read more about [Data privacy in the Netdata Cloud](https://learn.netdata.cloud/docs/cloud/data-privacy) in the documentation. 
+Read more about [Data privacy in the Netdata Cloud](https://github.com/netdata/netdata/blob/master/docs/cloud/data-privacy.mdx) in the documentation. ## Enable and configure the ACLK @@ -57,7 +57,7 @@ configuration uses two settings: ``` If your Agent needs to use a proxy to access the internet, you must [set up a proxy for -connecting to cloud](/claim/README.md#connect-through-a-proxy). +connecting to cloud](https://github.com/netdata/netdata/blob/master/claim/README.md#connect-through-a-proxy). You can configure following keys in the `netdata.conf` section `[cloud]`: ``` @@ -76,8 +76,8 @@ You have two options if you prefer to disable the ACLK and not use Netdata Cloud ### Disable at installation You can pass the `--disable-cloud` parameter to the Agent installation when using a kickstart script -([kickstart.sh](/packaging/installer/methods/kickstart.md), or a [manual installation from -Git](/packaging/installer/methods/manual.md). +([kickstart.sh](https://github.com/netdata/netdata/blob/master/packaging/installer/methods/kickstart.md), or a [manual installation from +Git](https://github.com/netdata/netdata/blob/master/packaging/installer/methods/manual.md). When you pass this parameter, the installer does not download or compile any extra libraries. Once running, the Agent kills the thread responsible for the ACLK and connecting behavior, and behaves as though the ACLK, and thus Netdata Cloud, @@ -131,12 +131,12 @@ Restart your Agent to disable the ACLK. ### Re-enable the ACLK If you first disable the ACLK and any Cloud functionality and then decide you would like to use Cloud, you must either -[reinstall Netdata](/packaging/installer/REINSTALL.md) with Cloud enabled or change the runtime setting in your +[reinstall Netdata](https://github.com/netdata/netdata/blob/master/packaging/installer/REINSTALL.md) with Cloud enabled or change the runtime setting in your `cloud.conf` file. If you passed `--disable-cloud` to `netdata-installer.sh` during installation, you must -[reinstall](/packaging/installer/REINSTALL.md) your Agent. Use the same method as before, but pass `--require-cloud` to -the installer. When installation finishes you can [connect your node](/claim/README.md#how-to-connect-a-node). +[reinstall](https://github.com/netdata/netdata/blob/master/packaging/installer/REINSTALL.md) your Agent. Use the same method as before, but pass `--require-cloud` to +the installer. When installation finishes you can [connect your node](https://github.com/netdata/netdata/blob/master/claim/README.md#how-to-connect-a-node). If you changed the runtime setting in your `var/lib/netdata/cloud.d/cloud.conf` file, edit the file again and change `enabled` to `yes`: @@ -146,6 +146,6 @@ If you changed the runtime setting in your `var/lib/netdata/cloud.d/cloud.conf` enabled = yes ``` -Restart your Agent and [connect your node](/claim/README.md#how-to-connect-a-node). +Restart your Agent and [connect your node](https://github.com/netdata/netdata/blob/master/claim/README.md#how-to-connect-a-node). diff --git a/claim/README.md b/claim/README.md index 26d73ec382..f1d893eb23 100644 --- a/claim/README.md +++ b/claim/README.md @@ -12,10 +12,10 @@ learn_rel_path: "Setup" You can securely connect a Netdata Agent, running on a distributed node, to Netdata Cloud. A Space's administrator creates a **claiming token**, which is used to add an Agent to their Space via the [Agent-Cloud link -(ACLK)](/aclk/README.md). +(ACLK)](https://github.com/netdata/netdata/blob/master/aclk/README.md). 
Are you just starting out with Netdata Cloud? See our [get started with -Cloud](https://learn.netdata.cloud/docs/cloud/get-started) guide for a walkthrough of the process and simplified +Cloud](https://github.com/netdata/netdata/blob/master/docs/cloud/cloud.mdx) guide for a walkthrough of the process and simplified instructions. When connecting an agent (also referred to as a node) to Netdata Cloud, you must complete a verification process that proves you have some level of authorization to manage the node itself. This verification is a security feature that helps prevent unauthorized users from seeing the data on your node. @@ -26,13 +26,13 @@ Netdata Cloud. > The connection process ensures no third party can add your node, and then view your node's metrics, in a Cloud account, > Space, or War Room that you did not authorize. -By connecting a node, you opt-in to sending data from your Agent to Netdata Cloud via the [ACLK](/aclk/README.md). This +By connecting a node, you opt-in to sending data from your Agent to Netdata Cloud via the [ACLK](https://github.com/netdata/netdata/blob/master/aclk/README.md). This data is encrypted by TLS while it is in transit. We use the RSA keypair created during the connection process to authenticate the identity of the Netdata Agent when it connects to the Cloud. While the data does flow through Netdata Cloud servers on its way from Agents to the browser, we do not store or log it. You can connect a node during the Netdata Cloud onboarding process, or after you created a Space by clicking on **Connect -Nodes** in the [Spaces management area](https://learn.netdata.cloud/docs/cloud/spaces#manage-spaces). +Nodes** in the [Spaces management area](https://github.com/netdata/netdata/blob/master/docs/cloud/cloud.mdx#manage-spaces). There are two important notes regarding connecting nodes: @@ -46,7 +46,7 @@ There will be three main flows from where you might want to connect a node to Ne * when you are on an [ War Room](#empty-war-room) and you want to connect your first node * when you are at the [Manage Space](#manage-space-or-war-room) area and you select **Connect Nodes** to connect a node, coming from Manage Space or Manage War Room -* when you are on the [Nodes view page](https://learn.netdata.cloud/docs/cloud/visualize/nodes) and want to connect a node - this process falls into the [Manage Space](#manage-space-or-war-room) flow +* when you are on the [Nodes view page](https://github.com/netdata/netdata/blob/master/docs/cloud/visualize/nodes.md) and want to connect a node - this process falls into the [Manage Space](#manage-space-or-war-room) flow Please note that only the administrators of a Space in Netdata Cloud can view the claiming token and accompanying script, generated by Netdata Cloud, to trigger the connection process. @@ -70,11 +70,11 @@ finished onboarding. To connect a node, select which War Rooms you want to add this node to with the dropdown, then copy and paste the script given by Netdata Cloud into your node's terminal. -When coming from [Nodes view page](https://learn.netdata.cloud/docs/cloud/visualize/nodes) the room parameter is already defined to current War Room. +When coming from [Nodes view page](https://github.com/netdata/netdata/blob/master/docs/cloud/visualize/nodes.md) the room parameter is already defined to current War Room. 
### Connect an agent running in Linux -If you want to connect a node that is running on a Linux environment, the script that will be provided to you by Netdata Cloud is the [kickstart](/packaging/installer/README.md#automatic-one-line-installation-script) which will install the Netdata Agent on your node, if it isn't already installed, and connect the node to Netdata Cloud. It should be similar to: +If you want to connect a node that is running on a Linux environment, the script that will be provided to you by Netdata Cloud is the [kickstart](https://github.com/netdata/netdata/blob/master/packaging/installer/README.md#automatic-one-line-installation-script) which will install the Netdata Agent on your node, if it isn't already installed, and connect the node to Netdata Cloud. It should be similar to: ``` wget -O /tmp/netdata-kickstart.sh https://my-netdata.io/kickstart.sh && sh /tmp/netdata-kickstart.sh --claim-token TOKEN --claim-rooms ROOM1,ROOM2 --claim-url https://api.netdata.cloud @@ -84,7 +84,7 @@ the node in your Space after 60 seconds, see the [troubleshooting information](# Please note that to run it you will either need to have root privileges or run it with the user that is running the agent, more details on the [Connect an agent without root privileges](#connect-an-agent-without-root-privileges) section. -For more details on what are the extra parameters `claim-token`, `claim-rooms` and `claim-url` please refer to [Connect node to Netdata Cloud during installation](/packaging/installer/methods/kickstart.md#connect-node-to-netdata-cloud-during-installation). +For more details on what are the extra parameters `claim-token`, `claim-rooms` and `claim-url` please refer to [Connect node to Netdata Cloud during installation](https://github.com/netdata/netdata/blob/master/packaging/installer/methods/kickstart.md#connect-node-to-netdata-cloud-during-installation). ### Connect an agent without root privileges @@ -118,7 +118,7 @@ connected on startup or restart. For the connection process to work, the contents of `/var/lib/netdata` _must_ be preserved across container restarts using a persistent volume. See our [recommended `docker run` and Docker Compose -examples](/packaging/docker/README.md#create-a-new-netdata-agent-container) for details. +examples](https://github.com/netdata/netdata/blob/master/packaging/docker/README.md#create-a-new-netdata-agent-container) for details. #### Known issues on older hosts with seccomp enabled @@ -289,7 +289,7 @@ you don't see the node in your Space after 60 seconds, see the [troubleshooting ### Connect an agent running in macOS -To connect a node that is running on a macOS environment the script that will be provided to you by Netdata Cloud is the [kickstart](/packaging/installer/methods/macos.md#install-netdata-with-our-automatic-one-line-installation-script) which will install the Netdata Agent on your node, if it isn't already installed, and connect the node to Netdata Cloud. It should be similar to: +To connect a node that is running on a macOS environment the script that will be provided to you by Netdata Cloud is the [kickstart](https://github.com/netdata/netdata/blob/master/packaging/installer/methods/macos.md#install-netdata-with-our-automatic-one-line-installation-script) which will install the Netdata Agent on your node, if it isn't already installed, and connect the node to Netdata Cloud. 
It should be similar to: ```bash curl https://my-netdata.io/kickstart.sh > /tmp/netdata-kickstart.sh && sh /tmp/netdata-kickstart.sh --install-prefix /usr/local/ --claim-token TOKEN --claim-rooms ROOM1,ROOM2 --claim-url https://api.netdata.cloud @@ -299,7 +299,7 @@ the node in your Space after 60 seconds, see the [troubleshooting information](# ### Connect a Kubernetes cluster's parent Netdata pod -Read our [Kubernetes installation](/packaging/installer/methods/kubernetes.md#connect-your-kubernetes-cluster-to-netdata-cloud) +Read our [Kubernetes installation](https://github.com/netdata/netdata/blob/master/packaging/installer/methods/kubernetes.md#connect-your-kubernetes-cluster-to-netdata-cloud) for details on connecting a parent Netdata pod. ### Connect through a proxy @@ -328,7 +328,7 @@ For example, a HTTP proxy setting may look like the following: proxy = http://proxy.example.com:1080 # With a URL ``` -You can now move on to connecting. When you connect with the [kickstart](/packaging/installer/README.md#automatic-one-line-installation-script) script, add the `--claim-proxy=` parameter and +You can now move on to connecting. When you connect with the [kickstart](https://github.com/netdata/netdata/blob/master/packaging/installer/README.md#automatic-one-line-installation-script) script, add the `--claim-proxy=` parameter and append the same proxy setting you added to `netdata.conf`. ```bash @@ -340,7 +340,7 @@ you don't see the node in your Space after 60 seconds, see the [troubleshooting ### Troubleshooting -If you're having trouble connecting a node, this may be because the [ACLK](/aclk/README.md) cannot connect to Cloud. +If you're having trouble connecting a node, this may be because the [ACLK](https://github.com/netdata/netdata/blob/master/aclk/README.md) cannot connect to Cloud. With the Netdata Agent running, visit `http://NODE:19999/api/v1/info` in your browser, replacing `NODE` with the IP address or hostname of your Agent. The returned JSON contains four keys that will be helpful to diagnose any issues you @@ -373,7 +373,7 @@ If you run the kickstart script and get the following error `Existing install ap If you are using an unsupported package, such as a third-party `.deb`/`.rpm` package provided by your distribution, please remove that package and reinstall using our [recommended kickstart -script](/docs/get-started.mdx#install-on-linux-with-one-line-installer). +script](https://github.com/netdata/netdata/blob/master/docs/get-started.mdx#install-on-linux-with-one-line-installer). #### kickstart: Failed to write new machine GUID @@ -393,7 +393,7 @@ if you installed Netdata to `/opt/netdata`, use `/opt/netdata/bin/netdata-claim. If you are using an unsupported package, such as a third-party `.deb`/`.rpm` package provided by your distribution, please remove that package and reinstall using our [recommended kickstart -script](/docs/get-started.mdx#install-on-linux-with-one-line-installer). +script](https://github.com/netdata/netdata/blob/master/docs/get-started.mdx#install-on-linux-with-one-line-installer). #### Connecting on older distributions (Ubuntu 14.04, Debian 8, CentOS 6) @@ -402,7 +402,7 @@ If you're running an older Linux distribution or one that has reached EOL, such versions of OpenSSL cannot perform [hostname validation](https://wiki.openssl.org/index.php/Hostname_validation), which helps securely encrypt SSL connections. 
-We recommend you reinstall Netdata with a [static build](/packaging/installer/methods/kickstart.md#static-builds), which uses an +We recommend you reinstall Netdata with a [static build](https://github.com/netdata/netdata/blob/master/packaging/installer/methods/kickstart.md#static-builds), which uses an up-to-date version of OpenSSL with hostname validation enabled. If you choose to continue using the outdated version of OpenSSL, your node will still connect to Netdata Cloud, albeit @@ -420,7 +420,7 @@ Additionally, check that the `enabled` setting in `var/lib/netdata/cloud.d/cloud enabled = true ``` -To fix this issue, reinstall Netdata using your [preferred method](/packaging/installer/README.md) and do not add the +To fix this issue, reinstall Netdata using your [preferred method](https://github.com/netdata/netdata/blob/master/packaging/installer/README.md) and do not add the `--disable-cloud` option. #### cloud-available is false / ACLK Available: No @@ -510,20 +510,20 @@ tool, and details about the files found in `cloud.d`. ### The `cloud.conf` file -This section defines how and whether your Agent connects to [Netdata Cloud](https://learn.netdata.cloud/docs/cloud/) -using the [ACLK](/aclk/README.md). +This section defines how and whether your Agent connects to [Netdata Cloud](https://github.com/netdata/netdata/blob/master/docs/cloud/cloud.mdx) +using the [ACLK](https://github.com/netdata/netdata/blob/master/aclk/README.md). | setting | default | info | |:-------------- |:------------------------- |:-------------------------------------------------------------------------------------------------------------------------------------- | | cloud base url | https://api.netdata.cloud | The URL for the Netdata Cloud web application. You should not change this. If you want to disable Cloud, change the `enabled` setting. | -| enabled | yes | The runtime option to disable the [Agent-Cloud link](/aclk/README.md) and prevent your Agent from connecting to Netdata Cloud. | +| enabled | yes | The runtime option to disable the [Agent-Cloud link](https://github.com/netdata/netdata/blob/master/aclk/README.md) and prevent your Agent from connecting to Netdata Cloud. | ### kickstart script -The best way to install Netdata and connect your nodes to Netdata Cloud is with our automatic one-line installation script, [kickstart](/packaging/installer/README.md#automatic-one-line-installation-script). This script will install the Netdata Agent, in case it isn't already installed, and connect your node to Netdata Cloud. +The best way to install Netdata and connect your nodes to Netdata Cloud is with our automatic one-line installation script, [kickstart](https://github.com/netdata/netdata/blob/master/packaging/installer/README.md#automatic-one-line-installation-script). This script will install the Netdata Agent, in case it isn't already installed, and connect your node to Netdata Cloud. This works with: -* most Linux distributions, see [Netdata's platform support policy](/packaging/PLATFORM_SUPPORT.md) +* most Linux distributions, see [Netdata's platform support policy](https://github.com/netdata/netdata/blob/master/packaging/PLATFORM_SUPPORT.md) * macOS For details on how to run this script please check [How to connect a node](#how-to-connect-a-node) and choose your environment. @@ -578,7 +578,7 @@ netdatacli reload-claiming-state This reloads the Agent connection state from disk. 
-Our recommendation is to trigger the connection process using the [kickstart](/packaging/installer/README.md#automatic-one-line-installation-script) whenever possible. +Our recommendation is to trigger the connection process using the [kickstart](https://github.com/netdata/netdata/blob/master/packaging/installer/README.md#automatic-one-line-installation-script) whenever possible. ### Netdata Agent command line diff --git a/cli/README.md b/cli/README.md index a4471e2bc7..09f2017459 100644 --- a/cli/README.md +++ b/cli/README.md @@ -39,6 +39,6 @@ aclk-state [json] Returns current state of ACLK and Cloud connection. (optionally in json) ``` -Those commands are the same that can be sent to netdata via [signals](/daemon/README.md#command-line-options). +Those commands are the same that can be sent to netdata via [signals](https://github.com/netdata/netdata/blob/master/daemon/README.md#command-line-options). diff --git a/collectors/COLLECTORS.md b/collectors/COLLECTORS.md index db6a4828d3..a61a32dd56 100644 --- a/collectors/COLLECTORS.md +++ b/collectors/COLLECTORS.md @@ -14,16 +14,19 @@ Netdata uses collectors to help you gather metrics from your favorite applicatio real-time, interactive charts. The following list includes collectors for both external services/applications and internal system metrics. -Learn more about [how collectors work](/docs/collect/how-collectors-work.md), and then learn how to [enable or -configure](/docs/collect/enable-configure.md) any of the below collectors using the same process. +Learn more +about [how collectors work](https://github.com/netdata/netdata/blob/master/docs/collect/how-collectors-work.md), and +then learn how to [enable or +configure](https://github.com/netdata/netdata/blob/master/docs/collect/enable-configure.md) any of the below collectors using the same process. Some collectors have both Go and Python versions as we continue our effort to migrate all collectors to Go. In these cases, _Netdata always prioritizes the Go version_, and we highly recommend you use the Go versions for the best experience. -If you want to use a Python version of a collector, you need to explicitly [disable the Go -version](/docs/collect/enable-configure.md), and enable the Python version. Netdata then skips the Go version and -attempts to load the Python version and its accompanying configuration file. +If you want to use a Python version of a collector, you need to +explicitly [disable the Go version](https://github.com/netdata/netdata/blob/master/docs/collect/enable-configure.md), +and enable the Python version. Netdata then skips the Go version and attempts to load the Python version and its +accompanying configuration file. If you don't see the app/service you'd like to monitor in this list: @@ -33,7 +36,7 @@ If you don't see the app/service you'd like to monitor in this list: a [feature request](https://github.com/netdata/netdata/issues/new/choose) on GitHub. - If you have basic software development skills, you can add your own plugin in [Go](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin#how-to-develop-a-collector) - or [Python](https://learn.netdata.cloud/guides/python-collector) + or [Python](https://github.com/netdata/netdata/blob/master/docs/guides/python-collector.md) Supported Collectors List: @@ -76,256 +79,300 @@ configure any of these collectors according to your setup and infrastructure. 
### Generic -- [Prometheus endpoints](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/prometheus): Gathers +- [Prometheus endpoints](https://github.com/netdata/go.d.plugin/blob/master/modules/prometheus/README.md): Gathers metrics from any number of Prometheus endpoints, with support to autodetect more than 600 services and applications. -- [Pandas](https://learn.netdata.cloud/docs/agent/collectors/python.d.plugin/pandas): A Python collector that gathers - metrics from a [pandas](https://pandas.pydata.org/) dataframe. Pandas is a high level data processing library in - Python that can read various formats of data from local files or web endpoints. Custom processing and transformation +- [Pandas](https://github.com/netdata/netdata/blob/master/collectors/python.d.plugin/pandas/README.md): A Python + collector that gathers + metrics from a [pandas](https://pandas.pydata.org/) dataframe. Pandas is a high level data processing library in + Python that can read various formats of data from local files or web endpoints. Custom processing and transformation logic can also be expressed as part of the collector configuration. ### APM (application performance monitoring) -- [Go applications](/collectors/python.d.plugin/go_expvar/README.md): Monitor any Go application that exposes its +- [Go applications](https://github.com/netdata/netdata/blob/master/collectors/python.d.plugin/go_expvar/README.md): + Monitor any Go application that exposes its metrics with the `expvar` package from the Go standard library. -- [Java Spring Boot 2 - applications](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/springboot2/): +- [Java Spring Boot 2 applications](https://github.com/netdata/go.d.plugin/blob/master/modules/springboot2/README.md): Monitor running Java Spring Boot 2 applications that expose their metrics with the use of the Spring Boot Actuator. -- [statsd](/collectors/statsd.plugin/README.md): Implement a high performance `statsd` server for Netdata. -- [phpDaemon](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/phpdaemon/): Collect worker +- [statsd](https://github.com/netdata/netdata/blob/master/collectors/statsd.plugin/README.md): Implement a high + performance `statsd` server for Netdata. +- [phpDaemon](https://github.com/netdata/go.d.plugin/blob/master/modules/phpdaemon/README.md): Collect worker statistics (total, active, idle), and uptime for web and network applications. -- [uWSGI](/collectors/python.d.plugin/uwsgi/README.md): Monitor performance metrics exposed by the uWSGI Stats +- [uWSGI](https://github.com/netdata/netdata/blob/master/collectors/python.d.plugin/uwsgi/README.md): Monitor + performance metrics exposed by the uWSGI Stats Server. ### Containers and VMs -- [Docker containers](/collectors/cgroups.plugin/README.md): Monitor the health and performance of individual Docker - containers using the cgroups collector plugin. -- [DockerD](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/docker/): Collect container health statistics. -- [Docker Engine](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/docker_engine/): Collect +- [Docker containers](https://github.com/netdata/netdata/blob/master/collectors/cgroups.plugin/README.md): Monitor the + health and performance of individual Docker containers using the cgroups collector plugin. +- [DockerD](https://github.com/netdata/go.d.plugin/blob/master/modules/docker/README.md): Collect container health + statistics. 
+- [Docker Engine](https://github.com/netdata/go.d.plugin/blob/master/modules/docker_engine/README.md): Collect runtime statistics from the `docker` daemon using the `metrics-address` feature. -- [Docker Hub](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/dockerhub/): Collect statistics +- [Docker Hub](https://github.com/netdata/go.d.plugin/blob/master/modules/dockerhub/README.md): Collect statistics about Docker repositories, such as pulls, starts, status, time since last update, and more. -- [Libvirt](/collectors/cgroups.plugin/README.md): Monitor the health and performance of individual Libvirt containers +- [Libvirt](https://github.com/netdata/netdata/blob/master/collectors/cgroups.plugin/README.md): Monitor the health and + performance of individual Libvirt containers using the cgroups collector plugin. -- [LXC](/collectors/cgroups.plugin/README.md): Monitor the health and performance of individual LXC containers using +- [LXC](https://github.com/netdata/netdata/blob/master/collectors/cgroups.plugin/README.md): Monitor the health and + performance of individual LXC containers using the cgroups collector plugin. -- [LXD](/collectors/cgroups.plugin/README.md): Monitor the health and performance of individual LXD containers using +- [LXD](https://github.com/netdata/netdata/blob/master/collectors/cgroups.plugin/README.md): Monitor the health and + performance of individual LXD containers using the cgroups collector plugin. -- [systemd-nspawn](/collectors/cgroups.plugin/README.md): Monitor the health and performance of individual +- [systemd-nspawn](https://github.com/netdata/netdata/blob/master/collectors/cgroups.plugin/README.md): Monitor the + health and performance of individual systemd-nspawn containers using the cgroups collector plugin. -- [vCenter Server Appliance](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/vcsa/): Monitor +- [vCenter Server Appliance](https://github.com/netdata/go.d.plugin/blob/master/modules/vcsa/README.md): Monitor appliance system, components, and software update health statuses via the Health API. -- [vSphere](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/vsphere/): Collect host and virtual +- [vSphere](https://github.com/netdata/go.d.plugin/blob/master/modules/vsphere/README.md): Collect host and virtual machine performance metrics. -- [Xen/XCP-ng](/collectors/xenstat.plugin/README.md): Collect XenServer and XCP-ng metrics using `libxenstat`. +- [Xen/XCP-ng](https://github.com/netdata/netdata/blob/master/collectors/xenstat.plugin/README.md): Collect XenServer + and XCP-ng metrics using `libxenstat`. ### Data stores -- [CockroachDB](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/cockroachdb/): Monitor various +- [CockroachDB](https://github.com/netdata/go.d.plugin/blob/master/modules/cockroachdb/README.md): Monitor various database components using `_status/vars` endpoint. -- [Consul](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/consul/): Capture service and unbound +- [Consul](https://github.com/netdata/go.d.plugin/blob/master/modules/consul/README.md): Capture service and unbound checks status (passing, warning, critical, maintenance). 
-- [Couchbase](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/couchbase/): Gather per-bucket +- [Couchbase](https://github.com/netdata/go.d.plugin/blob/master/modules/couchbase/README.md): Gather per-bucket   metrics from any number of instances of the distributed JSON document database. -- [CouchDB](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/couchdb): Monitor database health and +- [CouchDB](https://github.com/netdata/go.d.plugin/blob/master/modules/couchdb/README.md): Monitor database health and   performance metrics (reads/writes, HTTP traffic, replication status, etc). -- [MongoDB](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/mongodb): Collect server, database, +- [MongoDB](https://github.com/netdata/go.d.plugin/blob/master/modules/mongodb/README.md): Collect server, database,   replication and sharding performance and health metrics. -- [MySQL](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/mysql/): Collect database global, +- [MySQL](https://github.com/netdata/go.d.plugin/blob/master/modules/mysql/README.md): Collect database global,   replication and per user statistics. -- [OracleDB](/collectors/python.d.plugin/oracledb/README.md): Monitor database performance and health metrics. -- [Pika](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/pika/): Gather metric, such as clients, +- [OracleDB](https://github.com/netdata/netdata/blob/master/collectors/python.d.plugin/oracledb/README.md): Monitor +  database performance and health metrics. +- [Pika](https://github.com/netdata/go.d.plugin/blob/master/modules/pika/README.md): Gather metrics, such as clients,   memory usage, queries, and more from the Redis interface-compatible database. -- [Postgres](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/postgres): Collect database health +- [Postgres](https://github.com/netdata/go.d.plugin/blob/master/modules/postgres/README.md): Collect database health   and performance metrics. -- [ProxySQL](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/proxysql): Monitor database backend +- [ProxySQL](https://github.com/netdata/go.d.plugin/blob/master/modules/proxysql/README.md): Monitor database backend   and frontend performance metrics. -- [Redis](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/redis/): Monitor status from any +- [Redis](https://github.com/netdata/go.d.plugin/blob/master/modules/redis/README.md): Monitor status from any   number of database instances by reading the server's response to the `INFO ALL` command. -- [RethinkDB](/collectors/python.d.plugin/rethinkdbs/README.md): Collect database server and cluster statistics. -- [Riak KV](/collectors/python.d.plugin/riakkv/README.md): Collect database stats from the `/stats` endpoint. -- [Zookeeper](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/zookeeper/): Monitor application +- [RethinkDB](https://github.com/netdata/netdata/blob/master/collectors/python.d.plugin/rethinkdbs/README.md): Collect +  database server and cluster statistics. +- [Riak KV](https://github.com/netdata/netdata/blob/master/collectors/python.d.plugin/riakkv/README.md): Collect +  database stats from the `/stats` endpoint. +- [Zookeeper](https://github.com/netdata/go.d.plugin/blob/master/modules/zookeeper/README.md): Monitor application   health metrics reading the server's response to the `mntr` command. 
-- [Memcached](/collectors/python.d.plugin/memcached/README.md): Collect memory-caching system performance metrics. +- [Memcached](https://github.com/netdata/netdata/blob/master/collectors/python.d.plugin/memcached/README.md): Collect +  memory-caching system performance metrics. ### Distributed computing -- [BOINC](/collectors/python.d.plugin/boinc/README.md): Monitor the total number of tasks, open tasks, and task +- [BOINC](https://github.com/netdata/netdata/blob/master/collectors/python.d.plugin/boinc/README.md): Monitor the total +  number of tasks, open tasks, and task   states for the distributed computing client. -- [Gearman](/collectors/python.d.plugin/gearman/README.md): Collect application summary (queued, running) and per-job +- [Gearman](https://github.com/netdata/netdata/blob/master/collectors/python.d.plugin/gearman/README.md): Collect +  application summary (queued, running) and per-job   worker statistics (queued, idle, running). ### Email -- [Dovecot](/collectors/python.d.plugin/dovecot/README.md): Collect email server performance metrics by reading the +- [Dovecot](https://github.com/netdata/netdata/blob/master/collectors/python.d.plugin/dovecot/README.md): Collect email +  server performance metrics by reading the   server's response to the `EXPORT global` command. -- [EXIM](/collectors/python.d.plugin/exim/README.md): Uses the `exim` tool to monitor the queue length of a +- [EXIM](https://github.com/netdata/netdata/blob/master/collectors/python.d.plugin/exim/README.md): Uses the `exim` tool +  to monitor the queue length of a   mail/message transfer agent (MTA). -- [Postfix](/collectors/python.d.plugin/postfix/README.md): Uses the `postqueue` tool to monitor the queue length of a +- [Postfix](https://github.com/netdata/netdata/blob/master/collectors/python.d.plugin/postfix/README.md): Uses +  the `postqueue` tool to monitor the queue length of a   mail/message transfer agent (MTA). ### Kubernetes -- [Kubelet](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/k8s_kubelet/): Monitor one or more +- [Kubelet](https://github.com/netdata/go.d.plugin/blob/master/modules/k8s_kubelet/README.md): Monitor one or more   instances of the Kubelet agent and collects metrics on number of pods/containers running, volume of Docker operations,   and more. -- [kube-proxy](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/k8s_kubeproxy/): Collect +- [kube-proxy](https://github.com/netdata/go.d.plugin/blob/master/modules/k8s_kubeproxy/README.md): Collect   metrics, such as syncing proxy rules and REST client requests, from one or more instances of `kube-proxy`. -- [Service discovery](https://github.com/netdata/agent-service-discovery/): Find what services are running on a +- [Service discovery](https://github.com/netdata/agent-service-discovery/blob/master/README.md): Find what services are running on a   cluster's pods, converts that into configuration files, and exports them so they can be monitored by Netdata. ### Logs -- [Fluentd](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/fluentd/): Gather application +- [Fluentd](https://github.com/netdata/go.d.plugin/blob/master/modules/fluentd/README.md): Gather application   plugins metrics from an endpoint provided by `in_monitor plugin`. 
-- [Logstash](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/logstash/): Monitor JVM threads, +- [Logstash](https://github.com/netdata/go.d.plugin/blob/master/modules/logstash/README.md): Monitor JVM threads,   memory usage, garbage collection statistics, and more. -- [OpenVPN status logs](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/openvpn_status_log): Parse +- [OpenVPN status logs](https://github.com/netdata/go.d.plugin/blob/master/modules/openvpn_status_log/README.md): Parse   server log files and provide summary (client, traffic) metrics. -- [Squid web server logs](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/squidlog/): Tail Squid +- [Squid web server logs](https://github.com/netdata/go.d.plugin/blob/master/modules/squidlog/README.md): Tail Squid   access logs to return the volume of requests, types of requests, bandwidth, and much more. - [Web server logs (Go version for Apache, -  NGINX)](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/weblog/): Tail access logs and provide +  NGINX)](https://github.com/netdata/go.d.plugin/blob/master/modules/weblog/README.md): Tail access logs and provide   very detailed web server performance statistics. This module is able to parse 200k+ rows in less than half a second. -- [Web server logs (Apache, NGINX)](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/weblog): Tail +- [Web server logs (Apache, NGINX)](https://github.com/netdata/go.d.plugin/blob/master/modules/weblog/README.md): Tail   access log file and collect web server/caching proxy metrics. ### Messaging -- [ActiveMQ](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/activemq/): Collect message broker +- [ActiveMQ](https://github.com/netdata/go.d.plugin/blob/master/modules/activemq/README.md): Collect message broker   queues and topics statistics using the ActiveMQ Console API. -- [Beanstalk](/collectors/python.d.plugin/beanstalk/README.md): Collect server and tube-level statistics, such as CPU +- [Beanstalk](https://github.com/netdata/netdata/blob/master/collectors/python.d.plugin/beanstalk/README.md): Collect +  server and tube-level statistics, such as CPU   usage, jobs rates, commands, and more. -- [Pulsar](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/pulsar/): Collect summary, +- [Pulsar](https://github.com/netdata/go.d.plugin/blob/master/modules/pulsar/README.md): Collect summary,   namespaces, and topics performance statistics. -- [RabbitMQ (Go)](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/rabbitmq/): Collect message +- [RabbitMQ (Go)](https://github.com/netdata/go.d.plugin/blob/master/modules/rabbitmq/README.md): Collect message   broker overview, system and per virtual host metrics. -- [RabbitMQ (Python)](/collectors/python.d.plugin/rabbitmq/README.md): Collect message broker global and per virtual +- [RabbitMQ (Python)](https://github.com/netdata/netdata/blob/master/collectors/python.d.plugin/rabbitmq/README.md): +  Collect message broker global and per virtual   host metrics. -- [VerneMQ](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/vernemq/): Monitor MQTT broker +- [VerneMQ](https://github.com/netdata/go.d.plugin/blob/master/modules/vernemq/README.md): Monitor MQTT broker   health and performance metrics. 
It collects all available info for both MQTTv3 and v5 communication ### Network -- [Bind 9](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/bind/): Collect nameserver summary +- [Bind 9](https://github.com/netdata/go.d.plugin/blob/master/modules/bind/README.md): Collect nameserver summary   performance statistics via a web interface (`statistics-channels` feature). -- [Chrony](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/chrony): Monitor the precision and +- [Chrony](https://github.com/netdata/go.d.plugin/blob/master/modules/chrony/README.md): Monitor the precision and   statistics of a local `chronyd` server. -- [CoreDNS](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/coredns/): Measure DNS query round +- [CoreDNS](https://github.com/netdata/go.d.plugin/blob/master/modules/coredns/README.md): Measure DNS query round   trip time. -- [Dnsmasq](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/dnsmasq_dhcp/): Automatically +- [Dnsmasq](https://github.com/netdata/go.d.plugin/blob/master/modules/dnsmasq_dhcp/README.md): Automatically   detects all configured `Dnsmasq` DHCP ranges and Monitor their utilization. -- [DNSdist](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/dnsdist/): Collect +- [DNSdist](https://github.com/netdata/go.d.plugin/blob/master/modules/dnsdist/README.md): Collect   load-balancer performance and health metrics. -- [Dnsmasq DNS Forwarder](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/dnsmasq/): Gather +- [Dnsmasq DNS Forwarder](https://github.com/netdata/go.d.plugin/blob/master/modules/dnsmasq/README.md): Gather   queries, entries, operations, and events for the lightweight DNS forwarder. -- [DNS Query Time](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/dnsquery/): Monitor the round +- [DNS Query Time](https://github.com/netdata/go.d.plugin/blob/master/modules/dnsquery/README.md): Monitor the round   trip time for DNS queries in milliseconds. -- [Freeradius](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/freeradius/): Collect +- [Freeradius](https://github.com/netdata/go.d.plugin/blob/master/modules/freeradius/README.md): Collect   server authentication and accounting statistics from the `status server`. -- [Libreswan](/collectors/charts.d.plugin/libreswan/README.md): Collect bytes-in, bytes-out, and uptime metrics. -- [Icecast](/collectors/python.d.plugin/icecast/README.md): Monitor the number of listeners for active sources. -- [ISC Bind (RDNC)](/collectors/python.d.plugin/bind_rndc/README.md): Collect nameserver summary performance +- [Libreswan](https://github.com/netdata/netdata/blob/master/collectors/charts.d.plugin/libreswan/README.md): Collect +  bytes-in, bytes-out, and uptime metrics. +- [Icecast](https://github.com/netdata/netdata/blob/master/collectors/python.d.plugin/icecast/README.md): Monitor the +  number of listeners for active sources. +- [ISC Bind (RNDC)](https://github.com/netdata/netdata/blob/master/collectors/python.d.plugin/bind_rndc/README.md): +  Collect nameserver summary performance   statistics using the `rndc` tool. -- [ISC DHCP](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/isc_dhcpd): Reads a +- [ISC DHCP](https://github.com/netdata/go.d.plugin/blob/master/modules/isc_dhcpd/README.md): Reads a   `dhcpd.leases` file and collects metrics on total active leases, pool active leases, and pool utilization. 
-- [OpenLDAP](/collectors/python.d.plugin/openldap/README.md): Provides statistics information from the OpenLDAP +- [OpenLDAP](https://github.com/netdata/netdata/blob/master/collectors/python.d.plugin/openldap/README.md): Provides +  statistics information from the OpenLDAP   (`slapd`) server. -- [NSD](/collectors/python.d.plugin/nsd/README.md): Monitor nameserver performance metrics using the `nsd-control` +- [NSD](https://github.com/netdata/netdata/blob/master/collectors/python.d.plugin/nsd/README.md): Monitor nameserver +  performance metrics using the `nsd-control`   tool. - [NTP daemon](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/ntpd): Monitor the system variables   of the local `ntpd` daemon (optionally including variables of the polled peers) using the NTP Control Message Protocol   via a UDP socket. -- [OpenSIPS](/collectors/charts.d.plugin/opensips/README.md): Collect server health and performance metrics using the +- [OpenSIPS](https://github.com/netdata/netdata/blob/master/collectors/charts.d.plugin/opensips/README.md): Collect +  server health and performance metrics using the   `opensipsctl` tool. -- [OpenVPN](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/openvpn/): Gather server summary +- [OpenVPN](https://github.com/netdata/go.d.plugin/blob/master/modules/openvpn/README.md): Gather server summary   (client, traffic) and per user metrics (traffic, connection time) stats using `management-interface`. -- [Pi-hole](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/pihole/): Monitor basic (DNS +- [Pi-hole](https://github.com/netdata/go.d.plugin/blob/master/modules/pihole/README.md): Monitor basic (DNS   queries, clients, blocklist) and extended (top clients, top permitted, and blocked domains) statistics using the PHP   API. -- [PowerDNS Authoritative Server](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/powerdns): +- [PowerDNS Authoritative Server](https://github.com/netdata/go.d.plugin/blob/master/modules/powerdns/README.md):   Monitor one or more instances of the nameserver software to collect questions, events, and latency metrics. -- [PowerDNS Recursor](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/powerdns_recursor): +- [PowerDNS Recursor](https://github.com/netdata/go.d.plugin/blob/master/modules/powerdns_recursor/README.md):   Gather incoming/outgoing questions, drops, timeouts, and cache usage from any number of DNS recursor instances. -- [RetroShare](/collectors/python.d.plugin/retroshare/README.md): Monitor application bandwidth, peers, and DHT +- [RetroShare](https://github.com/netdata/netdata/blob/master/collectors/python.d.plugin/retroshare/README.md): Monitor +  application bandwidth, peers, and DHT   metrics. -- [Tor](/collectors/python.d.plugin/tor/README.md): Capture traffic usage statistics using the Tor control port. -- [Unbound](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/unbound/): Collect DNS resolver +- [Tor](https://github.com/netdata/netdata/blob/master/collectors/python.d.plugin/tor/README.md): Capture traffic usage +  statistics using the Tor control port. +- [Unbound](https://github.com/netdata/go.d.plugin/blob/master/modules/unbound/README.md): Collect DNS resolver   summary and extended system and per thread metrics via the `remote-control` interface. ### Provisioning -- [Puppet](/collectors/python.d.plugin/puppet/README.md): Monitor the status of Puppet Server and Puppet DB. 
+- [Puppet](https://github.com/netdata/netdata/blob/master/collectors/python.d.plugin/puppet/README.md): Monitor the + status of Puppet Server and Puppet DB. ### Remote devices -- [AM2320](/collectors/python.d.plugin/am2320/README.md): Monitor sensor temperature and humidity. -- [Access point](/collectors/charts.d.plugin/ap/README.md): Monitor client, traffic and signal metrics using the `aw` +- [AM2320](https://github.com/netdata/netdata/blob/master/collectors/python.d.plugin/am2320/README.md): Monitor sensor + temperature and humidity. +- [Access point](https://github.com/netdata/netdata/blob/master/collectors/charts.d.plugin/ap/README.md): Monitor + client, traffic and signal metrics using the `aw` tool. -- [APC UPS](/collectors/charts.d.plugin/apcupsd/README.md): Capture status information using the `apcaccess` tool. -- [Energi Core](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/energid): Monitor +- [APC UPS](https://github.com/netdata/netdata/blob/master/collectors/charts.d.plugin/apcupsd/README.md): Capture status + information using the `apcaccess` tool. +- [Energi Core](https://github.com/netdata/go.d.plugin/blob/master/modules/energid/README.md): Monitor blockchain indexes, memory usage, network usage, and transactions of wallet instances. -- [UPS/PDU](/collectors/charts.d.plugin/nut/README.md): Read the status of UPS/PDU devices using the `upsc` tool. -- [SNMP devices](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/snmp): Gather data using the SNMP +- [UPS/PDU](https://github.com/netdata/netdata/blob/master/collectors/charts.d.plugin/nut/README.md): Read the status of + UPS/PDU devices using the `upsc` tool. +- [SNMP devices](https://github.com/netdata/go.d.plugin/blob/master/modules/snmp/README.md): Gather data using the SNMP protocol. -- [1-Wire sensors](/collectors/python.d.plugin/w1sensor/README.md): Monitor sensor temperature. +- [1-Wire sensors](https://github.com/netdata/netdata/blob/master/collectors/python.d.plugin/w1sensor/README.md): + Monitor sensor temperature. ### Search -- [Elasticsearch](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/elasticsearch): Collect +- [Elasticsearch](https://github.com/netdata/go.d.plugin/blob/master/modules/elasticsearch/README.md): Collect dozens of metrics on search engine performance from local nodes and local indices. Includes cluster health and statistics. -- [Solr](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/solr/): Collect application search +- [Solr](https://github.com/netdata/go.d.plugin/blob/master/modules/solr/README.md): Collect application search requests, search errors, update requests, and update errors statistics. ### Storage -- [Ceph](/collectors/python.d.plugin/ceph/README.md): Monitor the Ceph cluster usage and server data consumption. -- [HDFS](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/hdfs/): Monitor health and performance +- [Ceph](https://github.com/netdata/netdata/blob/master/collectors/python.d.plugin/ceph/README.md): Monitor the Ceph + cluster usage and server data consumption. +- [HDFS](https://github.com/netdata/go.d.plugin/blob/master/modules/hdfs/README.md): Monitor health and performance metrics for filesystem datanodes and namenodes. -- [IPFS](/collectors/python.d.plugin/ipfs/README.md): Collect file system bandwidth, peers, and repo metrics. 
-- [Scaleio](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/scaleio/): Monitor storage system, +- [IPFS](https://github.com/netdata/netdata/blob/master/collectors/python.d.plugin/ipfs/README.md): Collect file system + bandwidth, peers, and repo metrics. +- [Scaleio](https://github.com/netdata/go.d.plugin/blob/master/modules/scaleio/README.md): Monitor storage system, storage pools, and SDCS health and performance metrics via VxFlex OS Gateway API. -- [Samba](/collectors/python.d.plugin/samba/README.md): Collect file sharing metrics using the `smbstatus` tool. +- [Samba](https://github.com/netdata/netdata/blob/master/collectors/python.d.plugin/samba/README.md): Collect file + sharing metrics using the `smbstatus` tool. ### Web -- [Apache](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/apache/): Collect Apache web +- [Apache](https://github.com/netdata/go.d.plugin/blob/master/modules/apache/README.md): Collect Apache web server performance metrics via the `server-status?auto` endpoint. -- [HAProxy](/collectors/python.d.plugin/haproxy/README.md): Collect frontend, backend, and health metrics. -- [HTTP endpoints](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/httpcheck/): Monitor +- [HAProxy](https://github.com/netdata/netdata/blob/master/collectors/python.d.plugin/haproxy/README.md): Collect + frontend, backend, and health metrics. +- [HTTP endpoints](https://github.com/netdata/go.d.plugin/blob/master/modules/httpcheck/README.md): Monitor any HTTP endpoint's availability and response time. -- [Lighttpd](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/lighttpd/): Collect web server +- [Lighttpd](https://github.com/netdata/go.d.plugin/blob/master/modules/lighttpd/README.md): Collect web server performance metrics using the `server-status?auto` endpoint. -- [Lighttpd2](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/lighttpd2/): Collect web server +- [Lighttpd2](https://github.com/netdata/go.d.plugin/blob/master/modules/lighttpd2/README.md): Collect web server performance metrics using the `server-status?format=plain` endpoint. -- [Litespeed](/collectors/python.d.plugin/litespeed/README.md): Collect web server data (network, connection, +- [Litespeed](https://github.com/netdata/netdata/blob/master/collectors/python.d.plugin/litespeed/README.md): Collect + web server data (network, connection, requests, cache) by reading `.rtreport*` files. -- [Nginx](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/nginx/): Monitor web server +- [Nginx](https://github.com/netdata/go.d.plugin/blob/master/modules/nginx/README.md): Monitor web server status information by gathering metrics via `ngx_http_stub_status_module`. -- [Nginx VTS](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/nginxvts/): Gathers metrics from +- [Nginx VTS](https://github.com/netdata/go.d.plugin/blob/master/modules/nginxvts/README.md): Gathers metrics from any Nginx deployment with the _virtual host traffic status module_ enabled, including metrics on uptime, memory usage, and cache, and more. -- [PHP-FPM](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/phpfpm/): Collect application +- [PHP-FPM](https://github.com/netdata/go.d.plugin/blob/master/modules/phpfpm/README.md): Collect application summary and processes health metrics by scraping the status page (`/status?full`). 
-- [TCP endpoints](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/portcheck/): Monitor any +- [TCP endpoints](https://github.com/netdata/go.d.plugin/blob/master/modules/portcheck/README.md): Monitor any TCP endpoint's availability and response time. -- [Spigot Minecraft servers](/collectors/python.d.plugin/spigotmc/README.md): Monitor average ticket rate and number +- [Spigot Minecraft servers](https://github.com/netdata/netdata/blob/master/collectors/python.d.plugin/spigotmc/README.md): + Monitor average ticket rate and number of users. -- [Squid](/collectors/python.d.plugin/squid/README.md): Monitor client and server bandwidth/requests by gathering +- [Squid](https://github.com/netdata/netdata/blob/master/collectors/python.d.plugin/squid/README.md): Monitor client and + server bandwidth/requests by gathering data from the Cache Manager component. -- [Tengine](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/tengine/): Monitor web server +- [Tengine](https://github.com/netdata/go.d.plugin/blob/master/modules/tengine/README.md): Monitor web server statistics using information provided by `ngx_http_reqstat_module`. -- [Tomcat](/collectors/python.d.plugin/tomcat/README.md): Collect web server performance metrics from the Manager App +- [Tomcat](https://github.com/netdata/netdata/blob/master/collectors/python.d.plugin/tomcat/README.md): Collect web + server performance metrics from the Manager App (`/manager/status?XML=true`). -- [Traefik](/collectors/python.d.plugin/traefik/README.md): Uses Traefik's Health API to provide statistics. -- [Varnish](/collectors/python.d.plugin/varnish/README.md): Provides HTTP accelerator global, backends (VBE), and +- [Traefik](https://github.com/netdata/netdata/blob/master/collectors/python.d.plugin/traefik/README.md): Uses Traefik's + Health API to provide statistics. +- [Varnish](https://github.com/netdata/netdata/blob/master/collectors/python.d.plugin/varnish/README.md): Provides HTTP + accelerator global, backends (VBE), and disks (SMF) statistics using the `varnishstat` tool. -- [x509 check](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/x509check/): Monitor certificate +- [x509 check](https://github.com/netdata/go.d.plugin/blob/master/modules/x509check/README.md): Monitor certificate expiration time. -- [Whois domain expiry](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/whoisquery/): Checks the +- [Whois domain expiry](https://github.com/netdata/go.d.plugin/blob/master/modules/whoisquery/README.md): Checks the remaining time until a given domain is expired. ## System collectors @@ -335,139 +382,198 @@ The Netdata Agent can collect these system- and hardware-level metrics using a v ### Applications -- [Fail2ban](/collectors/python.d.plugin/fail2ban/README.md): Parses configuration files to detect all jails, then +- [Fail2ban](https://github.com/netdata/netdata/blob/master/collectors/python.d.plugin/fail2ban/README.md): Parses + configuration files to detect all jails, then uses log files to report ban rates and volume of banned IPs. -- [Monit](/collectors/python.d.plugin/monit/README.md): Monitor statuses of targets (service-checks) using the XML +- [Monit](https://github.com/netdata/netdata/blob/master/collectors/python.d.plugin/monit/README.md): Monitor statuses + of targets (service-checks) using the XML stats interface. 
- [WMI (Windows Management Instrumentation) - exporter](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/wmi/): Collect CPU, memory, + exporter](https://github.com/netdata/go.d.plugin/blob/master/modules/wmi/README.md): Collect CPU, memory, network, disk, OS, system, and log-in metrics scraping `wmi_exporter`. ### Disks and filesystems -- [BCACHE](/collectors/proc.plugin/README.md): Monitor BCACHE statistics with the the `proc.plugin` collector. -- [Block devices](/collectors/proc.plugin/README.md): Gather metrics about the health and performance of block +- [BCACHE](https://github.com/netdata/netdata/blob/master/collectors/proc.plugin/README.md): Monitor BCACHE statistics + with the the `proc.plugin` collector. +- [Block devices](https://github.com/netdata/netdata/blob/master/collectors/proc.plugin/README.md): Gather metrics about + the health and performance of block devices using the the `proc.plugin` collector. -- [Btrfs](/collectors/proc.plugin/README.md): Monitors Btrfs filesystems with the the `proc.plugin` collector. -- [Device mapper](/collectors/proc.plugin/README.md): Gather metrics about the Linux device mapper with the proc +- [Btrfs](https://github.com/netdata/netdata/blob/master/collectors/proc.plugin/README.md): Monitors Btrfs filesystems + with the the `proc.plugin` collector. +- [Device mapper](https://github.com/netdata/netdata/blob/master/collectors/proc.plugin/README.md): Gather metrics about + the Linux device mapper with the proc collector. -- [Disk space](/collectors/diskspace.plugin/README.md): Collect disk space usage metrics on Linux mount points. -- [Clock synchronization](/collectors/timex.plugin/README.md): Collect the system clock synchronization status on Linux. -- [Files and directories](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/filecheck): Gather +- [Disk space](https://github.com/netdata/netdata/blob/master/collectors/diskspace.plugin/README.md): Collect disk space + usage metrics on Linux mount points. +- [Clock synchronization](https://github.com/netdata/netdata/blob/master/collectors/timex.plugin/README.md): Collect the + system clock synchronization status on Linux. +- [Files and directories](https://github.com/netdata/go.d.plugin/blob/master/modules/filecheck/README.md): Gather metrics about the existence, modification time, and size of files or directories. -- [ioping.plugin](/collectors/ioping.plugin/README.md): Measure disk read/write latency. -- [NFS file servers and clients](/collectors/proc.plugin/README.md): Gather operations, utilization, and space usage +- [ioping.plugin](https://github.com/netdata/netdata/blob/master/collectors/ioping.plugin/README.md): Measure disk + read/write latency. +- [NFS file servers and clients](https://github.com/netdata/netdata/blob/master/collectors/proc.plugin/README.md): + Gather operations, utilization, and space usage using the the `proc.plugin` collector. -- [RAID arrays](/collectors/proc.plugin/README.md): Collect health, disk status, operation status, and more with the +- [RAID arrays](https://github.com/netdata/netdata/blob/master/collectors/proc.plugin/README.md): Collect health, disk + status, operation status, and more with the the `proc.plugin` collector. -- [Veritas Volume Manager](/collectors/proc.plugin/README.md): Gather metrics about the Veritas Volume Manager (VVM). 
-- [ZFS](/collectors/proc.plugin/README.md): Monitor bandwidth and utilization of ZFS disks/partitions using the proc +- [Veritas Volume Manager](https://github.com/netdata/netdata/blob/master/collectors/proc.plugin/README.md): Gather + metrics about the Veritas Volume Manager (VVM). +- [ZFS](https://github.com/netdata/netdata/blob/master/collectors/proc.plugin/README.md): Monitor bandwidth and + utilization of ZFS disks/partitions using the proc collector. ### eBPF -- [Files](/collectors/ebpf.plugin/README.md): Provides information about how often a system calls kernel +- [Files](https://github.com/netdata/netdata/blob/master/collectors/ebpf.plugin/README.md): Provides information about + how often a system calls kernel functions related to file descriptors using the eBPF collector. -- [Virtual file system (VFS)](/collectors/ebpf.plugin/README.md): Monitor IO, errors, deleted objects, and +- [Virtual file system (VFS)](https://github.com/netdata/netdata/blob/master/collectors/ebpf.plugin/README.md): Monitor + IO, errors, deleted objects, and more for kernel virtual file systems (VFS) using the eBPF collector. -- [Processes](/collectors/ebpf.plugin/README.md): Monitor threads, task exits, and errors using the eBPF collector. +- [Processes](https://github.com/netdata/netdata/blob/master/collectors/ebpf.plugin/README.md): Monitor threads, task + exits, and errors using the eBPF collector. ### Hardware -- [Adaptec RAID](/collectors/python.d.plugin/adaptec_raid/README.md): Monitor logical and physical devices health +- [Adaptec RAID](https://github.com/netdata/netdata/blob/master/collectors/python.d.plugin/adaptec_raid/README.md): + Monitor logical and physical devices health metrics using the `arcconf` tool. -- [CUPS](/collectors/cups.plugin/README.md): Monitor CUPS. -- [FreeIPMI](/collectors/freeipmi.plugin/README.md): Uses `libipmimonitoring-dev` or `libipmimonitoring-devel` to +- [CUPS](https://github.com/netdata/netdata/blob/master/collectors/cups.plugin/README.md): Monitor CUPS. +- [FreeIPMI](https://github.com/netdata/netdata/blob/master/collectors/freeipmi.plugin/README.md): + Uses `libipmimonitoring-dev` or `libipmimonitoring-devel` to monitor the number of sensors, temperatures, voltages, currents, and more. -- [Hard drive temperature](/collectors/python.d.plugin/hddtemp/README.md): Monitor the temperature of storage +- [Hard drive temperature](https://github.com/netdata/netdata/blob/master/collectors/python.d.plugin/hddtemp/README.md): + Monitor the temperature of storage devices. -- [HP Smart Storage Arrays](/collectors/python.d.plugin/hpssa/README.md): Monitor controller, cache module, logical +- [HP Smart Storage Arrays](https://github.com/netdata/netdata/blob/master/collectors/python.d.plugin/hpssa/README.md): + Monitor controller, cache module, logical and physical drive state, and temperature using the `ssacli` tool. -- [MegaRAID controllers](/collectors/python.d.plugin/megacli/README.md): Collect adapter, physical drives, and +- [MegaRAID controllers](https://github.com/netdata/netdata/blob/master/collectors/python.d.plugin/megacli/README.md): + Collect adapter, physical drives, and battery stats using the `megacli` tool. 
-- [NVIDIA GPU](/collectors/python.d.plugin/nvidia_smi/README.md): Monitor performance metrics (memory usage, fan +- [NVIDIA GPU](https://github.com/netdata/netdata/blob/master/collectors/python.d.plugin/nvidia_smi/README.md): Monitor + performance metrics (memory usage, fan speed, pcie bandwidth utilization, temperature, and more) using the `nvidia-smi` tool. -- [Sensors](/collectors/python.d.plugin/sensors/README.md): Reads system sensors information (temperature, voltage, +- [Sensors](https://github.com/netdata/netdata/blob/master/collectors/python.d.plugin/sensors/README.md): Reads system + sensors information (temperature, voltage, electric current, power, and more) from `/sys/devices/`. -- [S.M.A.R.T](/collectors/python.d.plugin/smartd_log/README.md): Reads SMART Disk Monitoring daemon logs. +- [S.M.A.R.T](https://github.com/netdata/netdata/blob/master/collectors/python.d.plugin/smartd_log/README.md): Reads + SMART Disk Monitoring daemon logs. ### Memory -- [Available memory](/collectors/proc.plugin/README.md): Tracks changes in available RAM using the the `proc.plugin` +- [Available memory](https://github.com/netdata/netdata/blob/master/collectors/proc.plugin/README.md): Tracks changes in + available RAM using the the `proc.plugin` collector. -- [Committed memory](/collectors/proc.plugin/README.md): Monitor committed memory using the `proc.plugin` collector. -- [Huge pages](/collectors/proc.plugin/README.md): Gather metrics about huge pages in Linux and FreeBSD with the +- [Committed memory](https://github.com/netdata/netdata/blob/master/collectors/proc.plugin/README.md): Monitor committed + memory using the `proc.plugin` collector. +- [Huge pages](https://github.com/netdata/netdata/blob/master/collectors/proc.plugin/README.md): Gather metrics about + huge pages in Linux and FreeBSD with the `proc.plugin` collector. -- [KSM](/collectors/proc.plugin/README.md): Measure the amount of merging, savings, and effectiveness using the +- [KSM](https://github.com/netdata/netdata/blob/master/collectors/proc.plugin/README.md): Measure the amount of merging, + savings, and effectiveness using the `proc.plugin` collector. -- [Numa](/collectors/proc.plugin/README.md): Gather metrics on the number of non-uniform memory access (NUMA) events +- [Numa](https://github.com/netdata/netdata/blob/master/collectors/proc.plugin/README.md): Gather metrics on the number + of non-uniform memory access (NUMA) events every second using the `proc.plugin` collector. -- [Page faults](/collectors/proc.plugin/README.md): Collect the number of memory page faults per second using the +- [Page faults](https://github.com/netdata/netdata/blob/master/collectors/proc.plugin/README.md): Collect the number of + memory page faults per second using the `proc.plugin` collector. -- [RAM](/collectors/proc.plugin/README.md): Collect metrics on system RAM, available RAM, and more using the +- [RAM](https://github.com/netdata/netdata/blob/master/collectors/proc.plugin/README.md): Collect metrics on system RAM, + available RAM, and more using the `proc.plugin` collector. -- [SLAB](/collectors/slabinfo.plugin/README.md): Collect kernel SLAB details on Linux systems. -- [swap](/collectors/proc.plugin/README.md): Monitor the amount of free and used swap at every second using the +- [SLAB](https://github.com/netdata/netdata/blob/master/collectors/slabinfo.plugin/README.md): Collect kernel SLAB + details on Linux systems. 
+- [swap](https://github.com/netdata/netdata/blob/master/collectors/proc.plugin/README.md): Monitor the amount of free + and used swap at every second using the `proc.plugin` collector. -- [Writeback memory](/collectors/proc.plugin/README.md): Collect how much memory is actively being written to disk at +- [Writeback memory](https://github.com/netdata/netdata/blob/master/collectors/proc.plugin/README.md): Collect how much + memory is actively being written to disk at every second using the `proc.plugin` collector. ### Networks -- [Access points](/collectors/charts.d.plugin/ap/README.md): Visualizes data related to access points. -- [Ping](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/ping/): Measure network latency, jitter and packet loss between the monitored node +- [Access points](https://github.com/netdata/netdata/blob/master/collectors/charts.d.plugin/ap/README.md): Visualizes + data related to access points. +- [Ping](https://github.com/netdata/go.d.plugin/blob/master/modules/ping/README.md): Measure network latency, jitter and + packet loss between the monitored node and any number of remote network end points. -- [Netfilter](/collectors/nfacct.plugin/README.md): Collect netfilter firewall, connection tracker, and accounting +- [Netfilter](https://github.com/netdata/netdata/blob/master/collectors/nfacct.plugin/README.md): Collect netfilter + firewall, connection tracker, and accounting metrics using `libmnl` and `libnetfilter_acct`. -- [Network stack](/collectors/proc.plugin/README.md): Monitor the networking stack for errors, TCP connection aborts, +- [Network stack](https://github.com/netdata/netdata/blob/master/collectors/proc.plugin/README.md): Monitor the + networking stack for errors, TCP connection aborts, bandwidth, and more. -- [Network QoS](/collectors/tc.plugin/README.md): Collect traffic QoS metrics (`tc`) of Linux network interfaces. -- [SYNPROXY](/collectors/proc.plugin/README.md): Monitor entries uses, SYN packets received, TCP cookies, and more. +- [Network QoS](https://github.com/netdata/netdata/blob/master/collectors/tc.plugin/README.md): Collect traffic QoS + metrics (`tc`) of Linux network interfaces. +- [SYNPROXY](https://github.com/netdata/netdata/blob/master/collectors/proc.plugin/README.md): Monitor entries uses, SYN + packets received, TCP cookies, and more. ### Operating systems -- [freebsd.plugin](freebsd.plugin/README.md): Collect resource usage and performance data on FreeBSD systems. -- [macOS](/collectors/macos.plugin/README.md): Collect resource usage and performance data on macOS systems. +- [freebsd.plugin](https://github.com/netdata/netdata/blob/master/collectors/freebsd.plugin/README.md): Collect resource + usage and performance data on FreeBSD systems. +- [macOS](https://github.com/netdata/netdata/blob/master/collectors/macos.plugin/README.md): Collect resource usage and + performance data on macOS systems. ### Processes -- [Applications](/collectors/apps.plugin/README.md): Gather CPU, disk, memory, network, eBPF, and other metrics per +- [Applications](https://github.com/netdata/netdata/blob/master/collectors/apps.plugin/README.md): Gather CPU, disk, + memory, network, eBPF, and other metrics per application using the `apps.plugin` collector. 
-- [systemd](/collectors/cgroups.plugin/README.md): Monitor the CPU and memory usage of systemd services using the +- [systemd](https://github.com/netdata/netdata/blob/master/collectors/cgroups.plugin/README.md): Monitor the CPU and + memory usage of systemd services using the `cgroups.plugin` collector. -- [systemd unit states](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/systemdunits): See the +- [systemd unit states](https://github.com/netdata/go.d.plugin/blob/master/modules/systemdunits/README.md): See the state (active, inactive, activating, deactivating, failed) of various systemd unit types. -- [System processes](/collectors/proc.plugin/README.md): Collect metrics on system load and total processes running +- [System processes](https://github.com/netdata/netdata/blob/master/collectors/proc.plugin/README.md): Collect metrics + on system load and total processes running using `/proc/loadavg` and the `proc.plugin` collector. -- [Uptime](/collectors/proc.plugin/README.md): Monitor the uptime of a system using the `proc.plugin` collector. +- [Uptime](https://github.com/netdata/netdata/blob/master/collectors/proc.plugin/README.md): Monitor the uptime of a + system using the `proc.plugin` collector. ### Resources -- [CPU frequency](/collectors/proc.plugin/README.md): Monitor CPU frequency, as set by the `cpufreq` kernel module, +- [CPU frequency](https://github.com/netdata/netdata/blob/master/collectors/proc.plugin/README.md): Monitor CPU + frequency, as set by the `cpufreq` kernel module, using the `proc.plugin` collector. -- [CPU idle](/collectors/proc.plugin/README.md): Measure CPU idle every second using the `proc.plugin` collector. -- [CPU performance](/collectors/perf.plugin/README.md): Collect CPU performance metrics using performance monitoring +- [CPU idle](https://github.com/netdata/netdata/blob/master/collectors/proc.plugin/README.md): Measure CPU idle every + second using the `proc.plugin` collector. +- [CPU performance](https://github.com/netdata/netdata/blob/master/collectors/perf.plugin/README.md): Collect CPU + performance metrics using performance monitoring units (PMU). -- [CPU throttling](/collectors/proc.plugin/README.md): Gather metrics about thermal throttling using the `/proc/stat` +- [CPU throttling](https://github.com/netdata/netdata/blob/master/collectors/proc.plugin/README.md): Gather metrics + about thermal throttling using the `/proc/stat` module and the `proc.plugin` collector. -- [CPU utilization](/collectors/proc.plugin/README.md): Capture CPU utilization, both system-wide and per-core, using +- [CPU utilization](https://github.com/netdata/netdata/blob/master/collectors/proc.plugin/README.md): Capture CPU + utilization, both system-wide and per-core, using the `/proc/stat` module and the `proc.plugin` collector. -- [Entropy](/collectors/proc.plugin/README.md): Monitor the available entropy on a system using the `proc.plugin` +- [Entropy](https://github.com/netdata/netdata/blob/master/collectors/proc.plugin/README.md): Monitor the available + entropy on a system using the `proc.plugin` collector. -- [Interprocess Communication (IPC)](/collectors/proc.plugin/README.md): Monitor IPC semaphores and shared memory +- [Interprocess Communication (IPC)](https://github.com/netdata/netdata/blob/master/collectors/proc.plugin/README.md): + Monitor IPC semaphores and shared memory using the `proc.plugin` collector. -- [Interrupts](/collectors/proc.plugin/README.md): Monitor interrupts per second using the `proc.plugin` collector. 
-- [IdleJitter](/collectors/idlejitter.plugin/README.md): Measure CPU latency and jitter on all operating systems. -- [SoftIRQs](/collectors/proc.plugin/README.md): Collect metrics on SoftIRQs, both system-wide and per-core, using the +- [Interrupts](https://github.com/netdata/netdata/blob/master/collectors/proc.plugin/README.md): Monitor interrupts per + second using the `proc.plugin` collector. +- [IdleJitter](https://github.com/netdata/netdata/blob/master/collectors/idlejitter.plugin/README.md): Measure CPU + latency and jitter on all operating systems. +- [SoftIRQs](https://github.com/netdata/netdata/blob/master/collectors/proc.plugin/README.md): Collect metrics on + SoftIRQs, both system-wide and per-core, using the `proc.plugin` collector. -- [SoftNet](/collectors/proc.plugin/README.md): Capture SoftNet events per second, both system-wide and per-core, +- [SoftNet](https://github.com/netdata/netdata/blob/master/collectors/proc.plugin/README.md): Capture SoftNet events per + second, both system-wide and per-core, using the `proc.plugin` collector. ### Users -- [systemd-logind](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/logind/): Monitor active sessions, users, and seats tracked +- [systemd-logind](https://github.com/netdata/go.d.plugin/blob/master/modules/logind/README.md): Monitor active + sessions, users, and seats tracked by `systemd-logind` or `elogind`. -- [User/group usage](/collectors/apps.plugin/README.md): Gather CPU, disk, memory, network, and other metrics per user +- [User/group usage](https://github.com/netdata/netdata/blob/master/collectors/apps.plugin/README.md): Gather CPU, disk, + memory, network, and other metrics per user and user group using the `apps.plugin` collector. ## Netdata collectors @@ -476,13 +582,18 @@ These collectors are recursive in nature, in that they monitor some function of collectors are described only in code and associated charts in Netdata dashboards. - [ACLK (code only)](https://github.com/netdata/netdata/blob/master/aclk/legacy/aclk_stats.c): View whether a Netdata - Agent is connected to Netdata Cloud via the [ACLK](/aclk/README.md), the volume of queries, process times, and more. -- [Alarms](https://learn.netdata.cloud/docs/agent/collectors/python.d.plugin/alarms): This collector creates an + Agent is connected to Netdata Cloud via the [ACLK](https://github.com/netdata/netdata/blob/master/aclk/README.md), the + volume of queries, process times, and more. +- [Alarms](https://github.com/netdata/netdata/blob/master/collectors/python.d.plugin/alarms/README.md): This collector + creates an **Alarms** menu with one line plot showing the alarm states of a Netdata Agent over time. -- [Anomalies](https://learn.netdata.cloud/docs/agent/collectors/python.d.plugin/anomalies): This collector uses the +- [Anomalies](https://github.com/netdata/netdata/blob/master/collectors/python.d.plugin/anomalies/README.md): This + collector uses the Python PyOD library to perform unsupervised anomaly detection on your Netdata charts and/or dimensions. - [Exporting (code only)](https://github.com/netdata/netdata/blob/master/exporting/send_internal_metrics.c): Gather - metrics on CPU utilization for the [exporting engine](/exporting/README.md), and specific metrics for each enabled + metrics on CPU utilization for + the [exporting engine](https://github.com/netdata/netdata/blob/master/exporting/README.md), and specific metrics for + each enabled exporting connector. 
- [Global statistics (code only)](https://github.com/netdata/netdata/blob/master/daemon/global_statistics.c): See metrics on the CPU utilization, network traffic, volume of web clients, API responses, database engine usage, and @@ -496,8 +607,10 @@ If you're interested in developing a new collector that you'd like to contribute the `go.d.plugin`. - [go.d.plugin](https://github.com/netdata/go.d.plugin): An orchestrator for data collection modules written in `go`. -- [python.d.plugin](python.d.plugin/README.md): An orchestrator for data collection modules written in `python` v2/v3. -- [charts.d.plugin](charts.d.plugin/README.md): An orchestrator for data collection modules written in `bash` v4+. +- [python.d.plugin](https://github.com/netdata/netdata/blob/master/collectors/python.d.plugin/README.md): An + orchestrator for data collection modules written in `python` v2/v3. +- [charts.d.plugin](https://github.com/netdata/netdata/blob/master/collectors/charts.d.plugin/README.md): An + orchestrator for data collection modules written in `bash` v4+. ## Third-party collectors @@ -509,13 +622,17 @@ default. To use a third-party collector, visit their GitHub/documentation page a In general the below steps should be sufficient to use a third party collector. -1. Download collector code file into [folder expected by Netdata](https://learn.netdata.cloud/docs/agent/collectors/plugins.d#environment-variables). -2. Download default collector configuration file into [folder expected by Netdata](https://learn.netdata.cloud/docs/agent/collectors/plugins.d#environment-variables). -3. [Edit configuration file](/docs/collect/enable-configure#configure-a-collector) from step 2 if required. -4. [Enable collector](/docs/collect/enable-configure#enable-a-collector-or-its-orchestrator). -5. [Restart Netdata](/docs/configure/start-stop-restart.md) +1. Download collector code file + into [folder expected by Netdata](https://github.com/netdata/netdata/blob/master/collectors/plugins.d/README.md#environment-variables). +2. Download default collector configuration file + into [folder expected by Netdata](https://github.com/netdata/netdata/blob/master/collectors/plugins.d/README.md#environment-variables). +3. [Edit configuration file](https://github.com/netdata/netdata/blob/master/docs/collect/enable-configure#configure-a-collector) + from step 2 if required. +4. [Enable collector](https://github.com/netdata/netdata/blob/master/docs/collect/enable-configure#enable-a-collector-or-its-orchestrator). +5. [Restart Netdata](https://github.com/netdata/netdata/blob/master/docs/configure/start-stop-restart.md) -For example below are the steps to enable the [Python ClickHouse collector](https://github.com/netdata/community/tree/main/collectors/python.d.plugin/clickhouse). +For example below are the steps to enable +the [Python ClickHouse collector](https://github.com/netdata/community/tree/main/collectors/python.d.plugin/clickhouse). ```bash # download python collector script to /usr/libexec/netdata/python.d/ @@ -538,7 +655,6 @@ $ sudo systemctl restart netdata - - [CyberPower UPS](https://github.com/HawtDogFlvrWtr/netdata_cyberpwrups_plugin): Polls CyberPower UPS data using PowerPanel® Personal Linux. - [Logged-in users](https://github.com/veksh/netdata-numsessions): Collect the number of currently logged-on users. @@ -549,9 +665,12 @@ $ sudo systemctl restart netdata - [Teamspeak 3](https://github.com/coraxx/netdata_ts3_plugin): Pulls active users and bandwidth from TeamSpeak 3 servers. 
- [SSH](https://github.com/Yaser-Amiri/netdata-ssh-module): Monitor failed authentication requests of an SSH server. -- [ClickHouse](https://github.com/netdata/community/tree/main/collectors/python.d.plugin/clickhouse): Monitor [ClickHouse](https://clickhouse.com/) database. +- [ClickHouse](https://github.com/netdata/community/tree/main/collectors/python.d.plugin/clickhouse): + Monitor [ClickHouse](https://clickhouse.com/) database. ## Etc -- [charts.d example](charts.d.plugin/example/README.md): An example `charts.d` collector. -- [python.d example](python.d.plugin/example/README.md): An example `python.d` collector. +- [charts.d example](https://github.com/netdata/netdata/blob/master/collectors/charts.d.plugin/example/README.md): An + example `charts.d` collector. +- [python.d example](https://github.com/netdata/netdata/blob/master/collectors/python.d.plugin/example/README.md): An + example `python.d` collector. diff --git a/collectors/README.md b/collectors/README.md index 9b1f71f397..91a4eeb449 100644 --- a/collectors/README.md +++ b/collectors/README.md @@ -11,42 +11,44 @@ learn_rel_path: "References/Collectors" # Collecting metrics Netdata can collect metrics from hundreds of different sources, be they internal data created by the system itself, or -external data created by services or applications. To see _all_ of the sources Netdata collects from, view our [list of -supported collectors](/collectors/COLLECTORS.md). +external data created by services or applications. To see _all_ of the sources Netdata collects from, view our +[list of supported collectors](https://github.com/netdata/netdata/blob/master/collectors/COLLECTORS.md). There are two essential points to understand about how collecting metrics works in Netdata: -- All collectors are **installed by default** with every installation of Netdata. You do not need to install - collectors manually to collect metrics from new sources. -- Upon startup, Netdata will **auto-detect** any application or service that has a - [collector](/collectors/COLLECTORS.md), as long as both the collector and the app/service are configured correctly. +- All collectors are **installed by default** with every installation of Netdata. You do not need to install + collectors manually to collect metrics from new sources. +- Upon startup, Netdata will **auto-detect** any application or service that has a + [collector](https://github.com/netdata/netdata/blob/master/collectors/COLLECTORS.md), as long as both the collector + and the app/service are configured correctly. Most users will want to enable a new Netdata collector for their app/service. For those details, see -our [collectors' configuration reference](/collectors/REFERENCE.md). +our [collectors' configuration reference](https://github.com/netdata/netdata/blob/master/collectors/REFERENCE.md). 
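In practice, enabling a collector follows the same `edit-config` workflow shown for the other collectors in this document. Below is a minimal sketch, assuming the default `/etc/netdata` config directory and a systemd-managed Agent; the `nginx` go.d module is only an illustrative example.

```bash
# Move into the Netdata config directory (replace the path if yours differs).
cd /etc/netdata

# Open the go.d orchestrator config and enable the module you need,
# for example by setting "nginx: yes".
sudo ./edit-config go.d.conf

# Restart the Agent so it re-runs auto-detection with the module enabled.
sudo systemctl restart netdata
```

Most modules are auto-detected without this step; explicitly enabling a module is mainly needed for collectors that ship disabled by default.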
## Take your next steps with collectors

-[Supported collectors list](/collectors/COLLECTORS.md)
+[Supported collectors list](https://github.com/netdata/netdata/blob/master/collectors/COLLECTORS.md)

-[Collectors configuration reference](/collectors/REFERENCE.md)
+[Collectors configuration reference](https://github.com/netdata/netdata/blob/master/collectors/REFERENCE.md)

## Guides

-[Monitor Nginx or Apache web server log files with Netdata](/docs/guides/collect-apache-nginx-web-logs.md)
+[Monitor Nginx or Apache web server log files with Netdata](https://github.com/netdata/netdata/blob/master/docs/guides/collect-apache-nginx-web-logs.md)

-[Monitor CockroachDB metrics with Netdata](/docs/guides/monitor-cockroachdb.md)
+[Monitor CockroachDB metrics with Netdata](https://github.com/netdata/netdata/blob/master/docs/guides/monitor-cockroachdb.md)

-[Monitor Unbound DNS servers with Netdata](/docs/guides/collect-unbound-metrics.md)
+[Monitor Unbound DNS servers with Netdata](https://github.com/netdata/netdata/blob/master/docs/guides/collect-unbound-metrics.md)

-[Monitor a Hadoop cluster with Netdata](/docs/guides/monitor-hadoop-cluster.md)
+[Monitor a Hadoop cluster with Netdata](https://github.com/netdata/netdata/blob/master/docs/guides/monitor-hadoop-cluster.md)

## Related features

-**[Dashboards](/web/README.md)**: Visualize your newly-collect metrics in real-time using Netdata's [built-in
-dashboard](/web/gui/README.md).
+**[Dashboards](https://github.com/netdata/netdata/blob/master/web/README.md)**: Visualize your newly-collected metrics in
+real-time using Netdata's [built-in dashboard](https://github.com/netdata/netdata/blob/master/web/gui/README.md).

-**[Exporting](/exporting/README.md)**: Extend our built-in [database engine](/database/engine/README.md), which supports
-long-term metrics storage, by archiving metrics to external databases like Graphite, Prometheus, MongoDB, TimescaleDB, and more.
-It can export metrics to multiple databases simultaneously.
+**[Exporting](https://github.com/netdata/netdata/blob/master/exporting/README.md)**: Extend our
+built-in [database engine](https://github.com/netdata/netdata/blob/master/database/engine/README.md), which supports
+long-term metrics storage, by archiving metrics to external databases like Graphite, Prometheus, MongoDB, TimescaleDB,
+and more. It can export metrics to multiple databases simultaneously.
diff --git a/collectors/REFERENCE.md b/collectors/REFERENCE.md
index 0793246589..e525b60ea9 100644
--- a/collectors/REFERENCE.md
+++ b/collectors/REFERENCE.md
@@ -23,7 +23,7 @@ independent processes in a variety of programming languages based on their purpo
 MySQL database, among many others.

 For most users, enabling individual collectors for the application/service you're interested in is far more important
-than knowing which plugin it uses. See our [collectors list](/collectors/COLLECTORS.md) to see whether your favorite app/service has
+than knowing which plugin it uses. See our [collectors list](https://github.com/netdata/netdata/blob/master/collectors/COLLECTORS.md) to see whether your favorite app/service has
 a collector, and then read the documentation for that specific collector to figure out how to enable it.

 There are three types of plugins:
@@ -35,7 +35,7 @@ There are three types of plugins:
   independent processes. They communicate with the daemon via pipes.
 - **Plugin orchestrators**, which are external plugins that instead support a number of **modules**. Modules are a type of collector.
We have a few plugin orchestrators available for those who want to develop their own collectors, - but focus most of our efforts on the [Go plugin](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/). + but focus most of our efforts on the [Go plugin](https://github.com/netdata/go.d.plugin/blob/master/README.md). ## Enable, configure, and disable modules @@ -169,5 +169,5 @@ through this, is to examine what other similar plugins do. ## Write a custom collector -You can add custom collectors by following the [external plugins documentation](/collectors/plugins.d/README.md). +You can add custom collectors by following the [external plugins documentation](https://github.com/netdata/netdata/blob/master/collectors/plugins.d/README.md). diff --git a/collectors/apps.plugin/README.md b/collectors/apps.plugin/README.md index 5a065dc497..ac0d349a23 100644 --- a/collectors/apps.plugin/README.md +++ b/collectors/apps.plugin/README.md @@ -66,8 +66,8 @@ Each of these sections provides the same number of charts: - Network - Sockets open (`apps.sockets`) -In addition, if the [eBPF collector](/collectors/ebpf.plugin/README.md) is running, your dashboard will also show an -additional [list of charts](/collectors/ebpf.plugin/README.md#integration-with-appsplugin) using low-level Linux +In addition, if the [eBPF collector](https://github.com/netdata/netdata/blob/master/collectors/ebpf.plugin/README.md) is running, your dashboard will also show an +additional [list of charts](https://github.com/netdata/netdata/blob/master/collectors/ebpf.plugin/README.md#integration-with-appsplugin) using low-level Linux metrics. The above are reported: @@ -163,10 +163,10 @@ There are a few command line options you can pass to `apps.plugin`. The list of ### Integration with eBPF If you don't see charts under the **eBPF syscall** or **eBPF net** sections, you should edit your -[`ebpf.d.conf`](/collectors/ebpf.plugin/README.md#configure-the-ebpf-collector) file to ensure the eBPF program is enabled. +[`ebpf.d.conf`](https://github.com/netdata/netdata/blob/master/collectors/ebpf.plugin/README.md#configure-the-ebpf-collector) file to ensure the eBPF program is enabled. Also see our [guide on troubleshooting apps with eBPF -metrics](/docs/guides/troubleshoot/monitor-debug-applications-ebpf.md) for ideas on how to interpret these charts in a +metrics](https://github.com/netdata/netdata/blob/master/docs/guides/troubleshoot/monitor-debug-applications-ebpf.md) for ideas on how to interpret these charts in a few scenarios. ## Permissions @@ -237,7 +237,7 @@ Examples below for process group `sql`: - Open Pipes ![image](https://registry.my-netdata.io/api/v1/badge.svg?chart=apps.pipes&dimensions=sql&value_color=green=0%7Cred) - Open Sockets ![image](https://registry.my-netdata.io/api/v1/badge.svg?chart=apps.sockets&dimensions=sql&value_color=green%3E=3%7Cred) -For more information about badges check [Generating Badges](/web/api/badges/README.md) +For more information about badges check [Generating Badges](https://github.com/netdata/netdata/blob/master/web/api/badges/README.md) ## Comparison with console tools diff --git a/collectors/cgroups.plugin/README.md b/collectors/cgroups.plugin/README.md index 239ae6a821..e58f1ba04e 100644 --- a/collectors/cgroups.plugin/README.md +++ b/collectors/cgroups.plugin/README.md @@ -78,7 +78,7 @@ currently unsupported when using unified cgroups. 
### enabled cgroups To provide a sane default, Netdata uses the -following [pattern list](https://learn.netdata.cloud/docs/agent/libnetdata/simple_pattern): +following [pattern list](https://github.com/netdata/netdata/blob/master/libnetdata/simple_pattern/README.md): - checks the pattern against the path of the cgroup @@ -309,4 +309,4 @@ cannot find, but immediately: - I/O full pressure Network interfaces are monitored by means of -the [proc plugin](/collectors/proc.plugin/README.md#monitored-network-interface-metrics). +the [proc plugin](https://github.com/netdata/netdata/blob/master/collectors/proc.plugin/README.md#monitored-network-interface-metrics). diff --git a/collectors/charts.d.plugin/README.md b/collectors/charts.d.plugin/README.md index a74d59c2f5..092a3f0272 100644 --- a/collectors/charts.d.plugin/README.md +++ b/collectors/charts.d.plugin/README.md @@ -64,11 +64,11 @@ For a module called `X`, the following criteria must be met: the collector cannot be used). - `X_create()` - creates the Netdata charts, following the standard Netdata plugin guides as described in - **[External Plugins](/collectors/plugins.d/README.md)** (commands `CHART` and `DIMENSION`). + **[External Plugins](https://github.com/netdata/netdata/blob/master/collectors/plugins.d/README.md)** (commands `CHART` and `DIMENSION`). The return value does matter: 0 = OK, 1 = FAILED. - `X_update()` - collects the values for the defined charts, following the standard Netdata plugin guides - as described in **[External Plugins](/collectors/plugins.d/README.md)** (commands `BEGIN`, `SET`, `END`). + as described in **[External Plugins](https://github.com/netdata/netdata/blob/master/collectors/plugins.d/README.md)** (commands `BEGIN`, `SET`, `END`). The return value also matters: 0 = OK, 1 = FAILED. 5. The following global variables are available to be set: @@ -76,7 +76,7 @@ For a module called `X`, the following criteria must be met: The module script may use more functions or variables. But all of them must begin with `X_`. -The standard Netdata plugin variables are also available (check **[External Plugins](/collectors/plugins.d/README.md)**). +The standard Netdata plugin variables are also available (check **[External Plugins](https://github.com/netdata/netdata/blob/master/collectors/plugins.d/README.md)**). ### X_check() @@ -90,7 +90,7 @@ connect to a local mysql database to find out if it can read the values it needs ### X_create() The purpose of the BASH function `X_create()` is to create the charts and dimensions using the standard Netdata -plugin guides (**[External Plugins](/collectors/plugins.d/README.md)**). +plugin guides (**[External Plugins](https://github.com/netdata/netdata/blob/master/collectors/plugins.d/README.md)**). `X_create()` will be called just once and only after `X_check()` was successful. You can however call it yourself when there is need for it (for example to add a new dimension to an existing chart). @@ -100,7 +100,7 @@ A non-zero return value will disable the collector. ### X_update() `X_update()` will be called repeatedly every `X_update_every` seconds, to collect new values and send them to Netdata, -following the Netdata plugin guides (**[External Plugins](/collectors/plugins.d/README.md)**). +following the Netdata plugin guides (**[External Plugins](https://github.com/netdata/netdata/blob/master/collectors/plugins.d/README.md)**). The function will be called with one parameter: microseconds since the last time it was run. 
This value should be appended to the `BEGIN` statement of every chart updated by the collector script. diff --git a/collectors/charts.d.plugin/ap/README.md b/collectors/charts.d.plugin/ap/README.md index 15214f52c8..03ab6d13e1 100644 --- a/collectors/charts.d.plugin/ap/README.md +++ b/collectors/charts.d.plugin/ap/README.md @@ -86,7 +86,7 @@ Station 40:b8:37:5a:ed:5e (on wlan0) ## Configuration Edit the `charts.d/ap.conf` configuration file using `edit-config` from the Netdata [config -directory](/docs/configure/nodes.md), which is typically at `/etc/netdata`. +directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is typically at `/etc/netdata`. ```bash cd /etc/netdata # Replace this path with your Netdata config directory, if different diff --git a/collectors/charts.d.plugin/apcupsd/README.md b/collectors/charts.d.plugin/apcupsd/README.md index abeaa1c201..602977be1c 100644 --- a/collectors/charts.d.plugin/apcupsd/README.md +++ b/collectors/charts.d.plugin/apcupsd/README.md @@ -14,7 +14,7 @@ Monitors different APC UPS models and retrieves status information using `apcacc ## Configuration Edit the `charts.d/apcupsd.conf` configuration file using `edit-config` from the Netdata [config -directory](/docs/configure/nodes.md), which is typically at `/etc/netdata`. +directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is typically at `/etc/netdata`. ```bash cd /etc/netdata # Replace this path with your Netdata config directory, if different diff --git a/collectors/charts.d.plugin/libreswan/README.md b/collectors/charts.d.plugin/libreswan/README.md index 8c52062fdf..7c4eabcf96 100644 --- a/collectors/charts.d.plugin/libreswan/README.md +++ b/collectors/charts.d.plugin/libreswan/README.md @@ -25,7 +25,7 @@ The following charts are created, **per tunnel**: ## Configuration Edit the `charts.d/libreswan.conf` configuration file using `edit-config` from the Netdata [config -directory](/docs/configure/nodes.md), which is typically at `/etc/netdata`. +directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is typically at `/etc/netdata`. ```bash cd /etc/netdata # Replace this path with your Netdata config directory, if different diff --git a/collectors/charts.d.plugin/nut/README.md b/collectors/charts.d.plugin/nut/README.md index cc5f3d2736..7bb8a55075 100644 --- a/collectors/charts.d.plugin/nut/README.md +++ b/collectors/charts.d.plugin/nut/README.md @@ -54,7 +54,7 @@ The following charts will be created: ## Configuration Edit the `charts.d/nut.conf` configuration file using `edit-config` from the Netdata [config -directory](/docs/configure/nodes.md), which is typically at `/etc/netdata`. +directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is typically at `/etc/netdata`. ```bash cd /etc/netdata # Replace this path with your Netdata config directory, if different diff --git a/collectors/charts.d.plugin/opensips/README.md b/collectors/charts.d.plugin/opensips/README.md index 973b2ce846..74624c7f11 100644 --- a/collectors/charts.d.plugin/opensips/README.md +++ b/collectors/charts.d.plugin/opensips/README.md @@ -12,7 +12,7 @@ learn_rel_path: "References/Collectors references/Networking" ## Configuration Edit the `charts.d/opensips.conf` configuration file using `edit-config` from the Netdata [config -directory](/docs/configure/nodes.md), which is typically at `/etc/netdata`. 
+directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is typically at `/etc/netdata`. ```bash cd /etc/netdata # Replace this path with your Netdata config directory, if different diff --git a/collectors/charts.d.plugin/sensors/README.md b/collectors/charts.d.plugin/sensors/README.md index 5029425254..142ae14aaa 100644 --- a/collectors/charts.d.plugin/sensors/README.md +++ b/collectors/charts.d.plugin/sensors/README.md @@ -31,7 +31,7 @@ One chart for every sensor chip found and each of the above will be created. ## Enable the collector The `sensors` collector is disabled by default. To enable it, edit the `charts.d.conf` file using `edit-config` from the -Netdata [config directory](/docs/configure/nodes.md), which is typically at `/etc/netdata`. +Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is typically at `/etc/netdata`. ```bash cd /etc/netdata # Replace this path with your Netdata config directory, if different @@ -48,7 +48,7 @@ sensors=force ## Configuration Edit the `charts.d/sensors.conf` configuration file using `edit-config` from the -Netdata [config directory](/docs/configure/nodes.md), which is typically at `/etc/netdata`. +Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is typically at `/etc/netdata`. ```bash cd /etc/netdata # Replace this path with your Netdata config directory, if different diff --git a/collectors/diskspace.plugin/README.md b/collectors/diskspace.plugin/README.md index 5095f4e4f3..6d1ec7ca29 100644 --- a/collectors/diskspace.plugin/README.md +++ b/collectors/diskspace.plugin/README.md @@ -42,6 +42,6 @@ Charts can be enabled/disabled for every mount separately: # inodes usage = auto ``` -> for disks performance monitoring, see the `proc` plugin, [here](/collectors/proc.plugin/README.md#monitoring-disks) +> for disks performance monitoring, see the `proc` plugin, [here](https://github.com/netdata/netdata/blob/master/collectors/proc.plugin/README.md#monitoring-disks) diff --git a/collectors/ebpf.plugin/README.md b/collectors/ebpf.plugin/README.md index f9844e901d..deedf4d795 100644 --- a/collectors/ebpf.plugin/README.md +++ b/collectors/ebpf.plugin/README.md @@ -15,7 +15,7 @@ The Netdata Agent provides many [eBPF](https://ebpf.io/what-is-ebpf/) programs t > ❗ eBPF monitoring only works on Linux systems and with specific Linux kernels, including all kernels newer than `4.11.0`, and all kernels on CentOS 7.6 or later. For kernels older than `4.11.0`, improved support is in active development. This document provides comprehensive details about the `ebpf.plugin`. -For hands-on configuration and troubleshooting tips see our [tutorial on troubleshooting apps with eBPF metrics](/docs/guides/troubleshoot/monitor-debug-applications-ebpf.md). +For hands-on configuration and troubleshooting tips see our [tutorial on troubleshooting apps with eBPF metrics](https://github.com/netdata/netdata/blob/master/docs/guides/troubleshoot/monitor-debug-applications-ebpf.md).
An example of VFS charts, made possible by the eBPF collector plugin @@ -44,12 +44,12 @@ If your Agent is v1.22 or older, you may to enable the collector yourself. To enable or disable the entire eBPF collector: -1. Navigate to the [Netdata config directory](/docs/configure/nodes.md#the-netdata-config-directory). +1. Navigate to the [Netdata config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory). ```bash cd /etc/netdata ``` -2. Use the [`edit-config`](/docs/configure/nodes.md#use-edit-config-to-edit-configuration-files) script to edit `netdata.conf`. +2. Use the [`edit-config`](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#use-edit-config-to-edit-configuration-files) script to edit `netdata.conf`. ```bash ./edit-config netdata.conf @@ -69,11 +69,11 @@ You can configure the eBPF collector's behavior to fine-tune which metrics you r To edit the `ebpf.d.conf`: -1. Navigate to the [Netdata config directory](/docs/configure/nodes.md#the-netdata-config-directory). +1. Navigate to the [Netdata config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory). ```bash cd /etc/netdata ``` -2. Use the [`edit-config`](/docs/configure/nodes.md#use-edit-config-to-edit-configuration-files) script to edit [`ebpf.d.conf`](https://github.com/netdata/netdata/blob/master/collectors/ebpf.plugin/ebpf.d.conf). +2. Use the [`edit-config`](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#use-edit-config-to-edit-configuration-files) script to edit [`ebpf.d.conf`](https://github.com/netdata/netdata/blob/master/collectors/ebpf.plugin/ebpf.d.conf). ```bash ./edit-config ebpf.d.conf @@ -105,7 +105,7 @@ accepts the following values: #### Integration with `apps.plugin` The eBPF collector also creates charts for each running application through an integration with the -[`apps.plugin`](/collectors/apps.plugin/README.md). This integration helps you understand how specific applications +[`apps.plugin`](https://github.com/netdata/netdata/blob/master/collectors/apps.plugin/README.md). This integration helps you understand how specific applications interact with the Linux kernel. If you want to enable `apps.plugin` integration, change the "apps" setting to "yes". @@ -123,7 +123,7 @@ it runs. #### Integration with `cgroups.plugin` The eBPF collector also creates charts for each cgroup through an integration with the -[`cgroups.plugin`](/collectors/cgroups.plugin/README.md). This integration helps you understand how a specific cgroup +[`cgroups.plugin`](https://github.com/netdata/netdata/blob/master/collectors/cgroups.plugin/README.md). This integration helps you understand how a specific cgroup interacts with the Linux kernel. The integration with `cgroups.plugin` is disabled by default to avoid creating overhead on your system. If you want to @@ -245,7 +245,7 @@ The eBPF collector enables and runs the following eBPF programs by default: You can also enable the following eBPF programs: - `cachestat`: Netdata's eBPF data collector creates charts about the memory page cache. When the integration with - [`apps.plugin`](/collectors/apps.plugin/README.md) is enabled, this collector creates charts for the whole host _and_ + [`apps.plugin`](https://github.com/netdata/netdata/blob/master/collectors/apps.plugin/README.md) is enabled, this collector creates charts for the whole host _and_ for each application. 
- `dcstat` : This eBPF program creates charts that show information about file access using directory cache. It appends `kprobes` for `lookup_fast()` and `d_lookup()` to identify if files are inside directory cache, outside and files are @@ -262,11 +262,11 @@ You can configure each thread of the eBPF data collector. This allows you to ove To configure an eBPF thread: -1. Navigate to the [Netdata config directory](/docs/configure/nodes.md#the-netdata-config-directory). +1. Navigate to the [Netdata config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory). ```bash cd /etc/netdata ``` -2. Use the [`edit-config`](/docs/configure/nodes.md#use-edit-config-to-edit-configuration-files) script to edit a thread configuration file. The following configuration files are available: +2. Use the [`edit-config`](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#use-edit-config-to-edit-configuration-files) script to edit a thread configuration file. The following configuration files are available: - `network.conf`: Configuration for the [`network` thread](#network-configuration). This config file overwrites the global options and also lets you specify which network the eBPF collector monitors. @@ -305,7 +305,7 @@ You can configure the information shown on `outbound` and `inbound` charts with When you define a `ports` setting, Netdata will collect network metrics for that specific port. For example, if you write `ports = 19999`, Netdata will collect only connections for itself. The `hostnames` setting accepts -[simple patterns](/libnetdata/simple_pattern/README.md). The `ports`, and `ips` settings accept negation (`!`) to deny +[simple patterns](https://github.com/netdata/netdata/blob/master/libnetdata/simple_pattern/README.md). The `ports`, and `ips` settings accept negation (`!`) to deny specific values or asterisk alone to define all values. In the above example, Netdata will collect metrics for all ports between 1 and 443, with the exception of 53 (domain) @@ -882,7 +882,7 @@ significantly increases kernel memory usage by several hundred MB. If your node is experiencing high memory usage and there is no obvious culprit to be found in the `apps.mem` chart, consider testing for high kernel memory usage by [disabling eBPF monitoring](#configuring-ebpfplugin). Next, -[restart Netdata](/docs/configure/start-stop-restart.md) with `sudo systemctl restart netdata` to see if system memory +[restart Netdata](https://github.com/netdata/netdata/blob/master/docs/configure/start-stop-restart.md) with `sudo systemctl restart netdata` to see if system memory usage (see the `system.ram` chart) has dropped significantly. Beginning with `v1.31`, kernel memory usage is configurable via the [`pid table size` setting](#ebpf-load-mode) diff --git a/collectors/plugins.d/README.md b/collectors/plugins.d/README.md index b683d8774d..ab2c232adf 100644 --- a/collectors/plugins.d/README.md +++ b/collectors/plugins.d/README.md @@ -16,18 +16,18 @@ from external processes, thus allowing Netdata to use **external plugins**. 
|plugin|language|O/S|description| |:----:|:------:|:-:|:----------| -|[apps.plugin](/collectors/apps.plugin/README.md)|`C`|linux, freebsd|monitors the whole process tree on Linux and FreeBSD and breaks down system resource usage by **process**, **user** and **user group**.| -|[charts.d.plugin](/collectors/charts.d.plugin/README.md)|`BASH`|all|a **plugin orchestrator** for data collection modules written in `BASH` v4+.| -|[cups.plugin](/collectors/cups.plugin/README.md)|`C`|all|monitors **CUPS**| +|[apps.plugin](https://github.com/netdata/netdata/blob/master/collectors/apps.plugin/README.md)|`C`|linux, freebsd|monitors the whole process tree on Linux and FreeBSD and breaks down system resource usage by **process**, **user** and **user group**.| +|[charts.d.plugin](https://github.com/netdata/netdata/blob/master/collectors/charts.d.plugin/README.md)|`BASH`|all|a **plugin orchestrator** for data collection modules written in `BASH` v4+.| +|[cups.plugin](https://github.com/netdata/netdata/blob/master/collectors/cups.plugin/README.md)|`C`|all|monitors **CUPS**| |[ebpf.plugin](https://github.com/netdata/netdata/blob/master/collectors/ebpf.plugin/README.md)|`C`|linux|monitors different metrics on environments using kernel internal functions.| |[go.d.plugin](https://github.com/netdata/go.d.plugin/blob/master/README.md)|`GO`|all|collects metrics from the system, applications, or third-party APIs.| -|[ioping.plugin](/collectors/ioping.plugin/README.md)|`C`|all|measures disk latency.| -|[freeipmi.plugin](/collectors/freeipmi.plugin/README.md)|`C`|linux|collects metrics from enterprise hardware sensors, on Linux servers.| -|[nfacct.plugin](/collectors/nfacct.plugin/README.md)|`C`|linux|collects netfilter firewall, connection tracker and accounting metrics using `libmnl` and `libnetfilter_acct`.| -|[xenstat.plugin](/collectors/xenstat.plugin/README.md)|`C`|linux|collects XenServer and XCP-ng metrics using `lxenstat`.| -|[perf.plugin](/collectors/perf.plugin/README.md)|`C`|linux|collects CPU performance metrics using performance monitoring units (PMU).| -|[python.d.plugin](/collectors/python.d.plugin/README.md)|`python`|all|a **plugin orchestrator** for data collection modules written in `python` v2 or v3 (both are supported).| -|[slabinfo.plugin](/collectors/slabinfo.plugin/README.md)|`C`|linux|collects kernel internal cache objects (SLAB) metrics.| +|[ioping.plugin](https://github.com/netdata/netdata/blob/master/collectors/ioping.plugin/README.md)|`C`|all|measures disk latency.| +|[freeipmi.plugin](https://github.com/netdata/netdata/blob/master/collectors/freeipmi.plugin/README.md)|`C`|linux|collects metrics from enterprise hardware sensors, on Linux servers.| +|[nfacct.plugin](https://github.com/netdata/netdata/blob/master/collectors/nfacct.plugin/README.md)|`C`|linux|collects netfilter firewall, connection tracker and accounting metrics using `libmnl` and `libnetfilter_acct`.| +|[xenstat.plugin](https://github.com/netdata/netdata/blob/master/collectors/xenstat.plugin/README.md)|`C`|linux|collects XenServer and XCP-ng metrics using `lxenstat`.| +|[perf.plugin](https://github.com/netdata/netdata/blob/master/collectors/perf.plugin/README.md)|`C`|linux|collects CPU performance metrics using performance monitoring units (PMU).| +|[python.d.plugin](https://github.com/netdata/netdata/blob/master/collectors/python.d.plugin/README.md)|`python`|all|a **plugin orchestrator** for data collection modules written in `python` v2 or v3 (both are supported).| 
+|[slabinfo.plugin](https://github.com/netdata/netdata/blob/master/collectors/slabinfo.plugin/README.md)|`C`|linux|collects kernel internal cache objects (SLAB) metrics.| Plugin orchestrators may also be described as **modular plugins**. They are modular since they accept custom made modules to be included. Writing modules for these plugins is easier than accessing the native Netdata API directly. You will find modules already available for each orchestrator under the directory of the particular modular plugin (e.g. under python.d.plugin for the python orchestrator). Each of these modular plugins has each own methods for defining modules. Please check the examples and their documentation. @@ -508,12 +508,12 @@ or do not output the line at all. ## Modular Plugins 1. **python**, use `python.d.plugin`, there are many examples in the [python.d - directory](/collectors/python.d.plugin/README.md) + directory](https://github.com/netdata/netdata/blob/master/collectors/python.d.plugin/README.md) python is ideal for Netdata plugins. It is a simple, yet powerful way to collect data, it has a very small memory footprint, although it is not the most CPU efficient way to do it. 2. **BASH**, use `charts.d.plugin`, there are many examples in the [charts.d - directory](/collectors/charts.d.plugin/README.md) + directory](https://github.com/netdata/netdata/blob/master/collectors/charts.d.plugin/README.md) BASH is the simplest scripting language for collecting values. It is the less efficient though in terms of CPU resources. You can use it to collect data quickly, but extensive use of it might use a lot of system resources. diff --git a/collectors/proc.plugin/README.md b/collectors/proc.plugin/README.md index fc2ade8777..f035506044 100644 --- a/collectors/proc.plugin/README.md +++ b/collectors/proc.plugin/README.md @@ -404,7 +404,7 @@ You can set the following values for each configuration option: There are several alarms defined in `health.d/net.conf`. -The tricky ones are `inbound packets dropped` and `inbound packets dropped ratio`. They have quite a strict policy so that they warn users about possible issues. These alarms can be annoying for some network configurations. It is especially true for some bonding configurations if an interface is a child or a bonding interface itself. If it is expected to have a certain number of drops on an interface for a certain network configuration, a separate alarm with different triggering thresholds can be created or the existing one can be disabled for this specific interface. It can be done with the help of the [families](/health/REFERENCE.md#alarm-line-families) line in the alarm configuration. For example, if you want to disable the `inbound packets dropped` alarm for `eth0`, set `families: !eth0 *` in the alarm definition for `template: inbound_packets_dropped`. +The tricky ones are `inbound packets dropped` and `inbound packets dropped ratio`. They have quite a strict policy so that they warn users about possible issues. These alarms can be annoying for some network configurations. It is especially true for some bonding configurations if an interface is a child or a bonding interface itself. If it is expected to have a certain number of drops on an interface for a certain network configuration, a separate alarm with different triggering thresholds can be created or the existing one can be disabled for this specific interface. 
It can be done with the help of the [families](https://github.com/netdata/netdata/blob/master/health/REFERENCE.md#alarm-line-families) line in the alarm configuration. For example, if you want to disable the `inbound packets dropped` alarm for `eth0`, set `families: !eth0 *` in the alarm definition for `template: inbound_packets_dropped`. #### configuration diff --git a/collectors/python.d.plugin/README.md b/collectors/python.d.plugin/README.md index b8ea50b911..b6d658fae2 100644 --- a/collectors/python.d.plugin/README.md +++ b/collectors/python.d.plugin/README.md @@ -90,7 +90,7 @@ plugin](https://raw.githubusercontent.com/netdata/netdata/master/collectors/pyth Netdata (as opposed to having to install Netdata from source again with your new changes) to can copy over the relevant file to where Netdata expects it and then either `sudo systemctl restart netdata` to have it be picked up and used by Netdata or you can just run the updated collector in debug mode by following a process like below (this assumes you have -[installed Netdata from a GitHub fork](https://learn.netdata.cloud/docs/agent/packaging/installer/methods/manual) you +[installed Netdata from a GitHub fork](https://github.com/netdata/netdata/blob/master/packaging/installer/methods/manual.md) you have made to do your development on). ```bash @@ -129,7 +129,7 @@ CHART = { ]} ``` -All names are better explained in the [External Plugins](/collectors/plugins.d/README.md) section. +All names are better explained in the [External Plugins](https://github.com/netdata/netdata/blob/master/collectors/plugins.d/README.md) section. Parameters like `priority` and `update_every` are handled by `python.d.plugin`. ### `Service` class @@ -231,7 +231,7 @@ For additional security it uses python `subprocess.Popen` (without `shell=True` _Examples: `apache`, `nginx`, `tomcat`_ -_Multiple Endpoints (urls) Examples: [`rabbitmq`](/collectors/python.d.plugin/rabbitmq/README.md) (simpler). +_Multiple Endpoints (urls) Examples: [`rabbitmq`](https://github.com/netdata/netdata/blob/master/collectors/python.d.plugin/rabbitmq/README.md) (simpler). _Variables from config file_: `url`, `user`, `pass`. diff --git a/collectors/python.d.plugin/adaptec_raid/README.md b/collectors/python.d.plugin/adaptec_raid/README.md index 41c762e40c..90ef8fa3c0 100644 --- a/collectors/python.d.plugin/adaptec_raid/README.md +++ b/collectors/python.d.plugin/adaptec_raid/README.md @@ -55,7 +55,7 @@ systemctl restart netdata.service ## Enable the collector The `adaptec_raid` collector is disabled by default. To enable it, use `edit-config` from the -Netdata [config directory](/docs/configure/nodes.md), which is typically at `/etc/netdata`, to edit the `python.d.conf` +Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is typically at `/etc/netdata`, to edit the `python.d.conf` file. ```bash @@ -64,12 +64,12 @@ sudo ./edit-config python.d.conf ``` Change the value of the `adaptec_raid` setting to `yes`. Save the file and restart the Netdata Agent with `sudo -systemctl restart netdata`, or the [appropriate method](/docs/configure/start-stop-restart.md) for your system. +systemctl restart netdata`, or the [appropriate method](https://github.com/netdata/netdata/blob/master/docs/configure/start-stop-restart.md) for your system. ## Configuration Edit the `python.d/adaptec_raid.conf` configuration file using `edit-config` from the -Netdata [config directory](/docs/configure/nodes.md), which is typically at `/etc/netdata`. 
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is typically at `/etc/netdata`. ```bash cd /etc/netdata # Replace this path with your Netdata config directory, if different diff --git a/collectors/python.d.plugin/alarms/README.md b/collectors/python.d.plugin/alarms/README.md index 00e636a012..4804bd0d74 100644 --- a/collectors/python.d.plugin/alarms/README.md +++ b/collectors/python.d.plugin/alarms/README.md @@ -26,7 +26,7 @@ Below is an example of the chart produced when running `stress-ng --all 2` for a ## Configuration -Enable the collector and [restart Netdata](/docs/configure/start-stop-restart.md). +Enable the collector and [restart Netdata](https://github.com/netdata/netdata/blob/master/docs/configure/start-stop-restart.md). ```bash cd /etc/netdata/ @@ -36,7 +36,7 @@ sudo systemctl restart netdata ``` If needed, edit the `python.d/alarms.conf` configuration file using `edit-config` from the your agent's [config -directory](/docs/configure/nodes.md), which is usually at `/etc/netdata`. +directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is usually at `/etc/netdata`. ```bash cd /etc/netdata # Replace this path with your Netdata config directory, if different diff --git a/collectors/python.d.plugin/am2320/README.md b/collectors/python.d.plugin/am2320/README.md index b6516c547b..070e8eb38e 100644 --- a/collectors/python.d.plugin/am2320/README.md +++ b/collectors/python.d.plugin/am2320/README.md @@ -24,7 +24,7 @@ It produces the following charts: ## Configuration Edit the `python.d/am2320.conf` configuration file using `edit-config` from the Netdata [config -directory](/docs/configure/nodes.md), which is typically at `/etc/netdata`. +directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is typically at `/etc/netdata`. ```bash cd /etc/netdata # Replace this path with your Netdata config directory, if different diff --git a/collectors/python.d.plugin/anomalies/README.md b/collectors/python.d.plugin/anomalies/README.md index 96df9b3db6..7c59275f96 100644 --- a/collectors/python.d.plugin/anomalies/README.md +++ b/collectors/python.d.plugin/anomalies/README.md @@ -11,7 +11,7 @@ learn_rel_path: "References/Collectors references/Misc" # Anomaly detection with Netdata -**Note**: Check out the [Netdata Anomaly Advisor](https://learn.netdata.cloud/docs/cloud/insights/anomaly-advisor) for a more native anomaly detection experience within Netdata. +**Note**: Check out the [Netdata Anomaly Advisor](https://github.com/netdata/netdata/blob/master/docs/cloud/insights/anomaly-advisor.mdx) for a more native anomaly detection experience within Netdata. This collector uses the Python [PyOD](https://pyod.readthedocs.io/en/latest/index.html) library to perform unsupervised [anomaly detection](https://en.wikipedia.org/wiki/Anomaly_detection) on your Netdata charts and/or dimensions. @@ -74,7 +74,7 @@ The configuration for the anomalies collector defines how it will behave on your _**Note**: If you are unsure about any of the below configuration options then it's best to just ignore all this and leave the `anomalies.conf` file alone to begin with. 
Then you can return to it later if you would like to tune things a bit more once the collector is running for a while and you have a feeling for its performance on your node._ Edit the `python.d/anomalies.conf` configuration file using `edit-config` from the your agent's [config -directory](/docs/configure/nodes.md), which is usually at `/etc/netdata`. +directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is usually at `/etc/netdata`. ```bash cd /etc/netdata # Replace this path with your Netdata config directory, if different @@ -220,7 +220,7 @@ If you would like to go deeper on what exactly the anomalies collector is doing ## Notes -- Python 3 is required as the [`netdata-pandas`](https://github.com/netdata/netdata-pandas) package uses Python async libraries ([asks](https://pypi.org/project/asks/) and [trio](https://pypi.org/project/trio/)) to make asynchronous calls to the [Netdata REST API](https://learn.netdata.cloud/docs/agent/web/api) to get the required data for each chart. +- Python 3 is required as the [`netdata-pandas`](https://github.com/netdata/netdata-pandas) package uses Python async libraries ([asks](https://pypi.org/project/asks/) and [trio](https://pypi.org/project/trio/)) to make asynchronous calls to the [Netdata REST API](https://github.com/netdata/netdata/blob/master/web/api/README.md) to get the required data for each chart. - Python 3 is also required for the underlying ML libraries of [numba](https://pypi.org/project/numba/), [scikit-learn](https://pypi.org/project/scikit-learn/), and [PyOD](https://pypi.org/project/pyod/). - It may take a few hours or so (depending on your choice of `train_secs_n`) for the collector to 'settle' into it's typical behaviour in terms of the trained models and probabilities you will see in the normal running of your node. - As this collector does most of the work in Python itself, with [PyOD](https://pyod.readthedocs.io/en/latest/) leveraging [numba](https://numba.pydata.org/) under the hood, you may want to try it out first on a test or development system to get a sense of its performance characteristics on a node similar to where you would like to use it. @@ -235,7 +235,7 @@ If you would like to go deeper on what exactly the anomalies collector is doing - If you activate this collector on a fresh node, it might take a little while to build up enough data to calculate a realistic and useful model. - Some models like `iforest` can be comparatively expensive (on same n1-standard-2 system above ~2s runtime during predict, ~40s training time, ~50% cpu on both train and predict) so if you would like to use it you might be advised to set a relatively high `update_every` maybe 10, 15 or 30 in `anomalies.conf`. - Setting a higher `train_every_n` and `update_every` is an easy way to devote less resources on the node to anomaly detection. Specifying less charts and a lower `train_n_secs` will also help reduce resources at the expense of covering less charts and maybe a more noisy model if you set `train_n_secs` to be too small for how your node tends to behave. -- If you would like to enable this on a Raspberry Pi, then check out [this guide](https://learn.netdata.cloud/guides/monitor/raspberry-pi-anomaly-detection) which will guide you through first installing LLVM. +- If you would like to enable this on a Raspberry Pi, then check out [this guide](https://github.com/netdata/netdata/blob/master/docs/guides/monitor/raspberry-pi-anomaly-detection.md) which will guide you through first installing LLVM. 
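If you want a concrete starting point for reducing the collector's footprint, the options discussed in these notes can be combined in `python.d/anomalies.conf`. The fragment below is only a sketch: the job name and the values are illustrative, not recommended defaults.

```yaml
# illustrative sketch only: larger intervals mean fewer resources but coarser results
anomalies:
    update_every: 30      # predict every 30 seconds instead of every second
    train_every_n: 1800   # retrain the models much less often
    train_n_secs: 14400   # train each model on 4 hours of history
```

As with any python.d module, restart the Netdata Agent after editing the file so the new settings take effect.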
## Useful links and further reading diff --git a/collectors/python.d.plugin/beanstalk/README.md b/collectors/python.d.plugin/beanstalk/README.md index 76e9612671..7e7f30de9a 100644 --- a/collectors/python.d.plugin/beanstalk/README.md +++ b/collectors/python.d.plugin/beanstalk/README.md @@ -115,7 +115,7 @@ Provides server and tube-level statistics. ## Configuration Edit the `python.d/beanstalk.conf` configuration file using `edit-config` from the Netdata [config -directory](/docs/configure/nodes.md), which is typically at `/etc/netdata`. +directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is typically at `/etc/netdata`. ```bash cd /etc/netdata # Replace this path with your Netdata config directory, if different diff --git a/collectors/python.d.plugin/bind_rndc/README.md b/collectors/python.d.plugin/bind_rndc/README.md index b25be2d50c..e870018841 100644 --- a/collectors/python.d.plugin/bind_rndc/README.md +++ b/collectors/python.d.plugin/bind_rndc/README.md @@ -61,7 +61,7 @@ It produces: ## Configuration Edit the `python.d/bind_rndc.conf` configuration file using `edit-config` from the Netdata [config -directory](/docs/configure/nodes.md), which is typically at `/etc/netdata`. +directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is typically at `/etc/netdata`. ```bash cd /etc/netdata # Replace this path with your Netdata config directory, if different diff --git a/collectors/python.d.plugin/boinc/README.md b/collectors/python.d.plugin/boinc/README.md index 16526ad927..149d37ca12 100644 --- a/collectors/python.d.plugin/boinc/README.md +++ b/collectors/python.d.plugin/boinc/README.md @@ -16,7 +16,7 @@ It provides charts tracking the total number of tasks and active tasks, as well ## Configuration Edit the `python.d/boinc.conf` configuration file using `edit-config` from the Netdata [config -directory](/docs/configure/nodes.md), which is typically at `/etc/netdata`. +directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is typically at `/etc/netdata`. ```bash cd /etc/netdata # Replace this path with your Netdata config directory, if different diff --git a/collectors/python.d.plugin/ceph/README.md b/collectors/python.d.plugin/ceph/README.md index e06ed1db36..e7d0f51e24 100644 --- a/collectors/python.d.plugin/ceph/README.md +++ b/collectors/python.d.plugin/ceph/README.md @@ -31,7 +31,7 @@ Monitors the ceph cluster usage and consumption data of a server, and produces: ## Configuration Edit the `python.d/ceph.conf` configuration file using `edit-config` from the Netdata [config -directory](/docs/configure/nodes.md), which is typically at `/etc/netdata`. +directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is typically at `/etc/netdata`. ```bash cd /etc/netdata # Replace this path with your Netdata config directory, if different diff --git a/collectors/python.d.plugin/changefinder/README.md b/collectors/python.d.plugin/changefinder/README.md index dc0c475ed5..326a69dd5d 100644 --- a/collectors/python.d.plugin/changefinder/README.md +++ b/collectors/python.d.plugin/changefinder/README.md @@ -97,7 +97,7 @@ leave the `changefinder.conf` file alone to begin with. 
Then you can return to i a bit more once the collector is running for a while and you have a feeling for its performance on your node._ Edit the `python.d/changefinder.conf` configuration file using `edit-config` from the your -agent's [config directory](/docs/configure/nodes.md), which is usually at `/etc/netdata`. +agent's [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is usually at `/etc/netdata`. ```bash cd /etc/netdata # Replace this path with your Netdata config directory, if different diff --git a/collectors/python.d.plugin/dovecot/README.md b/collectors/python.d.plugin/dovecot/README.md index 3a0ebb3526..358f1ba812 100644 --- a/collectors/python.d.plugin/dovecot/README.md +++ b/collectors/python.d.plugin/dovecot/README.md @@ -81,7 +81,7 @@ Module gives information with following charts: ## Configuration Edit the `python.d/dovecot.conf` configuration file using `edit-config` from the Netdata [config -directory](/docs/configure/nodes.md), which is typically at `/etc/netdata`. +directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is typically at `/etc/netdata`. ```bash cd /etc/netdata # Replace this path with your Netdata config directory, if different diff --git a/collectors/python.d.plugin/example/README.md b/collectors/python.d.plugin/example/README.md index f7284e7ccb..7e6d2b913e 100644 --- a/collectors/python.d.plugin/example/README.md +++ b/collectors/python.d.plugin/example/README.md @@ -13,6 +13,6 @@ You can add custom data collectors using Python. Netdata provides an [example python data collection module](https://github.com/netdata/netdata/tree/master/collectors/python.d.plugin/example). -If you want to write your own collector, read our [writing a new Python module](/collectors/python.d.plugin/README.md#how-to-write-a-new-module) tutorial. +If you want to write your own collector, read our [writing a new Python module](https://github.com/netdata/netdata/blob/master/collectors/python.d.plugin/README.md#how-to-write-a-new-module) tutorial. diff --git a/collectors/python.d.plugin/fail2ban/README.md b/collectors/python.d.plugin/fail2ban/README.md index 398c4b8983..6b2c6bba1b 100644 --- a/collectors/python.d.plugin/fail2ban/README.md +++ b/collectors/python.d.plugin/fail2ban/README.md @@ -61,7 +61,7 @@ To persist the changes after rotating the log file, add `create 640 root netdata ## Configuration Edit the `python.d/fail2ban.conf` configuration file using `edit-config` from the -Netdata [config directory](/docs/configure/nodes.md), which is typically at `/etc/netdata`. +Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is typically at `/etc/netdata`. ```bash cd /etc/netdata # Replace this path with your Netdata config directory, if different diff --git a/collectors/python.d.plugin/gearman/README.md b/collectors/python.d.plugin/gearman/README.md index a7f7e7e15f..9ac53cb8e2 100644 --- a/collectors/python.d.plugin/gearman/README.md +++ b/collectors/python.d.plugin/gearman/README.md @@ -30,7 +30,7 @@ It produces: ## Configuration Edit the `python.d/gearman.conf` configuration file using `edit-config` from the Netdata [config -directory](/docs/configure/nodes.md), which is typically at `/etc/netdata`. +directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is typically at `/etc/netdata`. 
```bash cd /etc/netdata # Replace this path with your Netdata config directory, if different diff --git a/collectors/python.d.plugin/go_expvar/README.md b/collectors/python.d.plugin/go_expvar/README.md index 685844bd08..ff786e7c41 100644 --- a/collectors/python.d.plugin/go_expvar/README.md +++ b/collectors/python.d.plugin/go_expvar/README.md @@ -212,8 +212,8 @@ See [this issue](https://github.com/netdata/netdata/pull/1902#issuecomment-28449 Please see these two links to the official Netdata documentation for more information about the values: -- [External plugins - charts](/collectors/plugins.d/README.md#chart) -- [Chart variables](/collectors/python.d.plugin/README.md#global-variables-order-and-chart) +- [External plugins - charts](https://github.com/netdata/netdata/blob/master/collectors/plugins.d/README.md#chart) +- [Chart variables](https://github.com/netdata/netdata/blob/master/collectors/python.d.plugin/README.md#global-variables-order-and-chart) **Line definitions** @@ -236,7 +236,7 @@ hidden: False ``` Please see the following link for more information about the options and their default values: -[External plugins - dimensions](/collectors/plugins.d/README.md#dimension) +[External plugins - dimensions](https://github.com/netdata/netdata/blob/master/collectors/plugins.d/README.md#dimension) Apart from top-level expvars, this plugin can also parse expvars stored in a multi-level map; All dicts in the resulting JSON document are then flattened to one level. @@ -258,7 +258,7 @@ the first defined key wins and all subsequent keys with the same name are ignore ## Enable the collector The `go_expvar` collector is disabled by default. To enable it, use `edit-config` from the Netdata [config -directory](/docs/configure/nodes.md), which is typically at `/etc/netdata`, to edit the `python.d.conf` file. +directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is typically at `/etc/netdata`, to edit the `python.d.conf` file. ```bash cd /etc/netdata # Replace this path with your Netdata config directory, if different @@ -271,7 +271,7 @@ restart netdata`, or the appropriate method for your system, to finish enabling ## Configuration Edit the `python.d/go_expvar.conf` configuration file using `edit-config` from the Netdata [config -directory](/docs/configure/nodes.md), which is typically at `/etc/netdata`. +directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is typically at `/etc/netdata`. ```bash cd /etc/netdata # Replace this path with your Netdata config directory, if different diff --git a/collectors/python.d.plugin/haproxy/README.md b/collectors/python.d.plugin/haproxy/README.md index d1737275de..1aa1a214a8 100644 --- a/collectors/python.d.plugin/haproxy/README.md +++ b/collectors/python.d.plugin/haproxy/README.md @@ -42,7 +42,7 @@ It produces: ## Configuration Edit the `python.d/haproxy.conf` configuration file using `edit-config` from the Netdata [config -directory](/docs/configure/nodes.md), which is typically at `/etc/netdata`. +directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is typically at `/etc/netdata`. 
```bash cd /etc/netdata # Replace this path with your Netdata config directory, if different diff --git a/collectors/python.d.plugin/hddtemp/README.md b/collectors/python.d.plugin/hddtemp/README.md index 7c298eec0c..6a253b5bfc 100644 --- a/collectors/python.d.plugin/hddtemp/README.md +++ b/collectors/python.d.plugin/hddtemp/README.md @@ -19,7 +19,7 @@ It produces one chart **Temperature** with dynamic number of dimensions (one per ## Configuration Edit the `python.d/hddtemp.conf` configuration file using `edit-config` from the Netdata [config -directory](/docs/configure/nodes.md), which is typically at `/etc/netdata`. +directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is typically at `/etc/netdata`. ```bash cd /etc/netdata # Replace this path with your Netdata config directory, if different diff --git a/collectors/python.d.plugin/hpssa/README.md b/collectors/python.d.plugin/hpssa/README.md index 3c9b6e6e70..72dc780324 100644 --- a/collectors/python.d.plugin/hpssa/README.md +++ b/collectors/python.d.plugin/hpssa/README.md @@ -54,7 +54,7 @@ systemctl restart netdata.service ## Enable the collector The `hpssa` collector is disabled by default. To enable it, use `edit-config` from the -Netdata [config directory](/docs/configure/nodes.md), which is typically at `/etc/netdata`, to edit the `python.d.conf` +Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is typically at `/etc/netdata`, to edit the `python.d.conf` file. ```bash @@ -63,12 +63,12 @@ sudo ./edit-config python.d.conf ``` Change the value of the `hpssa` setting to `yes`. Save the file and restart the Netdata Agent with `sudo systemctl -restart netdata`, or the [appropriate method](/docs/configure/start-stop-restart.md) for your system. +restart netdata`, or the [appropriate method](https://github.com/netdata/netdata/blob/master/docs/configure/start-stop-restart.md) for your system. ## Configuration Edit the `python.d/hpssa.conf` configuration file using `edit-config` from the -Netdata [config directory](/docs/configure/nodes.md), which is typically at `/etc/netdata`. +Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is typically at `/etc/netdata`. ```bash cd /etc/netdata # Replace this path with your Netdata config directory, if different @@ -82,5 +82,5 @@ ssacli_path: /usr/sbin/ssacli ``` Save the file and restart the Netdata Agent with `sudo systemctl restart netdata`, or the [appropriate -method](/docs/configure/start-stop-restart.md) for your system. +method](https://github.com/netdata/netdata/blob/master/docs/configure/start-stop-restart.md) for your system. diff --git a/collectors/python.d.plugin/icecast/README.md b/collectors/python.d.plugin/icecast/README.md index 6cd2cc56e0..6fca34ba6e 100644 --- a/collectors/python.d.plugin/icecast/README.md +++ b/collectors/python.d.plugin/icecast/README.md @@ -24,7 +24,7 @@ It produces the following charts: ## Configuration Edit the `python.d/icecast.conf` configuration file using `edit-config` from the Netdata [config -directory](/docs/configure/nodes.md), which is typically at `/etc/netdata`. +directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is typically at `/etc/netdata`. 
```bash cd /etc/netdata # Replace this path with your Netdata config directory, if different diff --git a/collectors/python.d.plugin/ipfs/README.md b/collectors/python.d.plugin/ipfs/README.md index 51269bf590..8f5e53b100 100644 --- a/collectors/python.d.plugin/ipfs/README.md +++ b/collectors/python.d.plugin/ipfs/README.md @@ -23,7 +23,7 @@ It produces the following charts: ## Configuration Edit the `python.d/ipfs.conf` configuration file using `edit-config` from the Netdata [config -directory](/docs/configure/nodes.md), which is typically at `/etc/netdata`. +directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is typically at `/etc/netdata`. ```bash cd /etc/netdata # Replace this path with your Netdata config directory, if different diff --git a/collectors/python.d.plugin/litespeed/README.md b/collectors/python.d.plugin/litespeed/README.md index c8dbbfb3ef..b9bad4635e 100644 --- a/collectors/python.d.plugin/litespeed/README.md +++ b/collectors/python.d.plugin/litespeed/README.md @@ -56,7 +56,7 @@ It produces: ## Configuration Edit the `python.d/litespeed.conf` configuration file using `edit-config` from the Netdata [config -directory](/docs/configure/nodes.md), which is typically at `/etc/netdata`. +directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is typically at `/etc/netdata`. ```bash cd /etc/netdata # Replace this path with your Netdata config directory, if different diff --git a/collectors/python.d.plugin/megacli/README.md b/collectors/python.d.plugin/megacli/README.md index ecfd16b3ed..3900de3817 100644 --- a/collectors/python.d.plugin/megacli/README.md +++ b/collectors/python.d.plugin/megacli/README.md @@ -56,7 +56,7 @@ systemctl restart netdata.service ## Enable the collector The `megacli` collector is disabled by default. To enable it, use `edit-config` from the -Netdata [config directory](/docs/configure/nodes.md), which is typically at `/etc/netdata`, to edit the `python.d.conf` +Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is typically at `/etc/netdata`, to edit the `python.d.conf` file. ```bash @@ -70,7 +70,7 @@ with `sudo systemctl restart netdata`, or the appropriate method for your system ## Configuration Edit the `python.d/megacli.conf` configuration file using `edit-config` from the -Netdata [config directory](/docs/configure/nodes.md), which is typically at `/etc/netdata`. +Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is typically at `/etc/netdata`. ```bash cd /etc/netdata # Replace this path with your Netdata config directory, if different @@ -84,6 +84,6 @@ do_battery: yes ``` Save the file and restart the Netdata Agent with `sudo systemctl restart netdata`, or the [appropriate -method](/docs/configure/start-stop-restart.md) for your system. +method](https://github.com/netdata/netdata/blob/master/docs/configure/start-stop-restart.md) for your system. diff --git a/collectors/python.d.plugin/memcached/README.md b/collectors/python.d.plugin/memcached/README.md index b030b30b27..4158ab19c4 100644 --- a/collectors/python.d.plugin/memcached/README.md +++ b/collectors/python.d.plugin/memcached/README.md @@ -79,7 +79,7 @@ Collects memory-caching system performance metrics. 
It reads server response to ## Configuration Edit the `python.d/memcached.conf` configuration file using `edit-config` from the Netdata [config -directory](/docs/configure/nodes.md), which is typically at `/etc/netdata`. +directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is typically at `/etc/netdata`. ```bash cd /etc/netdata # Replace this path with your Netdata config directory, if different diff --git a/collectors/python.d.plugin/monit/README.md b/collectors/python.d.plugin/monit/README.md index 09747eb2b6..816143ebf2 100644 --- a/collectors/python.d.plugin/monit/README.md +++ b/collectors/python.d.plugin/monit/README.md @@ -9,29 +9,32 @@ learn_rel_path: "References/Collectors references/Storage" # Monit monitoring with Netdata -Monit monitoring module. Data is grabbed from stats XML interface (exists for a long time, but not mentioned in official documentation). Mostly this plugin shows statuses of monit targets, i.e. [statuses of specified checks](https://mmonit.com/monit/documentation/monit.html#Service-checks). +Monit monitoring module. Data is grabbed from stats XML interface (exists for a long time, but not mentioned in official +documentation). Mostly this plugin shows statuses of monit targets, i.e. +[statuses of specified checks](https://mmonit.com/monit/documentation/monit.html#Service-checks). -1. **Filesystems** +1. **Filesystems** - - Filesystems - - Directories - - Files - - Pipes + - Filesystems + - Directories + - Files + - Pipes -2. **Applications** +2. **Applications** - - Processes (+threads/childs) - - Programs + - Processes (+threads/childs) + - Programs -3. **Network** +3. **Network** - - Hosts (+latency) - - Network interfaces + - Hosts (+latency) + - Network interfaces ## Configuration -Edit the `python.d/monit.conf` configuration file using `edit-config` from the Netdata [config -directory](/docs/configure/nodes.md), which is typically at `/etc/netdata`. +Edit the `python.d/monit.conf` configuration file using `edit-config` from the +Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is typically +at `/etc/netdata`. ```bash cd /etc/netdata # Replace this path with your Netdata config directory, if different @@ -42,10 +45,10 @@ Sample: ```yaml local: - name : 'local' - url : 'http://localhost:2812' - user: : admin - pass: : monit + name: 'local' + url: 'http://localhost:2812' + user: : admin + pass: : monit ``` If no configuration is given, module will attempt to connect to monit as `http://localhost:2812`. diff --git a/collectors/python.d.plugin/nvidia_smi/README.md b/collectors/python.d.plugin/nvidia_smi/README.md index 77767f51b7..ce5473c260 100644 --- a/collectors/python.d.plugin/nvidia_smi/README.md +++ b/collectors/python.d.plugin/nvidia_smi/README.md @@ -11,7 +11,7 @@ learn_rel_path: "References/Collectors references/Devices" Monitors performance metrics (memory usage, fan speed, pcie bandwidth utilization, temperature, etc.) using `nvidia-smi` cli tool. -> **Warning**: this collector does not work when the Netdata Agent is [running in a container](https://learn.netdata.cloud/docs/agent/packaging/docker). +> **Warning**: this collector does not work when the Netdata Agent is [running in a container](https://github.com/netdata/netdata/blob/master/packaging/docker/README.md). 
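Before enabling the collector, it is worth checking that `nvidia-smi` itself responds on the host, since the module parses that tool's output. The commands below are a generic sanity check rather than the collector's exact invocation:

```bash
# both commands should print details for every detected GPU
nvidia-smi
nvidia-smi -q -x | head -n 20   # XML query output
```

If these fail, fix the NVIDIA driver installation first; the collector cannot report anything without a working `nvidia-smi`.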
## Requirements and Notes @@ -51,7 +51,7 @@ It produces the following charts: ## Configuration Edit the `python.d/nvidia_smi.conf` configuration file using `edit-config` from the Netdata [config -directory](/docs/configure/nodes.md), which is typically at `/etc/netdata`. +directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is typically at `/etc/netdata`. ```bash cd /etc/netdata # Replace this path with your Netdata config directory, if different diff --git a/collectors/python.d.plugin/openldap/README.md b/collectors/python.d.plugin/openldap/README.md index 64857324b0..4f29bbb49e 100644 --- a/collectors/python.d.plugin/openldap/README.md +++ b/collectors/python.d.plugin/openldap/README.md @@ -59,7 +59,7 @@ Statistics are taken from LDAP monitoring interface. Manual page, slapd-monitor( ## Configuration Edit the `python.d/openldap.conf` configuration file using `edit-config` from the Netdata [config -directory](/docs/configure/nodes.md), which is typically at `/etc/netdata`. +directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is typically at `/etc/netdata`. ```bash cd /etc/netdata # Replace this path with your Netdata config directory, if different diff --git a/collectors/python.d.plugin/oracledb/README.md b/collectors/python.d.plugin/oracledb/README.md index 80fd9572d5..78f807d618 100644 --- a/collectors/python.d.plugin/oracledb/README.md +++ b/collectors/python.d.plugin/oracledb/README.md @@ -74,7 +74,7 @@ GRANT SELECT_CATALOG_ROLE TO netdata; ## Configuration Edit the `python.d/oracledb.conf` configuration file using `edit-config` from the Netdata [config -directory](/docs/configure/nodes.md), which is typically at `/etc/netdata`. +directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is typically at `/etc/netdata`. ```bash cd /etc/netdata # Replace this path with your Netdata config directory, if different diff --git a/collectors/python.d.plugin/puppet/README.md b/collectors/python.d.plugin/puppet/README.md index 2b41b29974..8b98b8a2de 100644 --- a/collectors/python.d.plugin/puppet/README.md +++ b/collectors/python.d.plugin/puppet/README.md @@ -36,7 +36,7 @@ Following charts are drawn: ## Configuration Edit the `python.d/puppet.conf` configuration file using `edit-config` from the Netdata [config -directory](/docs/configure/nodes.md), which is typically at `/etc/netdata`. +directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is typically at `/etc/netdata`. ```bash cd /etc/netdata # Replace this path with your Netdata config directory, if different diff --git a/collectors/python.d.plugin/rabbitmq/README.md b/collectors/python.d.plugin/rabbitmq/README.md index 5fe7f817aa..19df65694d 100644 --- a/collectors/python.d.plugin/rabbitmq/README.md +++ b/collectors/python.d.plugin/rabbitmq/README.md @@ -96,7 +96,7 @@ Per Vhost charts: ## Configuration Edit the `python.d/rabbitmq.conf` configuration file using `edit-config` from the Netdata [config -directory](/docs/configure/nodes.md), which is typically at `/etc/netdata`. +directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is typically at `/etc/netdata`. 
```bash cd /etc/netdata # Replace this path with your Netdata config directory, if different diff --git a/collectors/python.d.plugin/rethinkdbs/README.md b/collectors/python.d.plugin/rethinkdbs/README.md index 5752760300..578c1c0b16 100644 --- a/collectors/python.d.plugin/rethinkdbs/README.md +++ b/collectors/python.d.plugin/rethinkdbs/README.md @@ -13,27 +13,28 @@ Collects database server and cluster statistics. Following charts are drawn: -1. **Connected Servers** +1. **Connected Servers** - - connected - - missing + - connected + - missing -2. **Active Clients** +2. **Active Clients** - - active + - active -3. **Queries** per second +3. **Queries** per second - - queries + - queries -4. **Documents** per second +4. **Documents** per second - - documents + - documents ## Configuration -Edit the `python.d/rethinkdbs.conf` configuration file using `edit-config` from the Netdata [config -directory](/docs/configure/nodes.md), which is typically at `/etc/netdata`. +Edit the `python.d/rethinkdbs.conf` configuration file using `edit-config` from the +Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is typically +at `/etc/netdata`. ```bash cd /etc/netdata # Replace this path with your Netdata config directory, if different @@ -42,11 +43,11 @@ sudo ./edit-config python.d/rethinkdbs.conf ```yaml localhost: - name : 'local' - host : '127.0.0.1' - port : 28015 - user : "user" - password : "pass" + name: 'local' + host: '127.0.0.1' + port: 28015 + user: "user" + password: "pass" ``` When no configuration file is found, module tries to connect to `127.0.0.1:28015`. diff --git a/collectors/python.d.plugin/retroshare/README.md b/collectors/python.d.plugin/retroshare/README.md index c66a5a416c..142b7d5bf9 100644 --- a/collectors/python.d.plugin/retroshare/README.md +++ b/collectors/python.d.plugin/retroshare/README.md @@ -25,7 +25,7 @@ This module produces the following charts: ## Configuration Edit the `python.d/retroshare.conf` configuration file using `edit-config` from the Netdata [config -directory](/docs/configure/nodes.md), which is typically at `/etc/netdata`. +directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is typically at `/etc/netdata`. ```bash cd /etc/netdata # Replace this path with your Netdata config directory, if different diff --git a/collectors/python.d.plugin/riakkv/README.md b/collectors/python.d.plugin/riakkv/README.md index 385316ee25..5e533a419a 100644 --- a/collectors/python.d.plugin/riakkv/README.md +++ b/collectors/python.d.plugin/riakkv/README.md @@ -106,7 +106,7 @@ listed ## Configuration Edit the `python.d/riakkv.conf` configuration file using `edit-config` from the Netdata [config -directory](/docs/configure/nodes.md), which is typically at `/etc/netdata`. +directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is typically at `/etc/netdata`. ```bash cd /etc/netdata # Replace this path with your Netdata config directory, if different diff --git a/collectors/python.d.plugin/samba/README.md b/collectors/python.d.plugin/samba/README.md index 9ccaacaf02..41ae1c5ba7 100644 --- a/collectors/python.d.plugin/samba/README.md +++ b/collectors/python.d.plugin/samba/README.md @@ -98,7 +98,7 @@ systemctl restart netdata.service ## Enable the collector The `samba` collector is disabled by default. 
To enable it, use `edit-config` from the -Netdata [config directory](/docs/configure/nodes.md), which is typically at `/etc/netdata`, to edit the `python.d.conf` +Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is typically at `/etc/netdata`, to edit the `python.d.conf` file. ```bash @@ -107,12 +107,12 @@ sudo ./edit-config python.d.conf ``` Change the value of the `samba` setting to `yes`. Save the file and restart the Netdata Agent with `sudo systemctl -restart netdata`, or the [appropriate method](/docs/configure/start-stop-restart.md) for your system. +restart netdata`, or the [appropriate method](https://github.com/netdata/netdata/blob/master/docs/configure/start-stop-restart.md) for your system. ## Configuration Edit the `python.d/samba.conf` configuration file using `edit-config` from the -Netdata [config directory](/docs/configure/nodes.md), which is typically at `/etc/netdata`. +Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is typically at `/etc/netdata`. ```bash cd /etc/netdata # Replace this path with your Netdata config directory, if different diff --git a/collectors/python.d.plugin/sensors/README.md b/collectors/python.d.plugin/sensors/README.md index 1f6ca69ca4..f5f4358543 100644 --- a/collectors/python.d.plugin/sensors/README.md +++ b/collectors/python.d.plugin/sensors/README.md @@ -16,7 +16,7 @@ Charts are created dynamically. ## Configuration Edit the `python.d/sensors.conf` configuration file using `edit-config` from the Netdata [config -directory](/docs/configure/nodes.md), which is typically at `/etc/netdata`. +directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is typically at `/etc/netdata`. ```bash cd /etc/netdata # Replace this path with your Netdata config directory, if different @@ -29,7 +29,7 @@ There have been reports from users that on certain servers, ACPI ring buffer err We are tracking such cases in issue [#827](https://github.com/netdata/netdata/issues/827). Please join this discussion for help. -When `lm-sensors` doesn't work on your device (e.g. for RPi temperatures), use [the legacy bash collector](https://learn.netdata.cloud/docs/agent/collectors/charts.d.plugin/sensors) +When `lm-sensors` doesn't work on your device (e.g. for RPi temperatures), use [the legacy bash collector](https://github.com/netdata/netdata/blob/master/collectors/charts.d.plugin/sensors/README.md) --- diff --git a/collectors/python.d.plugin/smartd_log/README.md b/collectors/python.d.plugin/smartd_log/README.md index 03001ea804..7c1e845f8a 100644 --- a/collectors/python.d.plugin/smartd_log/README.md +++ b/collectors/python.d.plugin/smartd_log/README.md @@ -109,7 +109,7 @@ Otherwise, all the smartd `.csv` files may get written to `/var/lib/smartmontool ## Configuration Edit the `python.d/smartd_log.conf` configuration file using `edit-config` from the Netdata [config -directory](/docs/configure/nodes.md), which is typically at `/etc/netdata`. +directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is typically at `/etc/netdata`. 
```bash cd /etc/netdata # Replace this path with your Netdata config directory, if different diff --git a/collectors/python.d.plugin/spigotmc/README.md b/collectors/python.d.plugin/spigotmc/README.md index 65448d6193..6d8e4b62ba 100644 --- a/collectors/python.d.plugin/spigotmc/README.md +++ b/collectors/python.d.plugin/spigotmc/README.md @@ -21,7 +21,7 @@ the data returned by the `tps` or `list` console commands. ## Configuration Edit the `python.d/spigotmc.conf` configuration file using `edit-config` from the Netdata [config -directory](/docs/configure/nodes.md), which is typically at `/etc/netdata`. +directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is typically at `/etc/netdata`. ```bash cd /etc/netdata # Replace this path with your Netdata config directory, if different diff --git a/collectors/python.d.plugin/squid/README.md b/collectors/python.d.plugin/squid/README.md index 362ea89c09..ac6c83714e 100644 --- a/collectors/python.d.plugin/squid/README.md +++ b/collectors/python.d.plugin/squid/README.md @@ -38,7 +38,7 @@ It produces following charts: ## Configuration Edit the `python.d/squid.conf` configuration file using `edit-config` from the Netdata [config -directory](/docs/configure/nodes.md), which is typically at `/etc/netdata`. +directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is typically at `/etc/netdata`. ```bash cd /etc/netdata # Replace this path with your Netdata config directory, if different diff --git a/collectors/python.d.plugin/tomcat/README.md b/collectors/python.d.plugin/tomcat/README.md index dc4662bcf7..66ed6d97a3 100644 --- a/collectors/python.d.plugin/tomcat/README.md +++ b/collectors/python.d.plugin/tomcat/README.md @@ -33,7 +33,7 @@ Charts: ## Configuration Edit the `python.d/tomcat.conf` configuration file using `edit-config` from the Netdata [config -directory](/docs/configure/nodes.md), which is typically at `/etc/netdata`. +directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is typically at `/etc/netdata`. ```bash cd /etc/netdata # Replace this path with your Netdata config directory, if different diff --git a/collectors/python.d.plugin/tor/README.md b/collectors/python.d.plugin/tor/README.md index 86444df7ae..c66803766a 100644 --- a/collectors/python.d.plugin/tor/README.md +++ b/collectors/python.d.plugin/tor/README.md @@ -26,7 +26,7 @@ It produces only one chart: ## Configuration Edit the `python.d/tor.conf` configuration file using `edit-config` from the Netdata [config -directory](/docs/configure/nodes.md), which is typically at `/etc/netdata`. +directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is typically at `/etc/netdata`. ```bash cd /etc/netdata # Replace this path with your Netdata config directory, if different diff --git a/collectors/python.d.plugin/traefik/README.md b/collectors/python.d.plugin/traefik/README.md index b95fde5ac6..cf30a82a40 100644 --- a/collectors/python.d.plugin/traefik/README.md +++ b/collectors/python.d.plugin/traefik/README.md @@ -13,45 +13,46 @@ Uses the `health` API to provide statistics. It produces: -1. **Responses** by statuses +1. **Responses** by statuses - - success (1xx, 2xx, 304) - - error (5xx) - - redirect (3xx except 304) - - bad (4xx) - - other (all other responses) + - success (1xx, 2xx, 304) + - error (5xx) + - redirect (3xx except 304) + - bad (4xx) + - other (all other responses) -2. **Responses** by codes +2. 
**Responses** by codes - - 2xx (successful) - - 5xx (internal server errors) - - 3xx (redirect) - - 4xx (bad) - - 1xx (informational) - - other (non-standart responses) + - 2xx (successful) + - 5xx (internal server errors) + - 3xx (redirect) + - 4xx (bad) + - 1xx (informational) + - other (non-standart responses) -3. **Detailed Response Codes** requests/s (number of responses for each response code family individually) +3. **Detailed Response Codes** requests/s (number of responses for each response code family individually) -4. **Requests**/s +4. **Requests**/s - - request statistics + - request statistics -5. **Total response time** +5. **Total response time** - - sum of all response time + - sum of all response time -6. **Average response time** +6. **Average response time** -7. **Average response time per iteration** +7. **Average response time per iteration** -8. **Uptime** +8. **Uptime** - - Traefik server uptime + - Traefik server uptime ## Configuration -Edit the `python.d/traefik.conf` configuration file using `edit-config` from the Netdata [config -directory](/docs/configure/nodes.md), which is typically at `/etc/netdata`. +Edit the `python.d/traefik.conf` configuration file using `edit-config` from the +Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is typically +at `/etc/netdata`. ```bash cd /etc/netdata # Replace this path with your Netdata config directory, if different @@ -63,11 +64,11 @@ Needs only `url` to server's `health` Here is an example for local server: ```yaml -update_every : 1 -priority : 60000 +update_every: 1 +priority: 60000 local: - url : 'http://localhost:8080/health' + url: 'http://localhost:8080/health' ``` Without configuration, module attempts to connect to `http://localhost:8080/health`. diff --git a/collectors/python.d.plugin/uwsgi/README.md b/collectors/python.d.plugin/uwsgi/README.md index 98b251c137..dcc2dc38e5 100644 --- a/collectors/python.d.plugin/uwsgi/README.md +++ b/collectors/python.d.plugin/uwsgi/README.md @@ -32,7 +32,7 @@ Following charts are drawn: ## Configuration Edit the `python.d/uwsgi.conf` configuration file using `edit-config` from the Netdata [config -directory](/docs/configure/nodes.md), which is typically at `/etc/netdata`. +directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is typically at `/etc/netdata`. ```bash cd /etc/netdata # Replace this path with your Netdata config directory, if different diff --git a/collectors/python.d.plugin/varnish/README.md b/collectors/python.d.plugin/varnish/README.md index ba39ccd39a..ebcc00c51c 100644 --- a/collectors/python.d.plugin/varnish/README.md +++ b/collectors/python.d.plugin/varnish/README.md @@ -48,7 +48,7 @@ For every storage (SMF, SMA, or MSE): ## Configuration Edit the `python.d/varnish.conf` configuration file using `edit-config` from the Netdata [config -directory](/docs/configure/nodes.md), which is typically at `/etc/netdata`. +directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is typically at `/etc/netdata`. ```bash cd /etc/netdata # Replace this path with your Netdata config directory, if different diff --git a/collectors/python.d.plugin/w1sensor/README.md b/collectors/python.d.plugin/w1sensor/README.md index 4724f7ecf3..12a14a19ab 100644 --- a/collectors/python.d.plugin/w1sensor/README.md +++ b/collectors/python.d.plugin/w1sensor/README.md @@ -19,7 +19,7 @@ Charts are created dynamically based on the number of detected sensors. 
## Configuration Edit the `python.d/w1sensor.conf` configuration file using `edit-config` from the Netdata [config -directory](/docs/configure/nodes.md), which is typically at `/etc/netdata`. +directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is typically at `/etc/netdata`. ```bash cd /etc/netdata # Replace this path with your Netdata config directory, if different diff --git a/collectors/python.d.plugin/zscores/README.md b/collectors/python.d.plugin/zscores/README.md index df644b4ff7..d89aa6a0f4 100644 --- a/collectors/python.d.plugin/zscores/README.md +++ b/collectors/python.d.plugin/zscores/README.md @@ -12,7 +12,7 @@ learn_rel_path: "References/Collectors references/Uncategorized" Smoothed, rolling [Z-Scores](https://en.wikipedia.org/wiki/Standard_score) for selected metrics or charts. -This collector uses the [Netdata rest api](https://learn.netdata.cloud/docs/agent/web/api) to get the `mean` and `stddev` +This collector uses the [Netdata rest api](https://github.com/netdata/netdata/blob/master/web/api/README.md) to get the `mean` and `stddev` for each dimension on specified charts over a time range (defined by `train_secs` and `offset_secs`). For each dimension it will calculate a Z-Score as `z = (x - mean) / stddev` (clipped at `z_clip`). Scores are then smoothed over time (`z_smooth_n`) and, if `mode: 'per_chart'`, aggregated across dimensions to a smoothed, rolling chart level Z-Score diff --git a/collectors/statsd.plugin/README.md b/collectors/statsd.plugin/README.md index aedcb90987..d65476ff4e 100644 --- a/collectors/statsd.plugin/README.md +++ b/collectors/statsd.plugin/README.md @@ -29,11 +29,11 @@ On synthetic charts, we can have alarms as with any metric and chart. - [K6 load testing tool](https://k6.io) - **Description:** k6 is a developer-centric, free and open-source load testing tool built for making performance testing a productive and enjoyable experience. - - [Documentation](/collectors/statsd.plugin/k6.md) + - [Documentation](https://github.com/netdata/netdata/blob/master/collectors/statsd.plugin/k6.md) - [Configuration](https://github.com/netdata/netdata/blob/master/collectors/statsd.plugin/k6.conf) - [Asterisk](https://www.asterisk.org/) - **Description:** Asterisk is an Open Source PBX and telephony toolkit. - - [Documentation](/collectors/statsd.plugin/asterisk.md) + - [Documentation](https://github.com/netdata/netdata/blob/master/collectors/statsd.plugin/asterisk.md) - [Configuration](https://github.com/netdata/netdata/blob/master/collectors/statsd.plugin/asterisk.conf) ## Metrics supported by Netdata @@ -206,7 +206,7 @@ Netdata can visualize StatsD collected metrics in 2 ways: ### Private metric charts -Private charts are controlled with `create private charts for metrics matching = *`. This setting accepts a space-separated list of [simple patterns](/libnetdata/simple_pattern/README.md). Netdata will create private charts for all metrics **by default**. +Private charts are controlled with `create private charts for metrics matching = *`. This setting accepts a space-separated list of [simple patterns](https://github.com/netdata/netdata/blob/master/libnetdata/simple_pattern/README.md). Netdata will create private charts for all metrics **by default**. 
For example, to render charts for all `myapp.*` metrics, except `myapp.*.badmetric`, use: @@ -214,7 +214,7 @@ For example, to render charts for all `myapp.*` metrics, except `myapp.*.badmetr create private charts for metrics matching = !myapp.*.badmetric myapp.* ``` -You can specify Netdata StatsD to have a different `memory mode` than the rest of the Netdata Agent. You can read more about `memory mode` in the [documentation](/database/README.md). +You can specify Netdata StatsD to have a different `memory mode` than the rest of the Netdata Agent. You can read more about `memory mode` in the [documentation](https://github.com/netdata/netdata/blob/master/database/README.md). The default behavior is to use the same settings as the rest of the Netdata Agent. If you wish to change them, edit the following settings: - `private charts memory mode` @@ -293,7 +293,7 @@ Synthetic charts are organized in - **charts for each application** aka family in Netdata Dashboard. - **StatsD metrics for each chart** /aka charts and context Netdata Dashboard. -> You can read more about how the Netdata Agent organizes information in the relevant [documentation](/web/README.md) +> You can read more about how the Netdata Agent organizes information in the relevant [documentation](https://github.com/netdata/netdata/blob/master/web/README.md) For each application you need to create a `.conf` file in `/etc/netdata/statsd.d`. @@ -330,7 +330,7 @@ Using the above configuration `myapp` should get its own section on the dashboar `[app]` starts a new application definition. The supported settings in this section are: - `name` defines the name of the app. -- `metrics` is a Netdata [simple pattern](/libnetdata/simple_pattern/README.md). This pattern should match all the possible StatsD metrics that will be participating in the application `myapp`. +- `metrics` is a Netdata [simple pattern](https://github.com/netdata/netdata/blob/master/libnetdata/simple_pattern/README.md). This pattern should match all the possible StatsD metrics that will be participating in the application `myapp`. - `private charts = yes|no`, enables or disables private charts for the metrics matched. - `gaps when not collected = yes|no`, enables or disables gaps on the charts of the application in case that no metrics are collected. - `memory mode` sets the memory mode for all charts of the application. The default is the global default for Netdata (not the global default for StatsD private charts). We suggest not to use this (we have commented it out in the example) and let your app use the global default for Netdata, which is our dbengine. @@ -356,7 +356,7 @@ So, the format is this: dimension = [pattern] METRIC NAME TYPE MULTIPLIER DIVIDER OPTIONS ``` -`pattern` is a keyword. When set, `METRIC` is expected to be a Netdata [simple pattern](/libnetdata/simple_pattern/README.md) that will be used to match all the StatsD metrics to be added to the chart. So, `pattern` automatically matches any number of StatsD metrics, all of which will be added as separate chart dimensions. +`pattern` is a keyword. When set, `METRIC` is expected to be a Netdata [simple pattern](https://github.com/netdata/netdata/blob/master/libnetdata/simple_pattern/README.md) that will be used to match all the StatsD metrics to be added to the chart. So, `pattern` automatically matches any number of StatsD metrics, all of which will be added as separate chart dimensions. `TYPE`, `MULTIPLIER`, `DIVIDER` and `OPTIONS` are optional. 
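As a rough sketch of how the `[app]` section and the `dimension` lines fit together, a hypothetical `/etc/netdata/statsd.d/myapp.conf` defining one synthetic chart might look like the following (the `myapp` name, metric names, and chart settings are illustrative only, not taken from the changes above):

```conf
# Hypothetical application definition for StatsD synthetic charts.
[app]
    name = myapp
    # simple pattern matching every StatsD metric that belongs to this app
    metrics = myapp.*
    private charts = no
    gaps when not collected = no

# One synthetic chart for this app.
[requests]
    name = requests
    title = myapp requests
    family = requests
    context = myapp.requests
    units = requests/s
    priority = 91000
    type = line
    # dimension = [pattern] METRIC NAME TYPE MULTIPLIER DIVIDER OPTIONS
    dimension = myapp.requests.ok ok
    dimension = myapp.requests.failed failed
    dimension = pattern 'myapp.requests.error.*' '' last 1 1
```

With a file like this in `/etc/netdata/statsd.d/`, the matched metrics would appear under a dedicated `myapp` section of the dashboard, with private charts suppressed for them.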
diff --git a/collectors/tc.plugin/README.md b/collectors/tc.plugin/README.md index fa9be93db3..bf8655a436 100644 --- a/collectors/tc.plugin/README.md +++ b/collectors/tc.plugin/README.md @@ -71,7 +71,7 @@ QoS is about 2 features: When your system is under a DDoS attack, it will get a lot more bandwidth compared to the one it can handle and probably your applications will crash. Setting a limit on the inbound traffic using QoS, will protect your servers (throttle the requests) and depending on the size of the attack may allow your legitimate users to access the server, while the attack is taking place. - Using QoS together with a [SYNPROXY](/collectors/proc.plugin/README.md) will provide a great degree of protection against most DDoS attacks. Actually when I wrote that article, a few folks tried to DDoS the Netdata demo site to see in real-time the SYNPROXY operation. They did not do it right, but anyway a great deal of requests reached the Netdata server. What saved Netdata was QoS. The Netdata demo server has QoS installed, so the requests were throttled and the server did not even reach the point of resource starvation. Read about it [here](/collectors/proc.plugin/README.md). + Using QoS together with a [SYNPROXY](https://github.com/netdata/netdata/blob/master/collectors/proc.plugin/README.md) will provide a great degree of protection against most DDoS attacks. Actually when I wrote that article, a few folks tried to DDoS the Netdata demo site to see in real-time the SYNPROXY operation. They did not do it right, but anyway a great deal of requests reached the Netdata server. What saved Netdata was QoS. The Netdata demo server has QoS installed, so the requests were throttled and the server did not even reach the point of resource starvation. Read about it [here](https://github.com/netdata/netdata/blob/master/collectors/proc.plugin/README.md). On top of all these, QoS is extremely light. You will configure it once, and this is it. It will not bother you again and it will not use any noticeable CPU resources, especially on application and database servers. diff --git a/collectors/timex.plugin/README.md b/collectors/timex.plugin/README.md index e2a48e49ee..ba20207520 100644 --- a/collectors/timex.plugin/README.md +++ b/collectors/timex.plugin/README.md @@ -23,7 +23,7 @@ An unsynchronized clock may indicate a hardware clock error, or an issue with UT ## Configuration -Edit the `netdata.conf` configuration file using [`edit-config`](/docs/configure/nodes.md#use-edit-config-to-edit-configuration-files) from the [Netdata config directory](/docs/configure/nodes.md#the-netdata-config-directory), which is typically at `/etc/netdata`. +Edit the `netdata.conf` configuration file using [`edit-config`](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#use-edit-config-to-edit-configuration-files) from the [Netdata config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory), which is typically at `/etc/netdata`. Scroll down to the `[plugin:timex]` section to find the available options: diff --git a/contribution-guidelines.md b/contribution-guidelines.md index 7851f5f8ed..f659298035 100644 --- a/contribution-guidelines.md +++ b/contribution-guidelines.md @@ -2,40 +2,52 @@ Welcome to our docs developer guidelines! -We store documentation related to Netdata inside of the [`netdata/netdata` repository](https://github.com/netdata/netdata) on GitHub. 
+We store documentation related to Netdata inside of +the [`netdata/netdata` repository](https://github.com/netdata/netdata) on GitHub. The Netdata team aggregates and publishes all documentation at [learn.netdata.cloud](/) using [Docusaurus](https://v2.docusaurus.io/) over at the [`netdata/learn` repository](https://github.com/netdata/learn). ## Before you get started -Anyone interested in contributing to documentation should first read the [Netdata style guide](#styling-guide) further down below and the [Netdata Community Code of Conduct](/contribute/code-of-conduct). +Anyone interested in contributing to documentation should first read the [Netdata style guide](#styling-guide) further +down below and the [Netdata Community Code of Conduct](https://github.com/netdata/.github/blob/main/CODE_OF_CONDUCT.md). -Netdata's documentation uses Markdown syntax. If you're not familiar with Markdown, read the [Mastering Markdown](https://guides.github.com/features/mastering-markdown/) guide from GitHub for the basics on creating paragraphs, styled text, lists, tables, and more, and read further down about some special occasions [while writing in MDX](#mdx-and-markdown). +Netdata's documentation uses Markdown syntax. If you're not familiar with Markdown, read +the [Mastering Markdown](https://guides.github.com/features/mastering-markdown/) guide from GitHub for the basics on +creating paragraphs, styled text, lists, tables, and more, and read further down about some special +occasions [while writing in MDX](#mdx-and-markdown). ### Netdata's Documentation structure Netdata's documentation is separated into 5 categories. -- **Getting Started**: This section’s purpose is to present “What is Netdata” and for whom is it for while also presenting all the ways Netdata can be deployed. That includes Netdata’s platform support, Standalone deployment, Parent-child deployments, deploying on Kubernetes and also deploying on IoT nodes. - - Stored in **WIP** - - Published in **WIP** -- **Concepts**: This section’s purpose is to take a pitch on all the aspects of Netdata. We present the functionality of each component/idea and support it with examples but we don’t go deep into technical details. - - Stored in the `/docs/concepts` directory in the `netdata/netdata` repository. - - Published in **WIP** -- **Tasks**: This section's purpose is to break down any operation into a series of fundamental tasks for the Netdata solution. - - Stored in the `/docs/tasks` directory in the `netdata/netdata` repository. - - Published in **WIP** -- **References**: This section’s purpose is to explain thoroughly every part of Netdata. That covers settings, configurations and so on. - - Stored near the component they refer to. - - Published in **WIP** -- **Collectors References**: This section’s purpose is to explain thoroughly every collector that Netdata supports and it's configuration options. - - Stored in stored near the collector they refer to. - - Published in **WIP** +- **Getting Started**: This section’s purpose is to present “What is Netdata” and for whom is it for while also + presenting all the ways Netdata can be deployed. That includes Netdata’s platform support, Standalone deployment, + Parent-child deployments, deploying on Kubernetes and also deploying on IoT nodes. + - Stored in **WIP** + - Published in **WIP** +- **Concepts**: This section’s purpose is to take a pitch on all the aspects of Netdata. 
We present the functionality of + each component/idea and support it with examples but we don’t go deep into technical details. + - Stored in the `/docs/concepts` directory in the `netdata/netdata` repository. + - Published in **WIP** +- **Tasks**: This section's purpose is to break down any operation into a series of fundamental tasks for the Netdata + solution. + - Stored in the `/docs/tasks` directory in the `netdata/netdata` repository. + - Published in **WIP** +- **References**: This section’s purpose is to explain thoroughly every part of Netdata. That covers settings, + configurations and so on. + - Stored near the component they refer to. + - Published in **WIP** +- **Collectors References**: This section’s purpose is to explain thoroughly every collector that Netdata supports and + it's configuration options. + - Stored in stored near the collector they refer to. + - Published in **WIP** ## How to contribute -The easiest way to contribute to Netdata's documentation is to edit a file directly on GitHub. This is perfect for small fixes to a single document, such as fixing a typo or clarifying a confusing sentence. +The easiest way to contribute to Netdata's documentation is to edit a file directly on GitHub. This is perfect for small +fixes to a single document, such as fixing a typo or clarifying a confusing sentence. Click on the **Edit this page** button on any published document on [Netdata Learn](https://learn.netdata.cloud). Each page has two of these buttons: One beneath the table of contents, and another at the end of the document, which take you @@ -49,28 +61,39 @@ Jump down to our instructions on [PRs](#making-a-pull-request) for your next ste ### Edit locally -Editing documentation locally is the preferred method for complex changes that span multiple documents or change the documentation's style or structure. +Editing documentation locally is the preferred method for complex changes that span multiple documents or change the +documentation's style or structure. -Create a fork of the Netdata Agent repository by visit the [Netdata repository](https://github.com/netdata/netdata) and clicking on the **Fork** button. +Create a fork of the Netdata Agent repository by visit the [Netdata repository](https://github.com/netdata/netdata) and +clicking on the **Fork** button. -GitHub will ask you where you want to clone the repository. When finished, you end up at the index of your forked Netdata Agent repository. Clone your fork to your local machine: +GitHub will ask you where you want to clone the repository. When finished, you end up at the index of your forked +Netdata Agent repository. Clone your fork to your local machine: ```bash git clone https://github.com/YOUR-GITHUB-USERNAME/netdata.git ``` -Create a new branch using `git checkout -b BRANCH-NAME`. Use your favorite text editor to make your changes, keeping the [Netdata style guide](/contribute/style-guide) in mind. Add, commit, and push changes to your fork. When you're finished, visit the [Netdata Agent Pull requests](https://github.com/netdata/netdata/pulls) to create a new pull request based on the changes you made in the new branch of your fork. +Create a new branch using `git checkout -b BRANCH-NAME`. Use your favorite text editor to make your changes, keeping +the [Netdata style guide](https://github.com/netdata/netdata/blob/master/docs/contributing/style-guide.md) in mind. Add, commit, and push changes to your fork. 
When you're +finished, visit the [Netdata Agent Pull requests](https://github.com/netdata/netdata/pulls) to create a new pull request +based on the changes you made in the new branch of your fork. ### Making a pull request -Pull requests (PRs) should be concise and informative. See our [PR guidelines](/contribute/handbook#pr-guidelines) for specifics. +Pull requests (PRs) should be concise and informative. See our [PR guidelines](/contribute/handbook#pr-guidelines) for +specifics. -- The title must follow the [imperative mood](https://en.wikipedia.org/wiki/Imperative_mood) and be no more than ~50 characters. -- The description should explain what was changed and why. Verify that you tested any code or processes that you are trying to change. +- The title must follow the [imperative mood](https://en.wikipedia.org/wiki/Imperative_mood) and be no more than ~50 + characters. +- The description should explain what was changed and why. Verify that you tested any code or processes that you are + trying to change. -The Netdata team will review your PR and assesses it for correctness, conciseness, and overall quality. We may point to specific sections and ask for additional information or other fixes. +The Netdata team will review your PR and assesses it for correctness, conciseness, and overall quality. We may point to +specific sections and ask for additional information or other fixes. -After merging your PR, the Netdata team rebuilds the [documentation site](https://learn.netdata.cloud) to publish the changed documentation. +After merging your PR, the Netdata team rebuilds the [documentation site](https://learn.netdata.cloud) to publish the +changed documentation. ## Writing Docs @@ -78,34 +101,43 @@ We have three main types of Docs: **References**, **Concepts** and **Tasks**. ### Metadata Tags - All of the Docs however have what we call "metadata" tags. these help to organize the document upon publishing. So let's go through the different necessary metadata tags to get a document properly published on Learn: - Docusaurus Specific:\ -These metadata tags are parsed automatically by Docusaurus and are rendered in the published document. **Note**: Netdata only uses the Docusaurus metadata tags releveant for our documentation infrastructure. - - `title: "The title of the document"` : Here we specify the title of our document, which is going to be converted to the heading of the published page. - - `description: "The description of the file"`: Here we give a description of what this file is about. - - `custom_edit_url: https://github.com/netdata/netdata/edit/master/collectors/COLLECTORS.md`: Here is an example of the link that the user will be redirected to if he clicks the "Edit this page button", as you see it leads directly to the edit page of the source file. + These metadata tags are parsed automatically by Docusaurus and are rendered in the published document. **Note**: + Netdata only uses the Docusaurus metadata tags releveant for our documentation infrastructure. + - `title: "The title of the document"` : Here we specify the title of our document, which is going to be converted + to the heading of the published page. + - `description: "The description of the file"`: Here we give a description of what this file is about. + - `custom_edit_url: https://github.com/netdata/netdata/edit/master/collectors/COLLECTORS.md`: Here is an example of + the link that the user will be redirected to if he clicks the "Edit this page button", as you see it leads + directly to the edit page of the source file. 
- Netdata Learn specific: - - `learn_status: "..."` - - The options for this tag are: - - `"published"` - - `"unpublished"` - - `learn_topic_type: "..."` - - The options for this tag are: - - `"Getting Started"` - - `"Concepts"` - - `"Tasks"` - - `"References"` - - `"Collectors References"` - - This is the Topic that the file belongs to, and this is going to resemble the start directory of the file's path on Learn for example if we write `"Concepts"` in the field, then the file is going to be placed under `/Concepts/....` inside Learn. - - `learn_rel_path: "/example/"` - - This tag represents the rest of the path, without the filename in the end, so in this case if the file is a Concept, it would go under `Concepts/example/filename.md`. If you want to place the file under the "root" topic folder, input `"/"`. - - ⚠️ In case any of these "Learn" tags are missing or falsely inputted the file will remain unpublished. This is by design to prevent non-properly tagged files from getting published. + - `learn_status: "..."` + - The options for this tag are: + - `"published"` + - `"unpublished"` + - `learn_topic_type: "..."` + - The options for this tag are: + - `"Getting Started"` + - `"Concepts"` + - `"Tasks"` + - `"References"` + - `"Collectors References"` + - This is the Topic that the file belongs to, and this is going to resemble the start directory of the file's + path on Learn for example if we write `"Concepts"` in the field, then the file is going to be placed + under `/Concepts/....` inside Learn. + - `learn_rel_path: "/example/"` + - This tag represents the rest of the path, without the filename in the end, so in this case if the file is a + Concept, it would go under `Concepts/example/filename.md`. If you want to place the file under the "root" + topic folder, input `"/"`. + - ⚠️ In case any of these "Learn" tags are missing or falsely inputted the file will remain unpublished. This is by + design to prevent non-properly tagged files from getting published. -While Docusaurus can make use of more metadata tags than the above, these are the minimum we require to publish the file on Learn. +While Docusaurus can make use of more metadata tags than the above, these are the minimum we require to publish the file +on Learn. ### Doc Templates @@ -193,10 +225,10 @@ Needs only `url` to server's `server-status?auto`. Here is an example for 2 serv ```yaml jobs: -- name: local - url: http://127.0.0.1/server-status?auto -- name: remote - url: http://203.0.113.10/server-status?auto + - name: local + url: http://127.0.0.1/server-status?auto + - name: remote + url: http://203.0.113.10/server-status?auto ``` For all available options please see @@ -234,7 +266,8 @@ Describe all the information that the user needs to know before proceeding with ## Context -Describe the background information of the Task, the purpose of the Task, and what will the user achieve by completing it. +Describe the background information of the Task, the purpose of the Task, and what will the user achieve by completing +it. ## Steps @@ -268,7 +301,8 @@ The template of the Concept files is: ## Description -In our concepts we have a more loose structure, the goal is to communicate the "concept" to the user, starting with simple language that even a new user can understand, and building from there. +In our concepts we have a more loose structure, the goal is to communicate the "concept" to the user, starting with +simple language that even a new user can understand, and building from there. 
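Pulling the metadata tags described above together, a minimal frontmatter block for a page that should be published on Learn might look roughly like this (every value below is a placeholder, not taken from a real file):

```markdown
---
title: "Example collector reference"
description: "What this page covers, in one sentence"
custom_edit_url: https://github.com/netdata/netdata/edit/master/collectors/COLLECTORS.md
learn_status: "published"
learn_topic_type: "References"
learn_rel_path: "/example/"
---
```

If any of the `learn_*` tags is missing or misspelled, the page stays unpublished, as noted in the list above.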
@@ -335,7 +369,8 @@ Netdata is a global company in every sense, with employees, contributors, and us communicate in a way that is clear and easily understood by everyone. Here are some guidelines, pointers, and questions to be aware of as you write to ensure your writing is universal. Some -of these are expanded into individual sections in the [language, grammar, and mechanics](#language-grammar-and-mechanics) section below. +of these are expanded into individual sections in +the [language, grammar, and mechanics](#language-grammar-and-mechanics) section below. - Would this language make sense to someone who doesn't work here? - Could someone quickly scan this document and understand the material? @@ -364,8 +399,8 @@ of these are expanded into individual sections in the [language, grammar, and me To ensure Netdata's writing is clear, concise, and universal, we have established standards for language, grammar, and certain writing mechanics. However, if you're writing about Netdata for an external publication, such as a guest blog -post, follow that publication's style guide or standards, while keeping the [preferred spelling of Netdata -terms](#netdata-specific-terms) in mind. +post, follow that publication's style guide or standards, while keeping +the [preferred spelling of Netdata terms](#netdata-specific-terms) in mind. ### Active voice @@ -374,7 +409,7 @@ the sentence is action. In passive voice, the subject is acted upon. A famous ex "mistakes were made." | | | -| --------------- | ----------------------------------------------------------------------------------------- | +|-----------------|-------------------------------------------------------------------------------------------| | Not recommended | When an alarm is triggered by a metric, a notification is sent by Netdata. | | **Recommended** | When a metric triggers an alarm, Netdata sends a notification to your preferred endpoint. | @@ -388,16 +423,16 @@ implied, depending on your sentence structure. One valid exception is when a member of the Netdata team or community wants to write about said team or community. | | | -| ------------------------------ | ------------------------------------------------------------ | +|--------------------------------|--------------------------------------------------------------| | Not recommended | To install Netdata, we should try the one-line installer... | | **Recommended** | To install Netdata, you should try the one-line installer... | | **Recommended**, implied "you" | To install Netdata, try the one-line installer... | ### "Easy" or "simple" -Using words that imply the complexity of a task or feature goes against our policy of [universal -communication](#universal-communication). If you claim that a task is easy and the reader struggles to complete it, you -may inadvertently discourage them. +Using words that imply the complexity of a task or feature goes against our policy +of [universal communication](#universal-communication). If you claim that a task is easy and the reader struggles to +complete it, you may inadvertently discourage them. However, if you give users two options and want to relay that one option is genuinely less complex than another, be specific about how and why. @@ -433,7 +468,7 @@ capitalization. In summary: - Capitalize the first word of every new sentence. - Don't use uppercase for emphasis. (Netdata is the BEST!) - Capitalize the names of brands, software, products, and companies according to their official guidelines. 
(Netdata, - Docker, Apache, NGINX) + Docker, Apache, NGINX) - Avoid camel case (NetData) or all caps (NETDATA). Whenever you refer to the company Netdata, Inc., or the open-source monitoring agent the company develops, capitalize @@ -443,7 +478,7 @@ However, if you are referring to a process, user, or group on a Linux system, us inline code block: `` `netdata` ``. | | | -| --------------- | ---------------------------------------------------------------------------------------------- | +|-----------------|------------------------------------------------------------------------------------------------| | Not recommended | The netdata agent, which spawns the netdata process, is actively maintained by netdata, inc. | | **Recommended** | The Netdata Agent, which spawns the `netdata` process, is actively maintained by Netdata, Inc. | @@ -457,7 +492,7 @@ guidelines. Also, don't put a period (`.`) or colon (`:`) at the end of a title or header. | | | -| --------------- | --------------------------------------------------------------------------------------------------- | +|-----------------|-----------------------------------------------------------------------------------------------------| | Not recommended | Getting Started Guide
Service Discovery and Auto-Detection:
Install netdata with docker | | **Recommended** | Getting started guide
Service discovery and auto-detection
Install Netdata with Docker | @@ -471,7 +506,7 @@ When introducing an abbreviation to a document for the first time, give the read shortened version at the same time. For example: > Use Netdata to monitor Extended Berkeley Packet Filter (eBPF) metrics in real-time. -After you define an abbreviation, don't switch back and forth. Use only the abbreviation for the rest of the document. +> After you define an abbreviation, don't switch back and forth. Use only the abbreviation for the rest of the document. You can also use abbreviations in a document's title to keep the title short and relevant. If you do this, you should still introduce the spelled-out name alongside the abbreviation as soon as possible. @@ -482,7 +517,7 @@ When instructing users to take action, give them the context first. By placing t beginning of the sentence, users can immediately know if they want to read more, follow a link, or skip ahead. | | | -| --------------- | ------------------------------------------------------------------------------ | +|-----------------|--------------------------------------------------------------------------------| | Not recommended | Read the reference guide if you'd like to learn more about custom dashboards. | | **Recommended** | If you'd like to learn more about custom dashboards, read the reference guide. | @@ -492,7 +527,7 @@ The Oxford comma is the comma used after the second-to-last item in a list of th before "and" or "or." | | | -| --------------- | ---------------------------------------------------------------------------- | +|-----------------|------------------------------------------------------------------------------| | Not recommended | Netdata can monitor RAM, disk I/O, MySQL queries per second and lm-sensors. | | **Recommended** | Netdata can monitor RAM, disk I/O, MySQL queries per second, and lm-sensors. | @@ -501,19 +536,19 @@ before "and" or "or." Do not mention future releases or upcoming features in writing unless they have been previously communicated via a public roadmap. -In particular, documentation must describe, as accurately as possible, the Netdata Agent _as of the [latest -commit](https://github.com/netdata/netdata/commits/master) in the GitHub repository_. For Netdata Cloud, documentation -must reflect the *current state* of [production](https://app.netdata.cloud). +In particular, documentation must describe, as accurately as possible, the Netdata Agent _as of +the [latest commit](https://github.com/netdata/netdata/commits/master) in the GitHub repository_. For Netdata Cloud, +documentation must reflect the *current state* of [production](https://app.netdata.cloud). ### Informational links Every link should clearly state its destination. Don't use words like "here" to describe where a link will take your reader. -| | | -| --------------- | ------------------------------------------------------------------------------------------ | -| Not recommended | To install Netdata, click [here](/docs/agent/packaging/installer). | -| **Recommended** | To install Netdata, read the [installation instructions](/docs/agent/packaging/installer). | +| | | +|-----------------|-----------------------------------------------------------------------------------------------------------------------------------------| +| Not recommended | To install Netdata, click [here](https://github.com/netdata/netdata/blob/master/packaging/installer/README.md). 
| +| **Recommended** | To install Netdata, read the [installation instructions](https://github.com/netdata/netdata/blob/master/packaging/installer/README.md). | Use links as often as required to provide necessary context. Blog posts and guides require less hyperlinks than documentation. See the section on [linking between documentation](#linking-between-documentation) for guidance on the @@ -546,7 +581,7 @@ Use `NODE` instead of an actual or example IP address/hostname when referencing or API endpoint in a browser. | | | -| --------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +|-----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Not recommended | Navigate to `http://example.com:19999` in your browser to see Netdata's dashboard.
Navigate to `http://203.0.113.0:19999` in your browser to see Netdata's dashboard. | | **Recommended** | Navigate to `http://NODE:19999` in your browser to see Netdata's dashboard. | @@ -563,16 +598,17 @@ Netdata Agent installation will have commands under the same paths. When applica path, providing a recommendation or instructions on how to view the running configuration, which includes the correct paths. -For example, the [configuration](/docs/configure/nodes) doc first teaches users how to find the Netdata config +For example, the [configuration](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md) doc first +teaches users how to find the Netdata config directory and navigate to it, then runs commands from the `/etc/netdata` path so that the instructions are more universal. Don't include full paths, beginning from the system's root (`/`), as these might not work on certain systems. -| | | -| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Not recommended | Use `edit-config` to edit Netdata's configuration: `sudo /etc/netdata/edit-config netdata.conf`. | -| **Recommended** | Use `edit-config` to edit Netdata's configuration by first navigating to your [Netdata config directory](/docs/configure/nodes#the-netdata-config-directory), which is typically at `/etc/netdata`, then running `sudo edit-config netdata.conf`. | +| | | +|-----------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| Not recommended | Use `edit-config` to edit Netdata's configuration: `sudo /etc/netdata/edit-config netdata.conf`. | +| **Recommended** | Use `edit-config` to edit Netdata's configuration by first navigating to your [Netdata config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory), which is typically at `/etc/netdata`, then running `sudo edit-config netdata.conf`. | ### `sudo` @@ -584,7 +620,7 @@ For example, most users need to use `sudo` with the `edit-config` script, becaus by the `netdata` user. Same goes for restarting the Netdata Agent with `systemctl`. | | | -| --------------- | -------------------------------------------------------------------------------------------------------------------------------------------- | +|-----------------|----------------------------------------------------------------------------------------------------------------------------------------------| | Not recommended | Run `edit-config netdata.conf` to configure the Netdata Agent.
Run `systemctl restart netdata` to restart the Netdata Agent. | | **Recommended** | Run `sudo edit-config netdata.conf` to configure the Netdata Agent.
Run `sudo systemctl restart netdata` to restart the Netdata Agent. | @@ -615,14 +651,19 @@ If you want to see all the settings, open the ### MDX and markdown While writing in Docusaurus, you might want to leverage its features that are supported in MDX-formatted files. -One of those that we use is [Tabs](https://docusaurus.io/docs/next/markdown-features/tabs). They use an HTML syntax, which requires some changes in the way we write markdown inside them. +One of those that we use is [Tabs](https://docusaurus.io/docs/next/markdown-features/tabs). They use an HTML syntax, +which requires some changes in the way we write markdown inside them. In detail: -Due to a bug with docusaurus, we prefer to use `<h1>heading</h1>
instead of # H1` so that docusaurus doesn't render the contents of all Tabs on the right hand side, while not being able to navigate them [relative link](https://github.com/facebook/docusaurus/issues/7008). +Due to a bug with docusaurus, we prefer to use `<h1>heading</h1>
instead of # H1` so that docusaurus doesn't render the +contents of all Tabs on the right hand side, while not being able to navigate +them [relative link](https://github.com/facebook/docusaurus/issues/7008). You can use markdown syntax for every other styling you want to do except Admonitions: -For admonitions, follow [this](https://docusaurus.io/docs/markdown-features/admonitions#usage-in-jsx) guide to use admonitions inside JSX. While writing in JSX, all the markdown stylings have to be in HTML format to be rendered properly. +For admonitions, follow [this](https://docusaurus.io/docs/markdown-features/admonitions#usage-in-jsx) guide to use +admonitions inside JSX. While writing in JSX, all the markdown stylings have to be in HTML format to be rendered +properly. ### Frontmatter @@ -645,7 +686,7 @@ this case, replace `/docs` with `/img/seo`, and then rebuild the remainder of th the path with `.png`. A member of the Netdata team will assist in creating the image when publishing the content. For example, here is the frontmatter for the guide about [deploying the Netdata Agent with -Ansible](/guides/deploy/ansible). +Ansible](https://github.com/netdata/netdata/blob/master/docs/guides/deploy/ansible.md). ```markdown @@ -82,24 +83,24 @@ sudo tar -xvf /tmp/prometheus-*linux-amd64.tar.gz -C /opt/prometheus --strip=1 We will use the following `prometheus.yml` file. Save it at `/opt/prometheus/prometheus.yml`. -Make sure to replace `your.netdata.ip` with the IP or hostname of the host running Netdata. +Make sure to replace `your.netdata.ip` with the IP or hostname of the host running Netdata. ```yaml # my global config global: - scrape_interval: 5s # Set the scrape interval to every 5 seconds. Default is every 1 minute. + scrape_interval: 5s # Set the scrape interval to every 5 seconds. Default is every 1 minute. evaluation_interval: 5s # Evaluate rules every 5 seconds. The default is every 1 minute. # scrape_timeout is set to the global default (10s). # Attach these labels to any time series or alerts when communicating with # external systems (federation, remote storage, Alertmanager). external_labels: - monitor: 'codelab-monitor' + monitor: 'codelab-monitor' # Load rules once and periodically evaluate them according to the global 'evaluation_interval'. rule_files: - # - "first.rules" - # - "second.rules" +# - "first.rules" +# - "second.rules" # A scrape configuration containing exactly one endpoint to scrape: # Here it's Prometheus itself. @@ -111,7 +112,7 @@ scrape_configs: # scheme defaults to 'http'. static_configs: - - targets: ['0.0.0.0:9090'] + - targets: [ '0.0.0.0:9090' ] - job_name: 'netdata-scrape' @@ -119,7 +120,7 @@ scrape_configs: params: # format: prometheus | prometheus_all_hosts # You can use `prometheus_all_hosts` if you want Prometheus to set the `instance` to your hostname instead of IP - format: [prometheus] + format: [ prometheus ] # # sources: as-collected | raw | average | sum | volume # default is: average @@ -131,7 +132,7 @@ scrape_configs: honor_labels: true static_configs: - - targets: ['{your.netdata.ip}:19999'] + - targets: [ '{your.netdata.ip}:19999' ] ``` #### Install nodes.yml @@ -207,7 +208,7 @@ sudo systemctl start prometheus sudo systemctl enable prometheus ``` -Prometheus should now start and listen on port 9090. Attempt to head there with your browser. +Prometheus should now start and listen on port 9090. Attempt to head there with your browser. If everything is working correctly when you fetch `http://your.prometheus.ip:9090` you will see a 'Status' tab. 
Click this and click on 'targets' We should see the Netdata host as a scraped target. @@ -224,16 +225,16 @@ Before explaining the changes, we have to understand the key differences between Each chart in Netdata has several properties (common to all its metrics): -- `chart_id` - uniquely identifies a chart. +- `chart_id` - uniquely identifies a chart. -- `chart_name` - a more human friendly name for `chart_id`, also unique. +- `chart_name` - a more human friendly name for `chart_id`, also unique. -- `context` - this is the template of the chart. All disk I/O charts have the same context, all mysql requests charts - have the same context, etc. This is used for alarm templates to match all the charts they should be attached to. +- `context` - this is the template of the chart. All disk I/O charts have the same context, all mysql requests charts + have the same context, etc. This is used for alarm templates to match all the charts they should be attached to. -- `family` groups a set of charts together. It is used as the submenu of the dashboard. +- `family` groups a set of charts together. It is used as the submenu of the dashboard. -- `units` is the units for all the metrics attached to the chart. +- `units` is the units for all the metrics attached to the chart. #### dimensions @@ -245,44 +246,44 @@ they are both in the same chart). Netdata can send metrics to Prometheus from 3 data sources: -- `as collected` or `raw` - this data source sends the metrics to Prometheus as they are collected. No conversion is - done by Netdata. The latest value for each metric is just given to Prometheus. This is the most preferred method by - Prometheus, but it is also the harder to work with. To work with this data source, you will need to understand how - to get meaningful values out of them. +- `as collected` or `raw` - this data source sends the metrics to Prometheus as they are collected. No conversion is + done by Netdata. The latest value for each metric is just given to Prometheus. This is the most preferred method by + Prometheus, but it is also the harder to work with. To work with this data source, you will need to understand how + to get meaningful values out of them. - The format of the metrics is: `CONTEXT{chart="CHART",family="FAMILY",dimension="DIMENSION"}`. + The format of the metrics is: `CONTEXT{chart="CHART",family="FAMILY",dimension="DIMENSION"}`. - If the metric is a counter (`incremental` in Netdata lingo), `_total` is appended the context. + If the metric is a counter (`incremental` in Netdata lingo), `_total` is appended the context. - Unlike Prometheus, Netdata allows each dimension of a chart to have a different algorithm and conversion constants - (`multiplier` and `divisor`). In this case, that the dimensions of a charts are heterogeneous, Netdata will use this - format: `CONTEXT_DIMENSION{chart="CHART",family="FAMILY"}` + Unlike Prometheus, Netdata allows each dimension of a chart to have a different algorithm and conversion constants + (`multiplier` and `divisor`). In this case, that the dimensions of a charts are heterogeneous, Netdata will use this + format: `CONTEXT_DIMENSION{chart="CHART",family="FAMILY"}` -- `average` - this data source uses the Netdata database to send the metrics to Prometheus as they are presented on - the Netdata dashboard. So, all the metrics are sent as gauges, at the units they are presented in the Netdata - dashboard charts. This is the easiest to work with. 
+- `average` - this data source uses the Netdata database to send the metrics to Prometheus as they are presented on + the Netdata dashboard. So, all the metrics are sent as gauges, at the units they are presented in the Netdata + dashboard charts. This is the easiest to work with. - The format of the metrics is: `CONTEXT_UNITS_average{chart="CHART",family="FAMILY",dimension="DIMENSION"}`. + The format of the metrics is: `CONTEXT_UNITS_average{chart="CHART",family="FAMILY",dimension="DIMENSION"}`. - When this source is used, Netdata keeps track of the last access time for each Prometheus server fetching the - metrics. This last access time is used at the subsequent queries of the same Prometheus server to identify the - time-frame the `average` will be calculated. + When this source is used, Netdata keeps track of the last access time for each Prometheus server fetching the + metrics. This last access time is used at the subsequent queries of the same Prometheus server to identify the + time-frame the `average` will be calculated. - So, no matter how frequently Prometheus scrapes Netdata, it will get all the database data. - To identify each Prometheus server, Netdata uses by default the IP of the client fetching the metrics. - - If there are multiple Prometheus servers fetching data from the same Netdata, using the same IP, each Prometheus - server can append `server=NAME` to the URL. Netdata will use this `NAME` to uniquely identify the Prometheus server. + So, no matter how frequently Prometheus scrapes Netdata, it will get all the database data. + To identify each Prometheus server, Netdata uses by default the IP of the client fetching the metrics. -- `sum` or `volume`, is like `average` but instead of averaging the values, it sums them. + If there are multiple Prometheus servers fetching data from the same Netdata, using the same IP, each Prometheus + server can append `server=NAME` to the URL. Netdata will use this `NAME` to uniquely identify the Prometheus server. - The format of the metrics is: `CONTEXT_UNITS_sum{chart="CHART",family="FAMILY",dimension="DIMENSION"}`. All the - other operations are the same with `average`. +- `sum` or `volume`, is like `average` but instead of averaging the values, it sums them. - To change the data source to `sum` or `as-collected` you need to provide the `source` parameter in the request URL. - e.g.: `http://your.netdata.ip:19999/api/v1/allmetrics?format=prometheus&help=yes&source=as-collected` + The format of the metrics is: `CONTEXT_UNITS_sum{chart="CHART",family="FAMILY",dimension="DIMENSION"}`. All the + other operations are the same with `average`. - Keep in mind that early versions of Netdata were sending the metrics as: `CHART_DIMENSION{}`. + To change the data source to `sum` or `as-collected` you need to provide the `source` parameter in the request URL. + e.g.: `http://your.netdata.ip:19999/api/v1/allmetrics?format=prometheus&help=yes&source=as-collected` + + Keep in mind that early versions of Netdata were sending the metrics as: `CHART_DIMENSION{}`. ### Querying Metrics @@ -369,7 +370,7 @@ functionality of Netdata this ignores any upstream hosts - so you should conside ```yaml metrics_path: '/api/v1/allmetrics' params: - format: [prometheus_all_hosts] + format: [ prometheus_all_hosts ] honor_labels: true ``` @@ -394,7 +395,9 @@ To save bandwidth, and because Prometheus does not use them anyway, `# TYPE` and wanted they can be re-enabled via `types=yes` and `help=yes`, e.g. 
`/api/v1/allmetrics?format=prometheus&types=yes&help=yes` -Note that if enabled, the `# TYPE` and `# HELP` lines are repeated for every occurrence of a metric, which goes against the Prometheus documentation's [specification for these lines](https://github.com/prometheus/docs/blob/master/content/docs/instrumenting/exposition_formats.md#comments-help-text-and-type-information). +Note that if enabled, the `# TYPE` and `# HELP` lines are repeated for every occurrence of a metric, which goes against +the Prometheus +documentation's [specification for these lines](https://github.com/prometheus/docs/blob/master/content/docs/instrumenting/exposition_formats.md#comments-help-text-and-type-information). ### Names and IDs @@ -413,8 +416,8 @@ The default is controlled in `exporting.conf`: You can overwrite it from Prometheus, by appending to the URL: -- `&names=no` to get IDs (the old behaviour) -- `&names=yes` to get names +- `&names=no` to get IDs (the old behaviour) +- `&names=yes` to get names ### Filtering metrics sent to Prometheus @@ -425,7 +428,8 @@ Netdata can filter the metrics it sends to Prometheus with this setting: send charts matching = * ``` -This settings accepts a space separated list of [simple patterns](/libnetdata/simple_pattern/README.md) to match the +This settings accepts a space separated list +of [simple patterns](https://github.com/netdata/netdata/blob/master/libnetdata/simple_pattern/README.md) to match the **charts** to be sent to Prometheus. Each pattern can use `*` as wildcard, any number of times (e.g `*a*b*c*` is valid). Patterns starting with `!` give a negative match (e.g `!*.bad users.* groups.*` will send all the users and groups except `bad` user and `bad` group). The order is important: the first match (positive or negative) left to right, is diff --git a/exporting/prometheus/remote_write/README.md b/exporting/prometheus/remote_write/README.md index 22f91237fc..9bda02d49c 100644 --- a/exporting/prometheus/remote_write/README.md +++ b/exporting/prometheus/remote_write/README.md @@ -18,7 +18,7 @@ than 20 external storage providers for long-term archiving and further analysis. To use the Prometheus remote write API with [storage providers](https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage), install [protobuf](https://developers.google.com/protocol-buffers/) and [snappy](https://github.com/google/snappy) libraries. -Next, [reinstall Netdata](/packaging/installer/REINSTALL.md), which detects that the required libraries and utilities +Next, [reinstall Netdata](https://github.com/netdata/netdata/blob/master/packaging/installer/REINSTALL.md), which detects that the required libraries and utilities are now available. ## Configuration diff --git a/health/README.md b/health/README.md index 09390ba343..460f65680f 100644 --- a/health/README.md +++ b/health/README.md @@ -14,15 +14,13 @@ worked closely with our community of DevOps engineers, SREs, and developers to d alarms that work without any configuration. The Agent's health monitoring system is also dynamic and fully customizable. You can write entirely new alarms, tune the -community-configured alarms for every app/service [the Agent collects metrics from](/collectors/COLLECTORS.md), or +community-configured alarms for every app/service [the Agent collects metrics from](https://github.com/netdata/netdata/blob/master/collectors/COLLECTORS.md), or silence anything you're not interested in. You can even power complex lookups by running statistical algorithms against your metrics. 
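To give a feel for what an entirely new alarm looks like, here is a minimal, hypothetical health entity (the name, chart, and thresholds are made up; the full line-by-line syntax lives in the configuration reference linked below):

```yaml
 alarm: example_10min_cpu_usage
    on: system.cpu
lookup: average -10m unaligned of user,system
 units: %
 every: 1m
  warn: $this > 75
  crit: $this > 90
  info: average CPU utilization over the last 10 minutes
```

An entity like this would live in its own `.conf` file under `health.d/`, edited with `edit-config` like any other health configuration.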
Ready to take the next steps with health monitoring? -[Quickstart](https://github.com/netdata/netdata/edit/master/health/QUICKSTART.md) - -[Configuration reference](https://github.com/netdata/netdata/edit/master/health/REFERENCE.md) +[Configuration reference](https://github.com/netdata/netdata/blob/master/health/REFERENCE.md) ## Guides @@ -30,13 +28,13 @@ Every infrastructure is different, so we're not interested in mandating how you monitoring features. Instead, these guides should give you the details you need to tweak alarms to your heart's content. -[Stopping notifications for individual alarms](https://github.com/netdata/netdata/edit/master/docs/guides/monitor/stop-notifications-alarms.md) +[Stopping notifications for individual alarms](https://github.com/netdata/netdata/blob/master/docs/guides/monitor/stop-notifications-alarms.md) -[Use dimension templates to create dynamic alarms](https://github.com/netdata/netdata/edit/master/docs/guides/monitor/dimension-templates.md) +[Use dimension templates to create dynamic alarms](https://github.com/netdata/netdata/blob/master/docs/guides/monitor/dimension-templates.md) ## Related features -**[Notifications](https://github.com/netdata/netdata/edit/master/health/notifications/README.md)**: Get notified about ongoing alarms from your Agents via your +**[Notifications](https://github.com/netdata/netdata/blob/master/health/notifications/README.md)**: Get notified about ongoing alarms from your Agents via your favorite platform(s), such as Slack, Discord, PagerDuty, email, and much more. diff --git a/health/REFERENCE.md b/health/REFERENCE.md index 8f64953d47..27031cd19c 100644 --- a/health/REFERENCE.md +++ b/health/REFERENCE.md @@ -15,7 +15,7 @@ This guide contains information about editing health configuration files to twea entities that are customized to the needs of your infrastructure. To learn the basics of locating and editing health configuration files, see the [health -quickstart](/health/QUICKSTART.md). +quickstart](https://github.com/netdata/netdata/blob/master/health/QUICKSTART.md). ## Health configuration files @@ -23,7 +23,7 @@ You can configure the Agent's health watchdog service by editing files in two lo - The `[health]` section in `netdata.conf`. By editing the daemon's behavior, you can disable health monitoring altogether, run health checks more or less often, and more. See [daemon - configuration](/daemon/config/README.md#health-section-options) for a table of all the available settings, their + configuration](https://github.com/netdata/netdata/blob/master/daemon/config/README.md#health-section-options) for a table of all the available settings, their default values, and what they control. - The individual `.conf` files in `health.d/`. These health entity files are organized by the type of metric they are performing calculations on or their associated collector. You should edit these files using the `edit-config` @@ -56,7 +56,7 @@ Netdata parses the following lines. Beneath the table is an in-depth explanation - The `every` line is **required** if not using `lookup`. - Each entity **must** have at least one of the following lines: `lookup`, `calc`, `warn`, or `crit`. - A few lines use space-separated lists to define how the entity behaves. You can use `*` as a wildcard or prefix with - `!` for a negative match. Order is important, too! See our [simple patterns docs](/libnetdata/simple_pattern/README.md) for + `!` for a negative match. Order is important, too! 
See our [simple patterns docs](https://github.com/netdata/netdata/blob/master/libnetdata/simple_pattern/README.md) for more examples. - Lines terminated by a `\` are spliced together with the next line. The backslash is removed and the following line is joined with the current one. No space is inserted, so you may split a line anywhere, even in the middle of a word. @@ -240,7 +240,7 @@ hosts: server1 server2 database* !redis3 redis* #### Alarm line `plugin` The `plugin` line filters which plugin within the context this alarm should apply to. The value is a space-separated -list of [simple patterns](/libnetdata/simple_pattern/README.md). For example, +list of [simple patterns](https://github.com/netdata/netdata/blob/master/libnetdata/simple_pattern/README.md). For example, you can create a filter for an alarm that applies specifically to `python.d.plugin`: ```yaml @@ -254,7 +254,7 @@ comprehensive example using both. #### Alarm line `module` The `module` line filters which module within the context this alarm should apply to. The value is a space-separated -list of [simple patterns](/libnetdata/simple_pattern/README.md). For +list of [simple patterns](https://github.com/netdata/netdata/blob/master/libnetdata/simple_pattern/README.md). For example, you can create an alarm that applies only on the `isc_dhcpd` module started by `python.d.plugin`: ```yaml @@ -266,7 +266,7 @@ module: isc_dhcpd The `charts` line filters which chart this alarm should apply to. It is only available on entities using the [`template`](#alarm-line-alarm-or-template) line. -The value is a space-separated list of [simple patterns](/libnetdata/simple_pattern/README.md). For +The value is a space-separated list of [simple patterns](https://github.com/netdata/netdata/blob/master/libnetdata/simple_pattern/README.md). For example, a template that applies to `disk.svctm` (Average Service Time) context, but excludes the disk `sdb` from alarms: ```yaml @@ -280,7 +280,7 @@ template: disk_svctm_alarm The `families` line, used only alongside templates, filters which families within the context this alarm should apply to. The value is a space-separated list. -The value is a space-separate list of simple patterns. See our [simple patterns docs](/libnetdata/simple_pattern/README.md) for +The value is a space-separate list of simple patterns. See our [simple patterns docs](https://github.com/netdata/netdata/blob/master/libnetdata/simple_pattern/README.md) for some examples. For example, you can create a template on the `disk.io` context, but filter it to only the `sda` and `sdb` families: @@ -299,7 +299,7 @@ The format is: lookup: METHOD AFTER [at BEFORE] [every DURATION] [OPTIONS] [of DIMENSIONS] [foreach DIMENSIONS] ``` -Everything is the same with [badges](/web/api/badges/README.md). In short: +Everything is the same with [badges](https://github.com/netdata/netdata/blob/master/web/api/badges/README.md). In short: - `METHOD` is one of `average`, `min`, `max`, `sum`, `incremental-sum`. This is required. @@ -316,7 +316,7 @@ Everything is the same with [badges](/web/api/badges/README.md). In short: above too). - `OPTIONS` is a space separated list of `percentage`, `absolute`, `min2max`, `unaligned`, - `match-ids`, `match-names`. Check the [badges](/web/api/badges/README.md) documentation for more info. + `match-ids`, `match-names`. Check the [badges](https://github.com/netdata/netdata/blob/master/web/api/badges/README.md) documentation for more info. - `of DIMENSIONS` is optional and has to be the last parameter. 
Dimensions have to be separated by `,` or `|`. The space characters found in dimensions will be kept as-is (a few dimensions @@ -503,7 +503,7 @@ good idea to tell Netdata to not clear the notification, by using the `no-clear- #### Alarm line `host labels` -Defines the list of labels present on a host. See our [host labels guide](/docs/guides/using-host-labels.md) for +Defines the list of labels present on a host. See our [host labels guide](https://github.com/netdata/netdata/blob/master/docs/guides/using-host-labels.md) for an explanation of host labels and how to implement them. For example, let's suppose that `netdata.conf` is configured with the following labels: @@ -536,7 +536,7 @@ that will be applied to all hosts installed in the last decade with the followin host labels: installed = 201* ``` -See our [simple patterns docs](/libnetdata/simple_pattern/README.md) for more examples. +See our [simple patterns docs](https://github.com/netdata/netdata/blob/master/libnetdata/simple_pattern/README.md) for more examples. #### Alarm line `info` @@ -651,15 +651,15 @@ You can find all the variables that can be used for a given chart, using Agent dashboard. For example, [variables for the `system.cpu` chart of the registry](https://registry.my-netdata.io/api/v1/alarm_variables?chart=system.cpu). -> If you don't know how to find the CHART_NAME, you can read about it [here](/web/README.md#charts). +> If you don't know how to find the CHART_NAME, you can read about it [here](https://github.com/netdata/netdata/blob/master/web/README.md#charts). Netdata supports 3 internal indexes for variables that will be used in health monitoring.
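The `alarm_variables` endpoint shown above can also be pointed at your own Agent. A quick sketch, assuming the Agent listens on the default `localhost:19999`:

```bash
# List every variable the health engine exposes for a given chart on the local Agent
curl "http://localhost:19999/api/v1/alarm_variables?chart=system.cpu"
```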
The variables below can be used in both chart alarms and context templates. Although the `alarm_variables` link shows you variables for a particular chart, the same variables can also be used in -templates for charts belonging to a given [context](/web/README.md#contexts). The reason is that all charts of a given -context are essentially identical, with the only difference being the [family](/web/README.md#families) that +templates for charts belonging to a given [context](https://github.com/netdata/netdata/blob/master/web/README.md#contexts). The reason is that all charts of a given +context are essentially identical, with the only difference being the [family](https://github.com/netdata/netdata/blob/master/web/README.md#families) that identifies a particular hardware or software instance. Charts and templates do not apply to specific families anyway, unless you explicitly limit an alarm with the [alarm line `families`](#alarm-line-families).
@@ -999,7 +999,7 @@ The `lookup` line will use the `anomaly_rate` dimension of the `anomaly_detectio ## Troubleshooting -You can compile Netdata with [debugging](/daemon/README.md#debugging) and then set in `netdata.conf`: +You can compile Netdata with [debugging](https://github.com/netdata/netdata/blob/master/daemon/README.md#debugging) and then set in `netdata.conf`: ```yaml [global]
@@ -1022,6 +1022,6 @@ expression. It's currently not possible to schedule notifications from within the alarm template. For those scenarios where you need to temporarily disable notifications (for instance when running backups triggers a disk alert) you can disable or silence notifications at runtime. The health checks can be controlled at runtime via the [health management -api](/web/api/health/README.md). +api](https://github.com/netdata/netdata/blob/master/web/api/health/README.md).
diff --git a/health/notifications/custom/README.md b/health/notifications/custom/README.md index 525503339b..df8f88e403 100644 --- a/health/notifications/custom/README.md +++ b/health/notifications/custom/README.md @@ -13,8 +13,8 @@ learn_autogeneration_metadata: "{'part_of_cloud': False, 'part_of_agent': True}" Netdata allows you to send custom notifications to any endpoint you choose. To configure custom notifications, you will need to customize `health_alarm_notify.conf`. Open the file for editing -using [`edit-config`](/docs/configure/nodes.md#use-edit-config-to-edit-configuration-files) from the [Netdata config -directory](/docs/configure/nodes.md#the-netdata-config-directory), which is typically at `/etc/netdata`. +using [`edit-config`](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#use-edit-config-to-edit-configuration-files) from the [Netdata config +directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory), which is typically at `/etc/netdata`. You can look at the other senders in `/usr/libexec/netdata/plugins.d/alarm-notify.sh` for examples of how to modify the `custom_sender()` function in `health_alarm_notify.conf`.
diff --git a/health/notifications/gotify/README.md b/health/notifications/gotify/README.md index e6158ccd56..d01502b65a 100644 --- a/health/notifications/gotify/README.md +++ b/health/notifications/gotify/README.md @@ -25,7 +25,7 @@ You can generate a new token in the Gotify Web UI. To set up Gotify in Netdata: 1. Switch to your [config -directory](/docs/configure/nodes.md) and edit the file `health_alarm_notify.conf` using the edit config script.
+directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md) and edit the file `health_alarm_notify.conf` using the edit config script. ```bash ./edit-config health_alarm_notify.conf diff --git a/health/notifications/opsgenie/README.md b/health/notifications/opsgenie/README.md index 2ee353003f..20f14b396a 100644 --- a/health/notifications/opsgenie/README.md +++ b/health/notifications/opsgenie/README.md @@ -17,9 +17,9 @@ incidents. The first step is to create a [Netdata integration](https://docs.opsgenie.com/docs/api-integration) in the [Opsgenie](https://www.atlassian.com/software/opsgenie) dashboard. After this, you need to edit -`health_alarm_notify.conf` on your system, by running the following from your [config -directory](/docs/configure/nodes.md): - +`health_alarm_notify.conf` on your system, by running the following from +your [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md): + ```bash ./edit-config health_alarm_notify.conf ``` @@ -60,7 +60,7 @@ message: 2020-09-03 23:07:00: alarm-notify.sh: ERROR: failed to send opsgenie notification for: hades test.chart.test_alarm is CRITICAL, with HTTP error code 401. ``` -You can find more details about the Opsgenie error codes in their [response -docs](https://docs.opsgenie.com/docs/response). +You can find more details about the Opsgenie error codes in +their [response docs](https://docs.opsgenie.com/docs/response). diff --git a/health/notifications/pagerduty/README.md b/health/notifications/pagerduty/README.md index f52578cf38..c6190e83f7 100644 --- a/health/notifications/pagerduty/README.md +++ b/health/notifications/pagerduty/README.md @@ -18,7 +18,7 @@ resolution times. ## What you need to get started -- An installation of the open-source [Netdata](/docs/get-started.mdx) monitoring agent. +- An installation of the open-source [Netdata](https://github.com/netdata/netdata/blob/master/docs/get-started.mdx) monitoring agent. - An installation of the [PagerDuty agent](https://www.pagerduty.com/docs/guides/agent-install-guide/) on the node running Netdata. - A PagerDuty `Generic API` service using either the `Events API v2` or `Events API v1`. @@ -29,8 +29,8 @@ resolution times. to PagerDuty. Click **Use our API directly** and select either `Events API v2` or `Events API v1`. Once you finish creating the service, click on the **Integrations** tab to find your **Integration Key**. -Navigate to the [Netdata config directory](/docs/configure/nodes.md#the-netdata-config-directory) and use -[`edit-config`](/docs/configure/nodes.md#use-edit-config-to-edit-configuration-files) to open +Navigate to the [Netdata config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory) and use +[`edit-config`](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#use-edit-config-to-edit-configuration-files) to open `health_alarm_notify.conf`. ```bash @@ -63,5 +63,5 @@ sudo su -s /bin/bash netdata Aside from the three values set in `health_alarm_notify.conf`, there is no further configuration required to send alert notifications to PagerDuty. -To configure individual alarms, read our [alert configuration](/docs/monitor/configure-alarms.md) doc or -the [health entity reference](/health/REFERENCE.md) doc. 
+To configure individual alarms, read our [alert configuration](https://github.com/netdata/netdata/blob/master/docs/monitor/configure-alarms.md) doc or +the [health entity reference](https://github.com/netdata/netdata/blob/master/health/REFERENCE.md) doc. diff --git a/health/notifications/stackpulse/README.md b/health/notifications/stackpulse/README.md index da2a084a71..25266e8225 100644 --- a/health/notifications/stackpulse/README.md +++ b/health/notifications/stackpulse/README.md @@ -44,7 +44,7 @@ STACKPULSE_WEBHOOK="https://hooks.stackpulse.io/v1/webhooks/YOUR_UNIQUE_ID" ``` 4. Now restart Netdata using `sudo systemctl restart netdata`, or the [appropriate - method](/docs/configure/start-stop-restart.md) for your system. When your node creates an alarm, you can see the + method](https://github.com/netdata/netdata/blob/master/docs/configure/start-stop-restart.md) for your system. When your node creates an alarm, you can see the associated notification on your StackPulse Administration Portal ## React to alarms with playbooks diff --git a/libnetdata/procfile/README.md b/libnetdata/procfile/README.md index 65638030d4..97861a9113 100644 --- a/libnetdata/procfile/README.md +++ b/libnetdata/procfile/README.md @@ -28,7 +28,7 @@ For each iteration, the caller: - calls `procfile_readall()` to read updated contents. This call also rewinds (`lseek()` to 0) before reading it. - For every file, a [BUFFER](/libnetdata/buffer/README.md) is used that is automatically adjusted to fit the entire + For every file, a [BUFFER](https://github.com/netdata/netdata/blob/master/libnetdata/buffer/README.md) is used that is automatically adjusted to fit the entire file contents of the file. So the file is read with a single `read()` call (providing atomicity / consistency when the data are read from the kernel). diff --git a/ml/README.md b/ml/README.md index a0abdbccdc..7f3ed276bc 100644 --- a/ml/README.md +++ b/ml/README.md @@ -114,7 +114,7 @@ To enable or disable anomaly detection: 2. In the `[ml]` section, set `enabled = yes` to enable or `enabled = no` to disable. 3. Restart netdata (typically `sudo systemctl restart netdata`). -**Note**: If you would like to learn more about configuring Netdata please see [the configuration guide](https://learn.netdata.cloud/guides/step-by-step/step-04). +**Note**: If you would like to learn more about configuring Netdata please see [the configuration guide](https://github.com/netdata/netdata/blob/master/docs/guides/step-by-step/step-04.md). Below is a list of all the available configuration params and their default values. @@ -143,7 +143,7 @@ Below is a list of all the available configuration params and their default valu If you would like to run ML on a parent instead of at the edge, some configuration options are illustrated below. -This example assumes 3 child nodes [streaming](https://learn.netdata.cloud/docs/agent/streaming) to 1 parent node and illustrates the main ways you might want to configure running ML for the children on the parent, running ML on the children themselves, or even a mix of approaches. +This example assumes 3 child nodes [streaming](https://github.com/netdata/netdata/blob/master/streaming/README.md) to 1 parent node and illustrates the main ways you might want to configure running ML for the children on the parent, running ML on the children themselves, or even a mix of approaches. 
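For reference, the switch itself is a single setting in `netdata.conf` on whichever node should train models; a minimal sketch, assuming every other `[ml]` parameter is left at its default:

```conf
[ml]
    # train models and flag anomalies on this node (step 2 above); set to "no" to disable
    enabled = yes
```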
![parent_child_options](https://user-images.githubusercontent.com/2178292/164439761-8fb7dddd-c4d8-4329-9f44-9a794937a086.png) @@ -265,4 +265,4 @@ The anomaly rate across all dimensions of a node. - Netdata uses [dlib](https://github.com/davisking/dlib) under the hood for its core ML features. - You should benchmark Netdata resource usage before and after enabling ML. Typical overhead ranges from 1-2% additional CPU at most. - The "anomaly bit" has been implemented to be a building block to underpin many more ML based use cases that we plan to deliver soon. -- At its core Netdata uses an approach and problem formulation very similar to the Netdata python [anomalies collector](https://learn.netdata.cloud/docs/agent/collectors/python.d.plugin/anomalies), just implemented in a much much more efficient and scalable way in the agent in c++. So if you would like to learn more about the approach and are familiar with Python that is a useful resource to explore, as is the corresponding [deep dive tutorial](https://nbviewer.org/github/netdata/community/blob/main/netdata-agent-api/netdata-pandas/anomalies_collector_deepdive.ipynb) where the default model used is PCA instead of K-Means but the overall approach and formulation is similar. +- At its core Netdata uses an approach and problem formulation very similar to the Netdata python [anomalies collector](https://github.com/netdata/netdata/blob/master/collectors/python.d.plugin/anomalies/README.md), just implemented in a much much more efficient and scalable way in the agent in c++. So if you would like to learn more about the approach and are familiar with Python that is a useful resource to explore, as is the corresponding [deep dive tutorial](https://nbviewer.org/github/netdata/community/blob/main/netdata-agent-api/netdata-pandas/anomalies_collector_deepdive.ipynb) where the default model used is PCA instead of K-Means but the overall approach and formulation is similar. diff --git a/packaging/PLATFORM_SUPPORT.md b/packaging/PLATFORM_SUPPORT.md index 084672d677..62e208c73f 100644 --- a/packaging/PLATFORM_SUPPORT.md +++ b/packaging/PLATFORM_SUPPORT.md @@ -39,7 +39,8 @@ The following table shows a general outline of the various support tiers and cat | Previously Supported | Users asked to upgrade | None | None | Yes, but only already published versions | Best Effort | - ‘Bug Support’: How we handle of platform-specific bugs. -- ‘Guaranteed Configurations’: Which runtime configurations for the agent we try to guarantee will work with minimal effort from users. +- ‘Guaranteed Configurations’: Which runtime configurations for the agent we try to guarantee will work with minimal + effort from users. - ‘CI Coverage’: What level of coverage we provide for the platform in CI. - ‘Native Packages’: Whether we provide native packages for the system package manager for the platform. - ‘Static Build Support’: How well our static builds are expected to work on the platform. @@ -50,31 +51,32 @@ The following table shows a general outline of the various support tiers and cat Platforms in the core support tier are our top priority. They are covered rigorously in our CI, usually include official binary packages, and any platform-specific bugs receive a high priority. From the perspective -of our developers, platforms in the core support tier _must_ work, with almost no exceptions. Our [static -builds](#static-builds) are expected to work on these platforms if available. 
Source-based installs are expected +of our developers, platforms in the core support tier _must_ work, with almost no exceptions. +Our [static builds](#static-builds) are expected to work on these platforms if available. Source-based installs are +expected to work on these platforms with minimal user effort. -| Platform | Version | Official Native Packages | Notes | -| -------- | ------- | ------------------------ | ----- | -| Alpine Linux | 3.17 | No | The latest release of Alpine Linux is guaranteed to remain at **Core** tier due to usage for our Docker images | -| Alma Linux | 9.x | x86\_64, AArch64 | Also includes support for Rocky Linux and other ABI compatible RHEL derivatives | -| Alma Linux | 8.x | x86\_64, AArch64 | Also includes support for Rocky Linux and other ABI compatible RHEL derivatives | -| CentOS | 7.x | x86\_64 | | -| Docker | 19.03 or newer | x86\_64, i386, ARMv7, AArch64, POWER8+ | See our [Docker documentation](/packaging/docker/README.md) for more info on using Netdata on Docker | -| Debian | 11.x | x86\_64, i386, ARMv7, AArch64 | | -| Debian | 10.x | x86\_64, i386, ARMv7, AArch64 | | -| Fedora | 37 | x86\_64, AArch64 | | -| Fedora | 36 | x86\_64, AArch64 | | -| openSUSE | Leap 15.4 | x86\_64, AArch64 | | -| Oracle Linux | 9.x | x86\_64, AArch64 | | -| Oracle Linux | 8.x | x86\_64, AArch64 | | -| Red Hat Enterprise Linux | 9.x | x86\_64, AArch64 | | -| Red Hat Enterprise Linux | 8.x | x86\_64, AArch64 | | -| Red Hat Enterprise Linux | 7.x | x86\_64 | | -| Ubuntu | 22.10 | x86\_64, ARMv7, AArch64 | | -| Ubuntu | 22.04 | x86\_64, ARMv7, AArch64 | | -| Ubuntu | 20.04 | x86\_64, ARMv7, AArch64 | | -| Ubuntu | 18.04 | x86\_64, i386, ARMv7, AArch64 | | +| Platform | Version | Official Native Packages | Notes | +|--------------------------|----------------|----------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------| +| Alpine Linux | 3.17 | No | The latest release of Alpine Linux is guaranteed to remain at **Core** tier due to usage for our Docker images | +| Alma Linux | 9.x | x86\_64, AArch64 | Also includes support for Rocky Linux and other ABI compatible RHEL derivatives | +| Alma Linux | 8.x | x86\_64, AArch64 | Also includes support for Rocky Linux and other ABI compatible RHEL derivatives | +| CentOS | 7.x | x86\_64 | | +| Docker | 19.03 or newer | x86\_64, i386, ARMv7, AArch64, POWER8+ | See our [Docker documentation](https://github.com/netdata/netdata/blob/master/packaging/docker/README.md) for more info on using Netdata on Docker | +| Debian | 11.x | x86\_64, i386, ARMv7, AArch64 | | +| Debian | 10.x | x86\_64, i386, ARMv7, AArch64 | | +| Fedora | 37 | x86\_64, AArch64 | | +| Fedora | 36 | x86\_64, AArch64 | | +| openSUSE | Leap 15.4 | x86\_64, AArch64 | | +| Oracle Linux | 9.x | x86\_64, AArch64 | | +| Oracle Linux | 8.x | x86\_64, AArch64 | | +| Red Hat Enterprise Linux | 9.x | x86\_64, AArch64 | | +| Red Hat Enterprise Linux | 8.x | x86\_64, AArch64 | | +| Red Hat Enterprise Linux | 7.x | x86\_64 | | +| Ubuntu | 22.10 | x86\_64, ARMv7, AArch64 | | +| Ubuntu | 22.04 | x86\_64, ARMv7, AArch64 | | +| Ubuntu | 20.04 | x86\_64, ARMv7, AArch64 | | +| Ubuntu | 18.04 | x86\_64, i386, ARMv7, AArch64 | | ### Intermediate @@ -85,13 +87,13 @@ platforms that we officially support ourselves to the intermediate tier. Our [st expected to work on these platforms if available. 
Source-based installs are expected to work on these platforms with minimal user effort. -| Platform | Version | Official Native Packages | Notes | -| -------- | ------- | ------------------------ | ----- | -| Alpine Linux | 3.16 | No | | -| Alpine Linux | 3.15 | No | | -| Alpine Linux | 3.14 | No | | -| Arch Linux | Latest | No | We officially recommend the community packages available for Arch Linux | -| Manjaro Linux | Latest | No | We officially recommend the community packages available for Arch Linux | +| Platform | Version | Official Native Packages | Notes | +|---------------|---------|--------------------------|-------------------------------------------------------------------------| +| Alpine Linux | 3.16 | No | | +| Alpine Linux | 3.15 | No | | +| Alpine Linux | 3.14 | No | | +| Arch Linux | Latest | No | We officially recommend the community packages available for Arch Linux | +| Manjaro Linux | Latest | No | We officially recommend the community packages available for Arch Linux | ### Community @@ -101,19 +103,19 @@ to add support for a new platform, that platform generally will start in this ti are expected to work on these platforms if available. Source-based installs are usually expected to work on these platforms, but may require some extra effort from users. -| Platform | Version | Official Native Packages | Notes | -| -------- | ------- | ------------------------ | ----- | -| Alpine Linux | Edge | No | | -| Clear Linux | Latest | No | | -| Debian | Sid | No | | -| Fedora | Rawhide | No | | -| FreeBSD | 13-STABLE | No | Netdata is included in the FreeBSD Ports Tree, and this is the recommended installation method on FreeBSD | -| FreeBSD | 12-STABLE | No | Netdata is included in the FreeBSD Ports Tree, and this is the recommended installation method on FreeBSD | -| Gentoo | Latest | No | | -| macOS | 12 | No | Currently only works for Intel-based hardware. Requires Homebrew for dependencies | -| macOS | 11 | No | Currently only works for Intel-based hardware. Requires Homebrew for dependencies. | -| macOS | 10.15 | No | Requires Homebrew for dependencies. | -| openSUSE | Tumbleweed | No | | +| Platform | Version | Official Native Packages | Notes | +|--------------|------------|--------------------------|-----------------------------------------------------------------------------------------------------------| +| Alpine Linux | Edge | No | | +| Clear Linux | Latest | No | | +| Debian | Sid | No | | +| Fedora | Rawhide | No | | +| FreeBSD | 13-STABLE | No | Netdata is included in the FreeBSD Ports Tree, and this is the recommended installation method on FreeBSD | +| FreeBSD | 12-STABLE | No | Netdata is included in the FreeBSD Ports Tree, and this is the recommended installation method on FreeBSD | +| Gentoo | Latest | No | | +| macOS | 12 | No | Currently only works for Intel-based hardware. Requires Homebrew for dependencies | +| macOS | 11 | No | Currently only works for Intel-based hardware. Requires Homebrew for dependencies. | +| macOS | 10.15 | No | Requires Homebrew for dependencies. | +| openSUSE | Tumbleweed | No | | ## Third-party supported platforms @@ -142,22 +144,22 @@ Platforms that meet these criteria will be immediately transitioned to the **Pre with no prior warning from Netdata and no deprecation notice, unlike those being dropped for technical reasons, as our end of support should already coincide with the end of the normal support lifecycle for that platform. -On occasion, we may also drop support for a platform due to technical limitations. 
In such cases, this will be +On occasion, we may also drop support for a platform due to technical limitations. In such cases, this will be announced in the release notes of the next stable release with a deprecation notice. The platform will be supported for _that release_, and will be removed from nightlies some time before the next release after that one. This is a list of platforms that we have supported in the recent past but no longer officially support: -| Platform | Version | Notes | -| -------- | ------- | ----- | -| Alpine Linux | 3.13 | EOL as of 2022-11-01 | -| Alpine Linux | 3.12 | EOL as of 2022-05-01 | -| Debian | 9.x | EOL as of 2022-06-30 | -| Fedora | 35 | EOL as of 2022-12-13 | -| Fedora | 34 | EOL as of 2022-06-07 | -| openSUSE | Leap 15.3 | EOL as of 2022-12-01 | -| Ubuntu | 21.10 | EOL as of 2022-07-31 | -| Ubuntu | 21.04 | EOL as of 2022-01-01 | +| Platform | Version | Notes | +|--------------|-----------|----------------------| +| Alpine Linux | 3.13 | EOL as of 2022-11-01 | +| Alpine Linux | 3.12 | EOL as of 2022-05-01 | +| Debian | 9.x | EOL as of 2022-06-30 | +| Fedora | 35 | EOL as of 2022-12-13 | +| Fedora | 34 | EOL as of 2022-06-07 | +| openSUSE | Leap 15.3 | EOL as of 2022-12-01 | +| Ubuntu | 21.10 | EOL as of 2022-07-31 | +| Ubuntu | 21.04 | EOL as of 2022-01-01 | ## Static builds diff --git a/packaging/docker/README.md b/packaging/docker/README.md index 67cde0fc75..aec5723e3f 100644 --- a/packaging/docker/README.md +++ b/packaging/docker/README.md @@ -16,7 +16,7 @@ you get set up quickly, and doesn't install anything permanent on the system, wh See our full list of Docker images at [Docker Hub](https://hub.docker.com/r/netdata/netdata). Starting with v1.30, Netdata collects anonymous usage information by default and sends it to a self-hosted PostHog instance within the Netdata infrastructure. Read -about the information collected, and learn how to-opt, on our [anonymous statistics](/docs/anonymous-statistics.md) +about the information collected, and learn how to-opt, on our [anonymous statistics](https://github.com/netdata/netdata/blob/master/docs/anonymous-statistics.md) page. The usage statistics are _vital_ for us, as we use them to discover bugs and prioritize new features. We thank you for @@ -176,7 +176,7 @@ to restart the container: `docker restart netdata`. ### Host-editable configuration -> **Warning**: [edit-config](/docs/configure/nodes.md#the-netdata-config-directory) script doesn't work when executed on +> **Warning**: [edit-config](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory) script doesn't work when executed on > the host system. If you want to make your container's configuration directory accessible from the host system, you need to use a @@ -322,7 +322,7 @@ your machine from within the container. Please read the following carefully. #### Docker socket proxy (safest option) Deploy a Docker socket proxy that accepts and filters out requests using something like -[HAProxy](/docs/Running-behind-haproxy.md) so that it restricts connections to read-only access to the CONTAINERS +[HAProxy](https://github.com/netdata/netdata/blob/master/docs/Running-behind-haproxy.md) so that it restricts connections to read-only access to the CONTAINERS endpoint. 
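As an illustrative sketch only (the proxy image, port, and `DOCKER_HOST` wiring below are assumptions for the example, not something this page prescribes), such a proxy can sit between Netdata and the socket:

```yaml
version: '3'
services:
  socket-proxy:
    image: tecnativa/docker-socket-proxy   # assumed third-party proxy image
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    environment:
      - CONTAINERS=1                       # permit read-only access to the containers endpoint only

  netdata:
    image: netdata/netdata
    environment:
      - DOCKER_HOST=socket-proxy:2375      # assumed: point Netdata at the proxy instead of the raw socket
```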
The reason it's safer to expose the socket to the proxy is because Netdata has a TCP port exposed outside the Docker @@ -441,13 +441,13 @@ services: ### Pass command line options to Netdata Since we use an [ENTRYPOINT](https://docs.docker.com/engine/reference/builder/#entrypoint) directive, you can provide -[Netdata daemon command line options](/daemon/README.md#command-line-options) such as the IP address Netdata will be +[Netdata daemon command line options](https://github.com/netdata/netdata/blob/master/daemon/README.md#command-line-options) such as the IP address Netdata will be running on, using the [command instruction](https://docs.docker.com/engine/reference/builder/#cmd). ## Install the Agent using Docker Compose with SSL/TLS enabled HTTP Proxy For a permanent installation on a public server, you should [secure the Netdata -instance](/docs/netdata-security.md). This section contains an example of how to install Netdata with an SSL +instance](https://github.com/netdata/netdata/blob/master/docs/netdata-security.md). This section contains an example of how to install Netdata with an SSL reverse proxy and basic authentication. You can use the following `docker-compose.yml` and Caddyfile files to run Netdata with Docker. Replace the domains and diff --git a/packaging/installer/README.md b/packaging/installer/README.md index 75a0114e52..90d3b8de2f 100644 --- a/packaging/installer/README.md +++ b/packaging/installer/README.md @@ -24,7 +24,7 @@ packages. We recommend you install Netdata using one of the methods listed below checksum-verified packages. Netdata collects anonymous usage information by default and sends it to our self hosted [PostHog](https://github.com/PostHog/posthog) installation. PostHog is an open source product analytics platform, you can read -about the information collected, and learn how to-opt, on our [anonymous statistics](/docs/anonymous-statistics.md) +about the information collected, and learn how to-opt, on our [anonymous statistics](https://github.com/netdata/netdata/blob/master/docs/anonymous-statistics.md) page. The usage statistics are _vital_ for us, as we use them to discover bugs and prioritize new features. We thank you for @@ -49,17 +49,17 @@ This script will preferentially use native DEB/RPM packages if we provide them f To see more information about this installation script, including how to disable automatic updates, get nightly vs. stable releases, or disable anonymous statistics, see the [`kickstart.sh` method -page](/packaging/installer/methods/kickstart.md). +page](https://github.com/netdata/netdata/blob/master/packaging/installer/methods/kickstart.md). Scroll down for details about [automatic updates](#automatic-updates) or [nightly vs. stable releases](#nightly-vs-stable-releases). ### Post-installation -When you're finished with installation, check out our [single-node](/docs/quickstart/single-node.md) or -[infrastructure](/docs/quickstart/infrastructure.md) monitoring quickstart guides based on your use case. +When you're finished with installation, check out our [single-node](https://github.com/netdata/netdata/blob/master/docs/quickstart/single-node.md) or +[infrastructure](https://github.com/netdata/netdata/blob/master/docs/quickstart/infrastructure.md) monitoring quickstart guides based on your use case. -Or, skip straight to [configuring the Netdata Agent](/docs/configure/nodes.md). +Or, skip straight to [configuring the Netdata Agent](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md). 
Read through Netdata's [documentation](https://learn.netdata.cloud/docs), which is structured based on actions and solutions, to enable features like health monitoring, alarm notifications, long-term metrics storage, exporting to @@ -68,7 +68,7 @@ external databases, and more. ## Have a different operating system, or want to try another method? Netdata works on many different platforms. To see all supported platforms, check out our [platform support -policy](/packaging/PLATFORM_SUPPORT.md). +policy](https://github.com/netdata/netdata/blob/master/packaging/PLATFORM_SUPPORT.md). Below, you can find a few additional installation methods, followed by separate instructions for a variety of unique operating systems. @@ -123,7 +123,7 @@ wget -O /tmp/netdata-kickstart.sh https://my-netdata.io/kickstart.sh && sh /tmp/ ``` With automatic updates disabled, you can choose exactly when and how you [update -Netdata](/packaging/installer/UPDATE.md). +Netdata](https://github.com/netdata/netdata/blob/master/packaging/installer/UPDATE.md). ### Network usage of Netdata’s automatic updater @@ -182,8 +182,8 @@ man-in-the-middle attacks. ### CentOS 6 and CentOS 8 To install the Agent on certain CentOS and RHEL systems, you must enable non-default repositories, such as EPEL or -PowerTools, to gather hard dependencies. See the [CentOS 6](/packaging/installer/methods/manual.md#centos--rhel-6x) and -[CentOS 8](/packaging/installer/methods/manual.md#centos--rhel-8x) sections for more information. +PowerTools, to gather hard dependencies. See the [CentOS 6](https://github.com/netdata/netdata/blob/master/packaging/installer/methods/manual.md#centos--rhel-6x) and +[CentOS 8](https://github.com/netdata/netdata/blob/master/packaging/installer/methods/manual.md#centos--rhel-8x) sections for more information. ### Access to file is not permitted @@ -217,6 +217,6 @@ both. Our current build process has some issues when using certain configurations of the `clang` C compiler on Linux. See [the section on `nonrepresentable section on output` -errors](/packaging/installer/methods/manual.md#nonrepresentable-section-on-output-errors) for a workaround. +errors](https://github.com/netdata/netdata/blob/master/packaging/installer/methods/manual.md#nonrepresentable-section-on-output-errors) for a workaround. diff --git a/packaging/installer/REINSTALL.md b/packaging/installer/REINSTALL.md index 6446da5221..c24fdee8c2 100644 --- a/packaging/installer/REINSTALL.md +++ b/packaging/installer/REINSTALL.md @@ -18,11 +18,11 @@ Netdata Agent on your node. ### Reinstalling with the same install type Run the one-line installer script with the `--reinstall` parameter to reinstall the Netdata Agent. This will preserve -any [user configuration](/docs/configure/nodes.md) in `netdata.conf` or other files, and will keep the same install +any [user configuration](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md) in `netdata.conf` or other files, and will keep the same install type that was used for the original install. If you used any [optional -parameters](/packaging/installer/methods/kickstart.md#optional-parameters-to-alter-your-installation) during initial +parameters](https://github.com/netdata/netdata/blob/master/packaging/installer/methods/kickstart.md#optional-parameters-to-alter-your-installation) during initial installation, you need to pass them to the script again during reinstallation. If you cannot remember which options you used, read the contents of the `.environment` file and look for a `REINSTALL_OPTIONS` line. 
This line contains a list of optional parameters. @@ -39,7 +39,7 @@ getting a badly broken installation working again. Unlike the regular `--reinsta different install type than the original install used. If you used any [optional -parameters](/packaging/installer/methods/kickstart.md#optional-parameters-to-alter-your-installation) during initial +parameters](https://github.com/netdata/netdata/blob/master/packaging/installer/methods/kickstart.md#optional-parameters-to-alter-your-installation) during initial installation, you need to pass them to the script again during reinstallation. If you cannot remember which options you used, read the contents of the `.environment` file and look for a `REINSTALL_OPTIONS` line. This line contains a list of optional parameters. @@ -69,8 +69,8 @@ When copying these directories back after the reinstall, you may need to update ## Troubleshooting If you still experience problems with your Netdata Agent installation after following one of these processes, the next -best route is to [uninstall](/packaging/installer/UNINSTALL.md) and then try a fresh installation using the [one-line -installer](/packaging/installer/methods/kickstart.md). +best route is to [uninstall](https://github.com/netdata/netdata/blob/master/packaging/installer/UNINSTALL.md) and then try a fresh installation using the [one-line +installer](https://github.com/netdata/netdata/blob/master/packaging/installer/methods/kickstart.md). You can also post to our [community forums](https://community.netdata.cloud/c/support/13) or create a new [bug report](https://github.com/netdata/netdata/issues/new?assignees=&labels=bug%2Cneeds+triage&template=BUG_REPORT.yml). diff --git a/packaging/installer/UNINSTALL.md b/packaging/installer/UNINSTALL.md index 669af609fb..2ff22f5c64 100644 --- a/packaging/installer/UNINSTALL.md +++ b/packaging/installer/UNINSTALL.md @@ -12,7 +12,7 @@ learn_rel_path: "Installation" > ⚠️ If you're having trouble updating Netdata, moving from one installation method to another, or generally having > issues with your Netdata Agent installation, consider our [**reinstall Netdata** -> doc](/packaging/installer/REINSTALL.md) instead of removing the Netdata Agent entirely. +> doc](https://github.com/netdata/netdata/blob/master/packaging/installer/REINSTALL.md) instead of removing the Netdata Agent entirely. The recommended method to uninstall Netdata on a system is to use our kickstart installer script with the `--uninstall` option like so: diff --git a/packaging/installer/UPDATE.md b/packaging/installer/UPDATE.md index e8869c6752..9d4289f85c 100644 --- a/packaging/installer/UPDATE.md +++ b/packaging/installer/UPDATE.md @@ -15,7 +15,7 @@ you installed. If you opted out of automatic updates, you need to update your Ne or stable version. You can also [enable or disable automatic updates on an existing install](#control-automatic-updates). > 💡 Looking to reinstall the Netdata Agent to enable a feature, update an Agent that cannot update automatically, or -> troubleshoot an error during the installation process? See our [reinstallation doc](/packaging/installer/REINSTALL.md) +> troubleshoot an error during the installation process? See our [reinstallation doc](https://github.com/netdata/netdata/blob/master/packaging/installer/REINSTALL.md) > for reinstallation steps. 
Before you update the Netdata Agent, check to see if your Netdata Agent is already up-to-date by clicking on the update @@ -84,8 +84,8 @@ On such installs, you can update Netdata using your distribution package manager ### If the kickstart script does not work If the above command fails, you can [reinstall -Netdata](/packaging/installer/REINSTALL.md#one-line-installer-script-kickstartsh) to get the latest version. This -also preserves your [configuration](/docs/configure/nodes.md) in `netdata.conf` or other files just like updating +Netdata](https://github.com/netdata/netdata/blob/master/packaging/installer/REINSTALL.md#one-line-installer-script-kickstartsh) to get the latest version. This +also preserves your [configuration](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md) in `netdata.conf` or other files just like updating normally would, though you will need to specify any installation options you used originally again. ## Docker @@ -109,7 +109,7 @@ docker rm netdata ``` You can now re-create your Netdata container using the `docker` command or a `docker-compose.yml` file. See our [Docker -installation instructions](/packaging/docker/README.md#create-a-new-netdata-agent-container) for details. +installation instructions](https://github.com/netdata/netdata/blob/master/packaging/docker/README.md#create-a-new-netdata-agent-container) for details. ## macOS @@ -128,7 +128,7 @@ instructions](#updates-for-most-systems) to update Netdata. ## Manual installation from Git -If you installed [Netdata manually from Git](/packaging/installer/methods/manual.md), you can run that installer again +If you installed [Netdata manually from Git](https://github.com/netdata/netdata/blob/master/packaging/installer/methods/manual.md), you can run that installer again to update your agent. First, run our automatic requirements installer, which works on many Linux distributions, to ensure your system has the dependencies necessary for new features. diff --git a/packaging/installer/methods/cloud-providers.md b/packaging/installer/methods/cloud-providers.md index bc5c9aae25..6b8fa6de1f 100644 --- a/packaging/installer/methods/cloud-providers.md +++ b/packaging/installer/methods/cloud-providers.md @@ -8,7 +8,7 @@ custom_edit_url: https://github.com/netdata/netdata/edit/master/packaging/instal Netdata is fully compatible with popular cloud providers like Google Cloud Platform (GCP), Amazon Web Services (AWS), Azure, and others. You can install Netdata on cloud instances to monitor the apps/services running there, or use -multiple instances in a [parent-child streaming](/streaming/README.md) configuration. +multiple instances in a [parent-child streaming](https://github.com/netdata/netdata/blob/master/streaming/README.md) configuration. In some cases, using Netdata on these cloud providers requires unique installation or configuration steps. This page aims to document some of those steps for popular cloud providers. @@ -53,11 +53,11 @@ command from a remote system, and it fails, it's likely that a firewall is block Another option is to put Netdata behind web server, which will proxy requests through standard HTTP/HTTPS ports (80/443), which are likely already open on your instance. 
We have a number of guides available: -- [Apache](/docs/Running-behind-apache.md) -- [Nginx](/docs/Running-behind-nginx.md) -- [Caddy](/docs/Running-behind-caddy.md) -- [HAProxy](/docs/Running-behind-haproxy.md) -- [lighttpd](/docs/Running-behind-lighttpd.md) +- [Apache](https://github.com/netdata/netdata/blob/master/docs/Running-behind-apache.md) +- [Nginx](https://github.com/netdata/netdata/blob/master/docs/Running-behind-nginx.md) +- [Caddy](https://github.com/netdata/netdata/blob/master/docs/Running-behind-caddy.md) +- [HAProxy](https://github.com/netdata/netdata/blob/master/docs/Running-behind-haproxy.md) +- [lighttpd](https://github.com/netdata/netdata/blob/master/docs/Running-behind-lighttpd.md) The next few sections outline how to add firewall rules to GCP, AWS, and Azure instances. diff --git a/packaging/installer/methods/freebsd.md b/packaging/installer/methods/freebsd.md index d1e3af2699..ea7099b367 100644 --- a/packaging/installer/methods/freebsd.md +++ b/packaging/installer/methods/freebsd.md @@ -66,7 +66,7 @@ You can now access the Netdata dashboard by navigating to `http://NODE:19999`, r Starting with v1.30, Netdata collects anonymous usage information by default and sends it to a self hosted PostHog instance within the Netdata infrastructure. To read more about the information collected and how to opt-out, check the [anonymous statistics -page](/docs/anonymous-statistics.md). +page](https://github.com/netdata/netdata/blob/master/docs/anonymous-statistics.md). ## Updating the Agent on FreeBSD If you have not passed the `--auto-update` or `-u` parameter for the installer to enable automatic updating, repeat the last step to update Netdata whenever a new version becomes available. diff --git a/packaging/installer/methods/kickstart.md b/packaging/installer/methods/kickstart.md index cb2e9897c9..5eaa820201 100644 --- a/packaging/installer/methods/kickstart.md +++ b/packaging/installer/methods/kickstart.md @@ -18,8 +18,8 @@ This page covers detailed instructions on using and configuring the automatic on The kickstart script works on all Linux distributions and macOS environments. By default, automatic nightly updates are enabled. If you are installing on macOS, make sure to check the [install documentation for macOS](macos.md) before continuing. -> If you are unsure whether you want nightly or stable releases, read the [installation guide](/packaging/installer/README.md#nightly-vs-stable-releases). -> If you want to turn off [automatic updates](/packaging/installer/README.md#automatic-updates), use the `--no-updates` option. You can find more installation options below. +> If you are unsure whether you want nightly or stable releases, read the [installation guide](https://github.com/netdata/netdata/blob/master/packaging/installer/README.md#nightly-vs-stable-releases). +> If you want to turn off [automatic updates](https://github.com/netdata/netdata/blob/master/packaging/installer/README.md#automatic-updates), use the `--no-updates` option. You can find more installation options below. To install Netdata, run the following as your normal user: @@ -79,7 +79,7 @@ The `kickstart.sh` script accepts a number of optional parameters to control how - `--reinstall-clean`: Performs an uninstall of Netdata and clean installation. - `--local-build-options`: Specify additional options to pass to the installer code when building locally. Only valid if `--build-only` is also specified. - `--static-install-options`: Specify additional options to pass to the static installer code. 
Only valid if --static-only is also specified. -- `--prepare-offline-install-source`: Instead of insallling the agent, prepare a directory that can be used to install on another system without needing to download anything. See our [offline installation documentation](/packaging/installer/methods/offline.md) for more info. +- `--prepare-offline-install-source`: Instead of insallling the agent, prepare a directory that can be used to install on another system without needing to download anything. See our [offline installation documentation](https://github.com/netdata/netdata/blob/master/packaging/installer/methods/offline.md) for more info. Additionally, the following environment variables may be used to further customize how the script runs (most users should not need to use special values for any of these): @@ -94,9 +94,9 @@ should not need to use special values for any of these): ### Connect node to Netdata Cloud during installation -The `kickstart.sh` script accepts additional parameters to automatically [connect](/claim/README.md) your node to Netdata Cloud immediately after installation. +The `kickstart.sh` script accepts additional parameters to automatically [connect](https://github.com/netdata/netdata/blob/master/claim/README.md) your node to Netdata Cloud immediately after installation. -> Note: You either need to run the command with root privileges or run it with the user that is running the agent. More details: [Connect an agent without root privileges](/claim/README.md#connect-an-agent-without-root-privileges) section. +> Note: You either need to run the command with root privileges or run it with the user that is running the agent. More details: [Connect an agent without root privileges](https://github.com/netdata/netdata/blob/master/claim/README.md#connect-an-agent-without-root-privileges) section. To automatically claim nodes after installation: @@ -109,7 +109,7 @@ To automatically claim nodes after installation: after the install. - `--claim-rooms`: Specify a comma-separated list of tokens for each War Room this node should appear in. - `--claim-proxy`: Specify a proxy to use when connecting to the cloud in the form of `http://[user:pass@]host:ip` for an HTTP(S) proxy. - See [connecting through a proxy](/claim/README.md#connect-through-a-proxy) for details. + See [connecting through a proxy](https://github.com/netdata/netdata/blob/master/claim/README.md#connect-through-a-proxy) for details. - `--claim-url`: Specify a URL to use when connecting to the cloud. Defaults to `https://api.netdata.cloud`. For example: @@ -163,10 +163,10 @@ If the script is valid, this command will return `OK, VALID`. ## What's next? -When you're finished with installation, check out our [single-node](/docs/quickstart/single-node.md) or -[infrastructure](/docs/quickstart/infrastructure.md) monitoring quickstart guides based on your use case. +When you're finished with installation, check out our [single-node](https://github.com/netdata/netdata/blob/master/docs/quickstart/single-node.md) or +[infrastructure](https://github.com/netdata/netdata/blob/master/docs/quickstart/infrastructure.md) monitoring quickstart guides based on your use case. -Or, skip straight to [configuring the Netdata Agent](/docs/configure/nodes.md). +Or, skip straight to [configuring the Netdata Agent](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md). 
Read through Netdata's [documentation](https://learn.netdata.cloud/docs), which is structured based on actions and solutions, to enable features like health monitoring, alarm notifications, long-term metrics storage, exporting to diff --git a/packaging/installer/methods/kubernetes.md b/packaging/installer/methods/kubernetes.md index 9a137e0e23..142c098b44 100644 --- a/packaging/installer/methods/kubernetes.md +++ b/packaging/installer/methods/kubernetes.md @@ -45,8 +45,8 @@ dashboards available in Netdata Cloud. ## Connect your Kubernetes cluster to Netdata Cloud -To start [Kubernetes monitoring](https://learn.netdata.cloud/docs/cloud/visualize/kubernetes/), you must first -[connect](/claim/README.md) your Kubernetes cluster to [Netdata Cloud](https://app.netdata.cloud). The connection process securely +To start [Kubernetes monitoring](https://github.com/netdata/netdata/blob/master/docs/cloud/visualize/kubernetes.md), you must first +[connect](https://github.com/netdata/netdata/blob/master/claim/README.md) your Kubernetes cluster to [Netdata Cloud](https://app.netdata.cloud). The connection process securely connects your Kubernetes cluster to stream metrics data to Netdata Cloud, enabling Kubernetes-specific visualizations like the health map and time-series composite charts. @@ -184,17 +184,17 @@ helm upgrade netdata netdata/netdata ## What's next? -[Start Kubernetes monitoring](https://learn.netdata.cloud/docs/cloud/visualize/kubernetes/) in Netdata Cloud, which +[Start Kubernetes monitoring](https://github.com/netdata/netdata/blob/master/docs/cloud/visualize/kubernetes.md) in Netdata Cloud, which comes with meaningful visualizations out of the box. Read our guide, [_Kubernetes monitoring with Netdata: Overview and -visualizations_](/docs/guides/monitor/kubernetes-k8s-netdata.md), for a complete walkthrough of Netdata's Kubernetes +visualizations_](https://github.com/netdata/netdata/blob/master/docs/guides/monitor/kubernetes-k8s-netdata.md), for a complete walkthrough of Netdata's Kubernetes monitoring capabilities, including a health map of every container in your infrastructure, aggregated resource utilization metrics, and application metrics. ### Related reference documentation -- [Netdata Cloud · Kubernetes monitoring](https://learn.netdata.cloud/docs/cloud/visualize/kubernetes/) +- [Netdata Cloud · Kubernetes monitoring](https://github.com/netdata/netdata/blob/master/docs/cloud/visualize/kubernetes.md) - [Netdata Helm chart](https://github.com/netdata/helmchart) - [Netdata service discovery](https://github.com/netdata/agent-service-discovery/) diff --git a/packaging/installer/methods/macos.md b/packaging/installer/methods/macos.md index fb18b8c0ce..f80f4c1376 100644 --- a/packaging/installer/methods/macos.md +++ b/packaging/installer/methods/macos.md @@ -10,8 +10,8 @@ learn_rel_path: "Installation" # Install Netdata on macOS Netdata works on macOS, albeit with some limitations. -The number of charts displaying system metrics is limited, but you can use any of Netdata's [external plugins](/collectors/plugins.d/README.md) to monitor any services you might have installed on your macOS system. -You could also use a macOS system as the parent node in a [streaming configuration](/streaming/README.md). +The number of charts displaying system metrics is limited, but you can use any of Netdata's [external plugins](https://github.com/netdata/netdata/blob/master/collectors/plugins.d/README.md) to monitor any services you might have installed on your macOS system. 
+You could also use a macOS system as the parent node in a [streaming configuration](https://github.com/netdata/netdata/blob/master/streaming/README.md). You can install Netdata in one of the three following ways: @@ -22,12 +22,12 @@ You can install Netdata in one of the three following ways: Each of these installation option requires [Homebrew](https://brew.sh/) for handling dependencies. > The Netdata Homebrew package is community-created and -maintained. -> Community-maintained packages _may_ receive support from Netdata, but are only a best-effort affair. Learn more about [Netdata's platform support policy](/packaging/PLATFORM_SUPPORT.md). +> Community-maintained packages _may_ receive support from Netdata, but are only a best-effort affair. Learn more about [Netdata's platform support policy](https://github.com/netdata/netdata/blob/master/packaging/PLATFORM_SUPPORT.md). ## Install Netdata with our automatic one-line installation script **Local Netdata Agent installation** -To install Netdata using our automatic [kickstart](/packaging/installer/README.md#automatic-one-line-installation-script) open a new terminal and run: +To install Netdata using our automatic [kickstart](https://github.com/netdata/netdata/blob/master/packaging/installer/README.md#automatic-one-line-installation-script) open a new terminal and run: ```bash curl https://my-netdata.io/kickstart.sh > /tmp/netdata-kickstart.sh && sh /tmp/netdata-kickstart.sh @@ -38,16 +38,16 @@ The Netdata Agent is installed under `/usr/local/netdata`. Dependencies are hand -The `kickstart.sh` script accepts additional parameters to automatically [connect](/claim/README.md) your node to Netdata +The `kickstart.sh` script accepts additional parameters to automatically [connect](https://github.com/netdata/netdata/blob/master/claim/README.md) your node to Netdata Cloud immediately after installation. Find the `token` and `rooms` strings by [signing in to Netdata Cloud](https://app.netdata.cloud/sign-in?cloudRoute=/spaces), then clicking on **Connect Nodes** in the [Spaces management -area](https://learn.netdata.cloud/docs/cloud/spaces#manage-spaces). +area](https://github.com/netdata/netdata/blob/master/docs/cloud/cloud.mdx#manage-spaces). - `--claim-token`: Specify a unique claiming token associated with your Space in Netdata Cloud to be used to connect to the node after the install. - `--claim-rooms`: Specify a comma-separated list of tokens for each War Room this node should appear in. - `--claim-proxy`: Specify a proxy to use when connecting to the cloud in the form of `http://[user:pass@]host:ip` for an HTTP(S) proxy. - See [connecting through a proxy](/claim/README.md#connect-through-a-proxy) for details. + See [connecting through a proxy](https://github.com/netdata/netdata/blob/master/claim/README.md#connect-through-a-proxy) for details. - `--claim-url`: Specify a URL to use when connecting to the cloud. Defaults to `https://api.netdata.cloud`. For example: @@ -56,7 +56,7 @@ curl https://my-netdata.io/kickstart.sh > /tmp/netdata-kickstart.sh && sh /tmp/n ``` The Netdata Agent is installed under `/usr/local/netdata` on your machine. Your machine will also show up as a node in your Netdata Cloud. -If you experience issues while claiming your node, follow the steps in our [Troubleshooting](/claim/README.md#troubleshooting) documentation. +If you experience issues while claiming your node, follow the steps in our [Troubleshooting](https://github.com/netdata/netdata/blob/master/claim/README.md#troubleshooting) documentation. 
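Putting those claiming parameters together, a complete invocation might look like the following sketch, where `YOUR_CLAIM_TOKEN` and `YOUR_ROOM_ID` are placeholders for the values copied from Netdata Cloud:

```bash
# Hypothetical example: substitute the token and room ID shown in your Space's "Connect Nodes" dialog
curl https://my-netdata.io/kickstart.sh > /tmp/netdata-kickstart.sh && \
  sh /tmp/netdata-kickstart.sh --claim-token YOUR_CLAIM_TOKEN --claim-rooms YOUR_ROOM_ID --claim-url https://api.netdata.cloud
```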
## Install Netdata via Homebrew To install Netdata and all its dependencies, run Homebrew using the following command: @@ -81,7 +81,7 @@ We don't recommend installing Netdata from source on macOS, as it can be difficu ``` 2. Click **Install** on the Software Update popup window that appears. -3. Use the same terminal session to install some of Netdata's prerequisites using Homebrew. If you don't want to use [Netdata Cloud](https://learn.netdata.cloud/docs/cloud/), you can omit `cmake`. +3. Use the same terminal session to install some of Netdata's prerequisites using Homebrew. If you don't want to use [Netdata Cloud](https://github.com/netdata/netdata/blob/master/docs/cloud/cloud.mdx), you can omit `cmake`. ```bash brew install ossp-uuid autoconf automake pkg-config libuv lz4 json-c openssl libtool cmake @@ -106,10 +106,10 @@ We don't recommend installing Netdata from source on macOS, as it can be difficu ## What's next? -When you're finished with installation, check out our [single-node](/docs/quickstart/single-node.md) or -[infrastructure](/docs/quickstart/infrastructure.md) monitoring quickstart guides based on your use case. +When you're finished with installation, check out our [single-node](https://github.com/netdata/netdata/blob/master/docs/quickstart/single-node.md) or +[infrastructure](https://github.com/netdata/netdata/blob/master/docs/quickstart/infrastructure.md) monitoring quickstart guides based on your use case. -Or, skip straight to [configuring the Netdata Agent](/docs/configure/nodes.md). +Or, skip straight to [configuring the Netdata Agent](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md). diff --git a/packaging/installer/methods/manual.md b/packaging/installer/methods/manual.md index 2780fb1820..bce9eba7b5 100644 --- a/packaging/installer/methods/manual.md +++ b/packaging/installer/methods/manual.md @@ -202,7 +202,7 @@ cd netdata - `--dont-start-it`: Prevent the installer from starting Netdata automatically. - `--stable-channel`: Automatically update only on the release of new major versions. - `--nightly-channel`: Automatically update on every new nightly build. -- `--disable-telemetry`: Opt-out of [anonymous statistics](/docs/anonymous-statistics.md) we use to make +- `--disable-telemetry`: Opt-out of [anonymous statistics](https://github.com/netdata/netdata/blob/master/docs/anonymous-statistics.md) we use to make Netdata better. - `--no-updates`: Prevent automatic updates of any kind. - `--reinstall`: If an existing install is detected, reinstall instead of trying to update it. Note that this @@ -214,10 +214,10 @@ cd netdata ### Connect node to Netdata Cloud during installation -Unlike the [`kickstart.sh`](/packaging/installer/methods/kickstart.md), the `netdata-installer.sh` script does -not allow you to automatically [connect](/claim/README.md) your node to Netdata Cloud immediately after installation. +Unlike the [`kickstart.sh`](https://github.com/netdata/netdata/blob/master/packaging/installer/methods/kickstart.md), the `netdata-installer.sh` script does +not allow you to automatically [connect](https://github.com/netdata/netdata/blob/master/claim/README.md) your node to Netdata Cloud immediately after installation. -See the [connect to cloud](/claim/README.md) doc for details on connecting a node with a manual installation of Netdata. +See the [connect to cloud](https://github.com/netdata/netdata/blob/master/claim/README.md) doc for details on connecting a node with a manual installation of Netdata. 
### 'nonrepresentable section on output' errors @@ -229,10 +229,10 @@ In most cases, you can do this by running `CC=gcc ./netdata-installer.sh`. ## What's next? -When you're finished with installation, check out our [single-node](/docs/quickstart/single-node.md) or -[infrastructure](/docs/quickstart/infrastructure.md) monitoring quickstart guides based on your use case. +When you're finished with installation, check out our [single-node](https://github.com/netdata/netdata/blob/master/docs/quickstart/single-node.md) or +[infrastructure](https://github.com/netdata/netdata/blob/master/docs/quickstart/infrastructure.md) monitoring quickstart guides based on your use case. -Or, skip straight to [configuring the Netdata Agent](/docs/configure/nodes.md). +Or, skip straight to [configuring the Netdata Agent](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md). Read through Netdata's [documentation](https://learn.netdata.cloud/docs), which is structured based on actions and solutions, to enable features like health monitoring, alarm notifications, long-term metrics storage, exporting to diff --git a/packaging/installer/methods/offline.md b/packaging/installer/methods/offline.md index caf82ba480..e49f1d2e57 100644 --- a/packaging/installer/methods/offline.md +++ b/packaging/installer/methods/offline.md @@ -54,16 +54,16 @@ target system. This can be done in any manner you like, as long as filenames are After copying the files, simply run the `install.sh` script located in the offline install source directory. It accepts all the [same options as the kickstart -script](/packaging/installer/methods/kickstart.md#optional-parameters-to-alter-your-installation) for further +script](https://github.com/netdata/netdata/blob/master/packaging/installer/methods/kickstart.md#optional-parameters-to-alter-your-installation) for further customization of the installation, though it will default to not enabling automatic updates (as they are not supported on offline installs). ## What's next? -When you're finished with installation, check out our [single-node](/docs/quickstart/single-node.md) or -[infrastructure](/docs/quickstart/infrastructure.md) monitoring quickstart guides based on your use case. +When you're finished with installation, check out our [single-node](https://github.com/netdata/netdata/blob/master/docs/quickstart/single-node.md) or +[infrastructure](https://github.com/netdata/netdata/blob/master/docs/quickstart/infrastructure.md) monitoring quickstart guides based on your use case. -Or, skip straight to [configuring the Netdata Agent](/docs/configure/nodes.md). +Or, skip straight to [configuring the Netdata Agent](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md). Read through Netdata's [documentation](https://learn.netdata.cloud/docs), which is structured based on actions and solutions, to enable features like health monitoring, alarm notifications, long-term metrics storage, exporting to diff --git a/packaging/installer/methods/packages.md b/packaging/installer/methods/packages.md index 5144b71123..1355128087 100644 --- a/packaging/installer/methods/packages.md +++ b/packaging/installer/methods/packages.md @@ -11,9 +11,10 @@ learn_rel_path: "Installation" # Installing Netdata using native DEB or RPM packages. For most common Linux distributions that use either DEB or RPM packages, Netdata provides pre-built native packages -for current releases in-line with our [official platform support policy](/packaging/PLATFORM_SUPPORT.md). 
These -packages will be used by default when attempting to install on a supported platform using our [kickstart.sh -installer script](/packaging/installer/methods/kickstart.md). +for current releases in-line with +our [official platform support policy](https://github.com/netdata/netdata/blob/master/packaging/PLATFORM_SUPPORT.md). +These packages will be used by default when attempting to install on a supported platform using our +[kickstart.sh installer script](https://github.com/netdata/netdata/blob/master/packaging/installer/methods/kickstart.md). When using the kickstart script, you can force usage of native DEB or RPM packages by passing the option `--native-only` when invoking the script. This will cause it to only attempt to use native packages for the install, diff --git a/packaging/installer/methods/source.md b/packaging/installer/methods/source.md index 4ae7ffb28b..ecf35382af 100644 --- a/packaging/installer/methods/source.md +++ b/packaging/installer/methods/source.md @@ -13,7 +13,7 @@ learn_rel_path: "Installation" These instructions are for advanced users and distribution package maintainers. Unless this describes you, you almost certainly want to follow [our guide for manually installing Netdata from a git -checkout](/packaging/installer/methods/manual.md) instead. +checkout](https://github.com/netdata/netdata/blob/master/packaging/installer/methods/manual.md) instead. ## Required dependencies diff --git a/packaging/installer/methods/synology.md b/packaging/installer/methods/synology.md index 30ec3035c1..e3602df5e9 100644 --- a/packaging/installer/methods/synology.md +++ b/packaging/installer/methods/synology.md @@ -26,7 +26,7 @@ installations run it as the `netdata` user, you might wish to do the same. This 2. Create a user `netdata` via the Synology user interface. Give it no access to anything and a random password. Assign the user to the `netdata` group. Netdata will chuid to this user when running. 3. Change ownership of the following directories, as defined in [Netdata - Security](/docs/netdata-security.md#security-design): + Security](https://github.com/netdata/netdata/blob/master/docs/netdata-security.md#security-design): ```sh chown -R root:netdata /opt/netdata/usr/share/netdata diff --git a/registry/README.md b/registry/README.md index e01af49530..827eea1399 100644 --- a/registry/README.md +++ b/registry/README.md @@ -71,8 +71,8 @@ in the Netdata registry regardless of whether you sign in or not. ## Who talks to the registry? -Your web browser **only**! If sending this information is against your policies, you can [run your own -registry](#run-your-own-registry) +Your web browser **only**! If sending this information is against your policies, you +can [run your own registry](#run-your-own-registry) Your Netdata servers do not talk to the registry. This is a UML diagram of its operation: @@ -137,7 +137,7 @@ Netdata v1.9+ support limiting access to the registry from given IPs, like this: allow from = * ``` -`allow from` settings are [Netdata simple patterns](/libnetdata/simple_pattern/README.md): string matches that use `*` +`allow from` settings are [Netdata simple patterns](https://github.com/netdata/netdata/blob/master/libnetdata/simple_pattern/README.md): string matches that use `*` as wildcard (any number of times) and a `!` prefix for a negative match. So: `allow from = !10.1.2.3 10.*` will allow all IPs in `10.*` except `10.1.2.3`. The order is important: left to right, the first positive or negative match is used. 
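Applied to a registry you run yourself, such a pattern goes under the `[registry]` section of `netdata.conf`. A minimal sketch, reusing the example addresses above:

```conf
[registry]
    # allow browsers on the 10.* network, except 10.1.2.3;
    # matches are evaluated left to right and the first hit wins
    allow from = !10.1.2.3 10.*
```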
@@ -184,7 +184,7 @@ Both files are machine readable text files. Beginning with `v1.30.0`, when the Netdata Agent's web server processes a request, it delivers the `SameSite=none` and `Secure` cookies. If you have problems accessing the local Agent dashboard or Netdata Cloud, disable these -cookies by [editing `netdata.conf`](/docs/configure/nodes.md#use-edit-config-to-edit-configuration-files): +cookies by [editing `netdata.conf`](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#use-edit-config-to-edit-configuration-files): ```conf [registry] diff --git a/streaming/README.md b/streaming/README.md index 58eb2cc1b2..37d2c261e4 100644 --- a/streaming/README.md +++ b/streaming/README.md @@ -7,7 +7,7 @@ custom_edit_url: https://github.com/netdata/netdata/edit/master/streaming/README Each Netdata node is able to replicate/mirror its database to another Netdata node, by streaming the collected metrics in real-time. This is quite different to [data archiving to third party time-series -databases](/exporting/README.md). +databases](https://github.com/netdata/netdata/blob/master/exporting/README.md). The nodes that send metrics are called **child** nodes, and the nodes that receive metrics are called **parent** nodes. There are also **proxy** nodes, which collect metrics from a child and send them to a parent. @@ -38,7 +38,7 @@ In a headless setup, the child acts as a plain data collector. It spawns all ext local database and accepting dashboard requests, it streams all metrics to the parent. This setup works great to reduce the memory footprint. Depending on the enabled plugins, memory usage is between 6 MiB and 40 MiB. To reduce the memory usage as much as -possible, refer to the [performance optimization guide](/docs/guides/configure/performance.md). +possible, refer to the [performance optimization guide](https://github.com/netdata/netdata/blob/master/docs/guides/configure/performance.md). ### Database Replication @@ -107,7 +107,7 @@ This also disables the registry (there cannot be a registry without an API). requests from its child nodes. 0 sets no limit, 1 means maximum once every second. If this is set, you may see error log entries "... too busy to accept new streaming request. Will be allowed in X secs". -You can [use](/exporting/README.md#configuration) the exporting engine to configure data archiving to an external database (it archives all databases maintained on +You can [use](https://github.com/netdata/netdata/blob/master/exporting/README.md#configuration) the exporting engine to configure data archiving to an external database (it archives all databases maintained on this host). ### Streaming configuration @@ -198,7 +198,7 @@ You can also use `default memory mode = dbengine` for an API key or `memory mode ##### Allow from -`allow from` settings are [Netdata simple patterns](/libnetdata/simple_pattern/README.md): string matches +`allow from` settings are [Netdata simple patterns](https://github.com/netdata/netdata/blob/master/libnetdata/simple_pattern/README.md): string matches that use `*` as wildcard (any number of times) and a `!` prefix for a negative match. So: `allow from = !10.1.2.3 10.*` will allow all IPs in `10.*` except `10.1.2.3`. The order is important: left to right, the first positive or negative match is used. @@ -233,7 +233,7 @@ For Netdata v1.9+, streaming can also be monitored via `access.log`. ### Securing streaming communications Netdata does not activate TLS encryption by default. To encrypt streaming connections: -1. 
On the parent node (receiving node), [enable TLS support](/web/server/README.md#enabling-tls-support). +1. On the parent node (receiving node), [enable TLS support](https://github.com/netdata/netdata/blob/master/web/server/README.md#enabling-tls-support). 2. On the child's `stream.conf`, configure the destination as follows: ``` @@ -602,7 +602,7 @@ this writing, Netdata supports: - json document DBs - all the compatibles to the above (e.g. kairosdb, influxdb, etc) -Check the Netdata [exporting documentation](/docs/export/external-databases.md) for configuring this. +Check the Netdata [exporting documentation](https://github.com/netdata/netdata/blob/master/docs/export/external-databases.md) for configuring this. This is how such a solution will work: @@ -696,7 +696,7 @@ ERROR : STREAM_SENDER[CHILD HOSTNAME] : STREAM child HOSTNAME [send to PARENT HO Chart data needs to be consistent between child and parent nodes. If there are differences between chart data on a parent and a child, such as gaps in metrics collection, it most often means your child's `memory mode` does not match the parent's. To learn more about the different ways Netdata can store metrics, and thus keep chart -data consistent, read our [memory mode documentation](/database/README.md). +data consistent, read our [memory mode documentation](https://github.com/netdata/netdata/blob/master/database/README.md). ### Forbidding access diff --git a/tests/health_mgmtapi/README.md b/tests/health_mgmtapi/README.md index e19b612a52..aa51c0d647 100644 --- a/tests/health_mgmtapi/README.md +++ b/tests/health_mgmtapi/README.md @@ -5,7 +5,7 @@ custom_edit_url: https://github.com/netdata/netdata/edit/master/tests/health_mgm # Health command API tester -The directory `tests/health_cmdapi` contains the test script `health-cmdapi-test.sh` for the [health command API](/web/api/health/README.md). +The directory `tests/health_cmdapi` contains the test script `health-cmdapi-test.sh` for the [health command API](https://github.com/netdata/netdata/blob/master/web/api/health/README.md). The script can be executed with options to prepare the system for the tests, run them and restore the system to its previous state. diff --git a/web/README.md b/web/README.md index 7093ca18f7..eae5793467 100644 --- a/web/README.md +++ b/web/README.md @@ -14,17 +14,17 @@ team and the community, but you can also customize them yourself. There are two primary ways to view Netdata's dashboards: -1. The [local Agent dashboard](/web/gui/README.md) that comes pre-configured with every Netdata installation. You can +1. The [local Agent dashboard](https://github.com/netdata/netdata/blob/master/web/gui/README.md) that comes pre-configured with every Netdata installation. You can see it at `http://NODE:19999`, replacing `NODE` with `localhost`, the hostname of your node, or its IP address. You can customize the contents and colors of the standard dashboard [using - JavaScript](/web/gui/README.md#customizing-the-local-dashboard). + JavaScript](https://github.com/netdata/netdata/blob/master/web/gui/README.md#customizing-the-local-dashboard). 2. The [`dashboard.js` JavaScript library](#dashboardjs), which helps you - [customize the standard dashboards](/web/gui/README.md#customizing-the-local-dashboard) - using JavaScript, or create entirely new [custom dashboards](/web/gui/custom/README.md) or - [Atlassian Confluence dashboards](/web/gui/confluence/README.md). 
+ [customize the standard dashboards](https://github.com/netdata/netdata/blob/master/web/gui/README.md#customizing-the-local-dashboard) + using JavaScript, or create entirely new [custom dashboards](https://github.com/netdata/netdata/blob/master/web/gui/custom/README.md) or + [Atlassian Confluence dashboards](https://github.com/netdata/netdata/blob/master/web/gui/confluence/README.md). -You can also view all the data Netdata collects through the [REST API v1](/web/api/README.md#netdata-rest-api). +You can also view all the data Netdata collects through the [REST API v1](https://github.com/netdata/netdata/blob/master/web/api/README.md#netdata-rest-api). No matter where you use Netdata's charts, you'll want to know how to [use](#using-charts) them. You'll also want to understand how Netdata defines [charts](#charts), [dimensions](#dimensions), [families](#families), and @@ -84,7 +84,7 @@ Netdata organizes metrics into charts, dimensions, families, and contexts. A **chart** is an individual, interactive, always-updating graphic displaying one or more collected/calculated metrics. Charts are generated by -[collectors](/collectors/README.md). +[collectors](https://github.com/netdata/netdata/blob/master/collectors/README.md). Here's the system CPU chart, the first chart displayed on the standard dashboard: @@ -182,7 +182,7 @@ hover over the date above the list of dimensions. A tooltip will appear that shows you two pieces of information: the collector that produces the chart, and the chart's context. -Netdata also uses [contexts for alarm templates](/health/REFERENCE.md#alarm-line-on). You can create an alarm for the +Netdata also uses [contexts for alarm templates](https://github.com/netdata/netdata/blob/master/health/REFERENCE.md#alarm-line-on). You can create an alarm for the `net.packets` context to receive alerts for any chart with that context, no matter which family it's attached to. ## Positive and negative values on charts @@ -215,7 +215,7 @@ all the charts and other visualizations that appear on any Netdata dashboard. You need to put `dashboard.js` on any HTML page that's going to render Netdata charts. -The [custom dashboards documentation](/web/gui/custom/README.md) contains examples of such +The [custom dashboards documentation](https://github.com/netdata/netdata/blob/master/web/gui/custom/README.md) contains examples of such custom HTML pages. ### Generating dashboard.js diff --git a/web/api/badges/README.md b/web/api/badges/README.md index 84409471a4..8f6eca62a0 100644 --- a/web/api/badges/README.md +++ b/web/api/badges/README.md @@ -25,7 +25,7 @@ Similarly, there is [a chart that shows outbound bandwidth per class](http://lon The right one is a **volume** calculation. Netdata calculated the total of the last 86.400 seconds (a day) which gives `kilobits`, then divided it by 8 to make it KB, then by 1024 to make it MB and then by 1024 to make it GB. Calculations like this are quite accurate, since for every value collected, every second, Netdata interpolates it to second boundary using microsecond calculations. -Let's see a few more badge examples (they come from the [Netdata registry](/registry/README.md)): +Let's see a few more badge examples (they come from the [Netdata registry](https://github.com/netdata/netdata/blob/master/registry/README.md)): - **cpu usage of user `root`** (you can pick any user; 100% = 1 core). This will be `green <10%`, `yellow <20%`, `orange <50%`, `blue <100%` (1 core), `red` otherwise (you define thresholds and colors on the URL). 
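To make the thresholds concrete, a hypothetical badge request for the CPU usage of user `root` could encode them directly in the URL (the chart and dimension names are illustrative; adjust them to whatever your Agent actually exposes):

```bash
# Fetch an SVG badge for root's CPU usage, averaged over the last 60 seconds.
# The value_color parameter carries the thresholds: green<10|yellow<20|orange<50|blue<100|red.
curl -o root-cpu-badge.svg \
  "http://NODE:19999/api/v1/badge.svg?chart=users.cpu&dimensions=root&after=-60&group=average&label=root+CPU&units=%25&value_color=green%3C10%7Cyellow%3C20%7Corange%3C50%7Cblue%3C100%7Cred"
```

Replace `NODE` with the address of the Agent that collects the metric.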
diff --git a/web/api/exporters/prometheus/README.md b/web/api/exporters/prometheus/README.md index cf7e2caa8d..1ff86f4e0c 100644 --- a/web/api/exporters/prometheus/README.md +++ b/web/api/exporters/prometheus/README.md @@ -5,6 +5,6 @@ custom_edit_url: https://github.com/netdata/netdata/edit/master/web/api/exporter # Prometheus exporter -Read the Prometheus exporter documentation: [Using Netdata with Prometheus](/exporting/prometheus/README.md). +Read the Prometheus exporter documentation: [Using Netdata with Prometheus](https://github.com/netdata/netdata/blob/master/exporting/prometheus/README.md). diff --git a/web/api/formatters/README.md b/web/api/formatters/README.md index 3e67ac6ee2..4c281f0644 100644 --- a/web/api/formatters/README.md +++ b/web/api/formatters/README.md @@ -12,18 +12,18 @@ The following formats are supported: | format|module|content type|description| |:----:|:----:|:----------:|:----------| -| `array`|[ssv](/web/api/formatters/ssv/README.md)|application/json|a JSON array| -| `csv`|[csv](/web/api/formatters/csv/README.md)|text/plain|a text table, comma separated, with a header line (dimension names) and `\r\n` at the end of the lines| -| `csvjsonarray`|[csv](/web/api/formatters/csv/README.md)|application/json|a JSON array, with each row as another array (the first row has the dimension names)| -| `datasource`|[json](/web/api/formatters/json/README.md)|application/json|a Google Visualization Provider `datasource` javascript callback| -| `datatable`|[json](/web/api/formatters/json/README.md)|application/json|a Google `datatable`| -| `html`|[csv](/web/api/formatters/csv/README.md)|text/html|an html table| -| `json`|[json](/web/api/formatters/json/README.md)|application/json|a JSON object| -| `jsonp`|[json](/web/api/formatters/json/README.md)|application/json|a JSONP javascript callback| -| `markdown`|[csv](/web/api/formatters/csv/README.md)|text/plain|a markdown table| -| `ssv`|[ssv](/web/api/formatters/ssv/README.md)|text/plain|a space separated list of values| -| `ssvcomma`|[ssv](/web/api/formatters/ssv/README.md)|text/plain|a comma separated list of values| -| `tsv`|[csv](/web/api/formatters/csv/README.md)|text/plain|a TAB delimited `csv` (MS Excel flavor)| +| `array`|[ssv](https://github.com/netdata/netdata/blob/master/web/api/formatters/ssv/README.md)|application/json|a JSON array| +| `csv`|[csv](https://github.com/netdata/netdata/blob/master/web/api/formatters/csv/README.md)|text/plain|a text table, comma separated, with a header line (dimension names) and `\r\n` at the end of the lines| +| `csvjsonarray`|[csv](https://github.com/netdata/netdata/blob/master/web/api/formatters/csv/README.md)|application/json|a JSON array, with each row as another array (the first row has the dimension names)| +| `datasource`|[json](https://github.com/netdata/netdata/blob/master/web/api/formatters/json/README.md)|application/json|a Google Visualization Provider `datasource` javascript callback| +| `datatable`|[json](https://github.com/netdata/netdata/blob/master/web/api/formatters/json/README.md)|application/json|a Google `datatable`| +| `html`|[csv](https://github.com/netdata/netdata/blob/master/web/api/formatters/csv/README.md)|text/html|an html table| +| `json`|[json](https://github.com/netdata/netdata/blob/master/web/api/formatters/json/README.md)|application/json|a JSON object| +| `jsonp`|[json](https://github.com/netdata/netdata/blob/master/web/api/formatters/json/README.md)|application/json|a JSONP javascript callback| +| 
`markdown`|[csv](https://github.com/netdata/netdata/blob/master/web/api/formatters/csv/README.md)|text/plain|a markdown table| +| `ssv`|[ssv](https://github.com/netdata/netdata/blob/master/web/api/formatters/ssv/README.md)|text/plain|a space separated list of values| +| `ssvcomma`|[ssv](https://github.com/netdata/netdata/blob/master/web/api/formatters/ssv/README.md)|text/plain|a comma separated list of values| +| `tsv`|[csv](https://github.com/netdata/netdata/blob/master/web/api/formatters/csv/README.md)|text/plain|a TAB delimited `csv` (MS Excel flavor)| For examples of each format, check the respective module documentation. diff --git a/web/api/formatters/csv/README.md b/web/api/formatters/csv/README.md index df7c11efa6..fc5ffec1b8 100644 --- a/web/api/formatters/csv/README.md +++ b/web/api/formatters/csv/README.md @@ -5,7 +5,7 @@ custom_edit_url: https://github.com/netdata/netdata/edit/master/web/api/formatte # CSV formatter -The CSV formatter presents [results of database queries](/web/api/queries/README.md) in the following formats: +The CSV formatter presents [results of database queries](https://github.com/netdata/netdata/blob/master/web/api/queries/README.md) in the following formats: | format|content type|description| | :----:|:----------:|:----------| diff --git a/web/api/formatters/json/README.md b/web/api/formatters/json/README.md index a0f8108e73..75f729adab 100644 --- a/web/api/formatters/json/README.md +++ b/web/api/formatters/json/README.md @@ -5,7 +5,7 @@ custom_edit_url: https://github.com/netdata/netdata/edit/master/web/api/formatte # JSON formatter -The CSV formatter presents [results of database queries](/web/api/queries/README.md) in the following formats: +The JSON formatter presents [results of database queries](https://github.com/netdata/netdata/blob/master/web/api/queries/README.md) in the following formats: | format | content type | description| |:----:|:----------:|:----------| diff --git a/web/api/formatters/ssv/README.md b/web/api/formatters/ssv/README.md index d9e193d66e..4ca2a64caa 100644 --- a/web/api/formatters/ssv/README.md +++ b/web/api/formatters/ssv/README.md @@ -5,7 +5,7 @@ custom_edit_url: https://github.com/netdata/netdata/edit/master/web/api/formatte # SSV formatter -The SSV formatter sums all dimensions in [results of database queries](/web/api/queries/README.md) +The SSV formatter sums all dimensions in [results of database queries](https://github.com/netdata/netdata/blob/master/web/api/queries/README.md) to a single value and returns a list of such values showing how it changes through time. It supports the following formats: diff --git a/web/api/formatters/value/README.md b/web/api/formatters/value/README.md index a51e32de76..5b75ded7cf 100644 --- a/web/api/formatters/value/README.md +++ b/web/api/formatters/value/README.md @@ -5,7 +5,7 @@ custom_edit_url: https://github.com/netdata/netdata/edit/master/web/api/formatte # Value formatter -The Value formatter presents [results of database queries](/web/api/queries/README.md) as a single value. +The Value formatter presents [results of database queries](https://github.com/netdata/netdata/blob/master/web/api/queries/README.md) as a single value. To calculate the single value to be returned, it sums the values of all dimensions. @@ -18,7 +18,7 @@ The Value formatter respects the following API `&options=`: | `min2max` | yes | to return the delta from the minimum value to the maximum value (across dimensions)| The Value formatter is not exposed by the API by itself. 
-Instead it is used by the [`ssv`](/web/api/formatters/ssv/README.md) formatter -and [health monitoring queries](/health/README.md). +Instead it is used by the [`ssv`](https://github.com/netdata/netdata/blob/master/web/api/formatters/ssv/README.md) formatter +and [health monitoring queries](https://github.com/netdata/netdata/blob/master/health/README.md). diff --git a/web/api/health/README.md b/web/api/health/README.md index 9ec8f31c01..bfdd0ac682 100644 --- a/web/api/health/README.md +++ b/web/api/health/README.md @@ -72,7 +72,7 @@ You can access the API via GET requests, by adding the bearer token to an `Autho curl "http://NODE:19999/api/v1/manage/health?cmd=RESET" -H "X-Auth-Token: Mytoken" ``` -By default access to the health management API is only allowed from `localhost`. Accessing the API from anything else will return a 403 error with the message `You are not allowed to access this resource.`. You can change permissions by editing the `allow management from` variable in `netdata.conf` within the [web] section. See [web server access lists](/web/server/README.md#access-lists) for more information. +By default access to the health management API is only allowed from `localhost`. Accessing the API from anything else will return a 403 error with the message `You are not allowed to access this resource.`. You can change permissions by editing the `allow management from` variable in `netdata.conf` within the [web] section. See [web server access lists](https://github.com/netdata/netdata/blob/master/web/server/README.md#access-lists) for more information. The command `RESET` just returns Netdata to the default operation, with all health checks and notifications enabled. If you've configured and entered your token correctly, you should see the plain text response `All health checks and notifications are enabled`. @@ -126,7 +126,7 @@ curl "http://NODE:19999/api/v1/manage/health?cmd=SILENCE&context=load" -H "X-Aut #### Selection criteria -The `selection criteria` are key/value pairs, in the format `key : value`, where value is a Netdata [simple pattern](/libnetdata/simple_pattern/README.md). This means that you can create very powerful selectors (you will rarely need more than one or two). +The `selection criteria` are key/value pairs, in the format `key : value`, where value is a Netdata [simple pattern](https://github.com/netdata/netdata/blob/master/libnetdata/simple_pattern/README.md). This means that you can create very powerful selectors (you will rarely need more than one or two). The accepted keys for the `selection criteria` are the following: @@ -220,6 +220,6 @@ The file's location is configurable in `netdata.conf`. The default is shown belo ### Further reading -The test script under [tests/health_mgmtapi](/tests/health_mgmtapi/README.md) contains a series of tests that you can either run or read through to understand the various calls and responses better. +The test script under [tests/health_mgmtapi](https://github.com/netdata/netdata/blob/master/tests/health_mgmtapi/README.md) contains a series of tests that you can either run or read through to understand the various calls and responses better. diff --git a/web/api/queries/README.md b/web/api/queries/README.md index 44cdd05b41..2a17ac7840 100644 --- a/web/api/queries/README.md +++ b/web/api/queries/README.md @@ -88,7 +88,7 @@ To disable alignment, pass `&options=unaligned` to the query. To execute the query, the engine evaluates all dimensions of the chart, one after another. 
-The engine does not evaluate dimensions that do not match the [simple pattern](/libnetdata/simple_pattern/README.md) +The engine does not evaluate dimensions that do not match the [simple pattern](https://github.com/netdata/netdata/blob/master/libnetdata/simple_pattern/README.md) given at the `dimensions` parameter, except when `options=percentage` is given (this option requires all the dimensions to be evaluated to find the percentage of each dimension vs to chart total). diff --git a/web/gui/README.md b/web/gui/README.md index 69db6becbc..fbd7da4df9 100644 --- a/web/gui/README.md +++ b/web/gui/README.md @@ -13,16 +13,16 @@ before: action](https://user-images.githubusercontent.com/1153921/101513938-fae28380-3939-11eb-9434-8ad86a39be62.gif) Learn more about how dashboards work and how they're populated using the `dashboards.js` file in our [web dashboards -overview](/web/README.md). +overview](https://github.com/netdata/netdata/blob/master/web/README.md). By default, Netdata starts a web server for its dashboard at port `19999`. Open up your web browser of choice and navigate to `http://NODE:19999`, replacing `NODE` with the IP address or hostname of your Agent. If you're unsure, try `http://localhost:19999` first. -Netdata uses an [internal, static-threaded web server](/web/server/README.md) to host the HTML, CSS, and JavaScript +Netdata uses an [internal, static-threaded web server](https://github.com/netdata/netdata/blob/master/web/server/README.md) to host the HTML, CSS, and JavaScript files that make up the local Agent dashboard. You don't have to configure anything to access it, although you can adjust -[your settings](/web/server/README.md#other-netdataconf-web-section-options) in the `netdata.conf` file, or run Netdata -behind an [Nginx proxy](https://learn.netdata.cloud/docs/agent/running-behind-nginx), and so on. +[your settings](https://github.com/netdata/netdata/blob/master/web/server/README.md#other-netdataconf-web-section-options) in the `netdata.conf` file, or run Netdata +behind an [Nginx proxy](https://github.com/netdata/netdata/blob/master/docs/Running-behind-nginx.md), and so on. ## Navigating the local dashboard @@ -40,8 +40,8 @@ dashboard](https://user-images.githubusercontent.com/1153921/101509403-f7e59400- Netdata is broken up into multiple **sections**, such as **System Overview**, **CPU**, **Disk**, and more. Inside each section you'll find a number of charts, -broken down into [contexts](/web/README.md#contexts) and -[families](/web/README.md#families). +broken down into [contexts](https://github.com/netdata/netdata/blob/master/web/README.md#contexts) and +[families](https://github.com/netdata/netdata/blob/master/web/README.md#families). An example of the **Memory** section on a Linux desktop system. @@ -69,7 +69,7 @@ Use the calendar to select multiple days. Click on a date to begin the timeframe Click **Apply** to re-render all visualizations with new metrics data, or **Clear** to restore the default timeframe. -[Increase the metrics retention policy](/docs/store/change-metrics-storage.md) for your node to see more historical +[Increase the metrics retention policy](https://github.com/netdata/netdata/blob/master/docs/store/change-metrics-storage.md) for your node to see more historical timeframes. ### Metrics menus @@ -80,7 +80,7 @@ section, and menus link to the section they're associated with. 
![A screenshot of metrics menus](https://user-images.githubusercontent.com/1153921/80834638-f08f2880-8ba5-11ea-99ae-f610b2885fd6.png) Most metrics menu items will contain several **submenu** entries, which represent any -[families](/web/README.md#families) from that section. Netdata automatically +[families](https://github.com/netdata/netdata/blob/master/web/README.md#families) from that section. Netdata automatically generates these submenu entries. Here's a **Disks** menu with several submenu entries for each disk drive and @@ -100,7 +100,7 @@ a War Room's name to jump to the Netdata Cloud web interface. menus](https://user-images.githubusercontent.com/1153921/80837210-3f8b8c80-8bab-11ea-9c75-128c2d823ef8.png) If you want to know more about how Cloud populates this menu, and the Agent-Cloud integration at a high level, see our -document on [using the Agent with Netdata Cloud](/docs/agent-cloud.md). +document on [using the Agent with Netdata Cloud](https://github.com/netdata/netdata/blob/master/docs/agent-cloud.md). ## Customizing the local dashboard @@ -163,5 +163,5 @@ file](https://user-images.githubusercontent.com/1153921/62798924-570e6c80-ba94-1 ## Custom dashboards -For information on creating custom dashboards from scratch, see the [custom dashboards](/web/gui/custom/README.md) or -[Atlassian Confluence dashboards](/web/gui/confluence/README.md) guides. +For information on creating custom dashboards from scratch, see the [custom dashboards](https://github.com/netdata/netdata/blob/master/web/gui/custom/README.md) or +[Atlassian Confluence dashboards](https://github.com/netdata/netdata/blob/master/web/gui/confluence/README.md) guides. diff --git a/web/gui/confluence/README.md b/web/gui/confluence/README.md index 64dacdf38d..9e7b8025f4 100644 --- a/web/gui/confluence/README.md +++ b/web/gui/confluence/README.md @@ -85,7 +85,7 @@ This badge is now auto-refreshing. It will update itself based on the update fre > Keep in mind you can add badges with custom Netdata queries too. Netdata automatically creates badges for all the > alarms, but every chart, every dimension on every chart, can be used for a badge. And Netdata badges are quite -> powerful! Check [Creating Badges](/web/api/badges/README.md) for more information on badges. +> powerful! Check [Creating Badges](https://github.com/netdata/netdata/blob/master/web/api/badges/README.md) for more information on badges. So, let's create a table and add this badge for both our web servers: diff --git a/web/gui/custom/README.md b/web/gui/custom/README.md index 23cd924ead..0751f20874 100644 --- a/web/gui/custom/README.md +++ b/web/gui/custom/README.md @@ -245,7 +245,7 @@ Each chart can get data from a different Netdata server. You can specify the Net > ``` -If you have ephemeral monitoring setup ([More info here](/streaming/README.md#monitoring-ephemeral-nodes)) and have no +If you have an ephemeral monitoring setup ([More info here](https://github.com/netdata/netdata/blob/master/streaming/README.md#monitoring-ephemeral-nodes)) and have no direct access to the nodes' dashboards, you can use the following: ```html @@ -369,7 +369,7 @@ select specific dimensions using this: ``` Netdata supports comma (`,`) or pipe (`|`) separated [simple -patterns](/libnetdata/simple_pattern/README.md) for dimensions. By default it +patterns](https://github.com/netdata/netdata/blob/master/libnetdata/simple_pattern/README.md) for dimensions. By default it searches for both dimension IDs and dimension NAMEs. 
You can control the target of the match with: `data-append-options="match-ids"` or `data-append-options="match-names"`. Spaces in `data-dimensions=""` are matched @@ -437,7 +437,7 @@ it, using this: ### API options -You can append Netdata **[REST API v1](/web/api/README.md)** data options, using this: +You can append Netdata **[REST API v1](https://github.com/netdata/netdata/blob/master/web/api/README.md)** data options, using this: ```html