Logs Management (#13291)

This PR adds the logs-management external plugin. 
See the included README for an extensive list of features. 
-------------------------------------------------------------------------------------

* Add proper status return in JSON response of functions

* Add column info to functions

* Escape special characters when returning JSON response

* Add proper functions help and defaults. Fix help not working

* Add 'logs_management_meta' object in functions results

* Fix compiler warnings

* Replace tabs with 3 spaces in web_client_api_request_v1_logsmanagement_sources()

* Add 'sources' in functions to display list of log sources

* Update functions column values for logs

* Update chart titles and remove '/s' from units

* Add support for compound queries in circular buffers

* Refactor circ_buff_search() to get rid of circ_buff_search_compound()

* Fix incorrect docker events nano timestamp padding

* Fixed botched rebasing

* Replace get_unix_time_ms() with now_realtime_msec()

* Remove binary generation from Fluent-Bit lib build

* Fix compiler warnings due to new timestamp type

* Remove STDIN and STDOUT support from Fluent-Bit library

* Initial support for FLB_KMSG kernel logs collection

* Add kernel logs charts

* Add kernel logs subsystem and device charts

* Skip collection of pre-existing logs in kmsg ring buffer

* Add example of custom kmsg charts

* Add extra initialization error logs

* Fix bug where a failure of the Docker Events collector disabled the whole logs management engine

* Remove redundant FLB output code

* Remove some obsolete TODO comments

* Remove some commented out error/debug prints

* Disable some Fluent-Bit config options not required

* Make circular buffer spare items option configurable

* Add DB mode configuration option

* Replace p_file_infos_arr->data[i] with p_file_info in db_api.c

* Remove db_loop due to all function calls being synchronous

* Add initial README.md

* Add DB mode = none changes

* Add a simple webpage to visualize log query results

* Add support for source selection to logs_query.html

* Add option to query multiple log sources

* Mark non-queryable sources as such in logs_query.html

* Add option to use either GET or functions request in logs_query.html

* Install logs_query.html when running stress tests

* Update README.md requirements

* Change installer behavior to build logs management by default

* Disable logs management at runtime by default

* Add global db mode configuration in 'logs management' config section

* Split logsmanagement.conf into required & optional sections

* Remove --enable-logsmanagement from stress test script

* Add global config option for 'circular buffer max size MiB'

* Add global config option for 'circular buffer drop logs if full'

* Update 'General Configuration' in README.md

* Add global config option for remaining optional settings

* Add systemd collector requirements to TOC

* README: Convert general configuration to table

* README: Fix previous botched commit

* Enable logs management by default when building for stress testing

* Move logging to collector.log from error.log

* Fix contenttype compilation errors

* Move logging to collector.log in plugin_logsmanagement.c

* Rename 'rows' to 'records' in charts

* Add Netdata error.log parsing

* Add more dashboard descriptions

* Sanitize chart ids

* Attempt to fix failing CI

* Update README.md

* Update README.md

* Another attempt to fix CI failures

* Fix undefined reference to 'uv_sleep' on certain platforms

* Support FLB forward input and FLB output plugins.

Squashed commit of the following:

commit 55e2bf4fb34a2e02ffd0b280790197310a5299f3
Author: Dim-P <dimitris1703@gmail.com>
Date:   Thu Apr 13 16:41:09 2023 +0300

    Remove error.log from stock config

commit bbdc62c2c9727359bc3c8ef8c33ee734d0039be7
Author: Dim-P <dimitris1703@gmail.com>
Date:   Thu Apr 13 16:37:48 2023 +0300

    Add cleanup of Fluent Bit outputs in p_file_info_destroy()

commit 09b0aa4268ec1ccef160c99c5d5f31b6388edd28
Author: Dim-P <dimitris1703@gmail.com>
Date:   Thu Apr 13 14:34:17 2023 +0300

    Some code and config cleanup

commit 030d074667d5ee2cad10f85cd836ca90e29346ad
Author: Dim-P <dimitris1703@gmail.com>
Date:   Thu Apr 13 13:04:08 2023 +0300

    Enable additional Fluent Bit output plugins for shared library

commit 490aa5d44caa38042521d24c6b886b8b4a59a73c
Author: Dim-P <dimitris1703@gmail.com>
Date:   Thu Apr 13 01:33:19 2023 +0300

    Add initialization of Fluent Bit user-configured outputs

commit c96e9fe9cea96549aa5eae09d0deeb130da02793
Author: Dim-P <dimitris1703@gmail.com>
Date:   Tue Apr 4 23:13:16 2023 +0100

    Complete read of parameters for FLB outputs config

commit 00988897f9b86d1ecc5c141b19df7ad7d74f7e96
Author: Dim-P <dimitris1703@gmail.com>
Date:   Mon Apr 3 19:43:31 2023 +0100

    Update README.md

commit 6deea5399c2707942aeaa51408f999ca45dfd351
Author: Dim-P <dimitris1703@gmail.com>
Date:   Mon Apr 3 16:02:28 2023 +0100

    Refactor Syslog_parser_config_t and add Flb_socket_config_t

commit 7bf998a4c298bbd489ef735c56a6e85a137772c9
Author: Dim-P <dimitris1703@gmail.com>
Date:   Mon Apr 3 14:19:57 2023 +0100

    Update README.md

commit c353d194b12c54f134936072ebaded0424d73cc0
Author: Dim-P <dimitris1703@gmail.com>
Date:   Fri Mar 31 14:52:57 2023 +0100

    Update README.md

commit 6be726eaff3738ba7884de799aa52949833af65a
Author: Dim-P <dimitris1703@gmail.com>
Date:   Fri Mar 31 13:06:29 2023 +0100

    Update README. Fix docker_events streaming

commit 6aabfb0f1ef0529a7a0ecbaf940bc0952bf42518
Author: Dim-P <dimitris1703@gmail.com>
Date:   Thu Mar 30 21:27:45 2023 +0100

    Fix stuck in infinite loop bug for FLB_GENERIC, FLB_WEB_LOG and FLB_SERIAL remote log sources

commit eea6346b708cc7a5ce6e2249366870f4924eabae
Author: Dim-P <dimitris1703@gmail.com>
Date:   Thu Mar 30 21:04:12 2023 +0100

    Remove callback that searches for streamed p_file_info match

commit bc9c5a523b0b0ab5588adbff391a43ba8d9a0cdf
Author: Dim-P <dimitris1703@gmail.com>
Date:   Thu Mar 30 15:51:39 2023 +0100

    Basic streaming works

commit 4c80f59f0214bc07895f0b2edca47cb02bc06420
Author: Dim-P <dimitris1703@gmail.com>
Date:   Tue Mar 28 22:05:22 2023 +0100

    WIP

commit eeb37a71b602fb0738fe8077ccddc0a8ce632304
Author: Dim-P <dimitris1703@gmail.com>
Date:   Mon Mar 27 22:52:09 2023 +0100

    Add generic forward streaming input

commit 1459b91847c80c4d97de96b75b00771039458ad6
Author: Dim-P <dimitris1703@gmail.com>
Date:   Thu Mar 23 18:50:14 2023 +0000

    FLB_FORWARD: WIP

* Add number of logs per item in DB and in queries response

* Fix wrong number of lines stored in DB for web logs

* Refactor number of logs parsers and charts code

* Add option to toggle number of collected logs metrics and charts

* Disable kmsg log collector by default

* Fix logs_query.html to work with any server ip

* Fix regressed wrong number of web log lines bug

* Change query quota type from size_t to long long

* Update alpine version when searching for fts-dev requirements

* Update query results to return both requested and actual quota

* Fix bug where circular buffers were not read when head == read but the buffer was not empty

* Squashed commit of the following:

commit 34edb316a737f3edcffcf8fa88a3801599011495
Author: Dim-P <dimitris1703@gmail.com>
Date:   Thu May 4 20:02:36 2023 +0100

    Comment out some debug prints

commit 51b9b87a88516186530f5b4b65f785b543fefe8c
Author: Dim-P <dimitris1703@gmail.com>
Date:   Fri Apr 28 19:21:54 2023 +0100

    Fix wrong filenames in BLOBS_TABLE after rotation

commit 6055fc2893b48661af324f20ee61511a40abbc02
Author: Dim-P <dimitris1703@gmail.com>
Date:   Fri Apr 28 12:22:04 2023 +0100

    Add chart showing number of circular buffer items

commit 0bb5210b0847f4b7596f633ec96fc10aa8ebc791
Author: Dim-P <dimitris1703@gmail.com>
Date:   Tue Apr 25 16:47:29 2023 +0300

    Various fixes.

    Fix num_lines calculation.
    Add debug prints for circ buffers.
    Remove circ buff spare items option.
    Fix calculation of circ buff memory consumption.
    Add buff_realloc_rwlock for db_mode = none case.
    Fix circ buff read to be done correctly when buff is full.

commit f494af8c95be84404c7d854494d26da3bcbd3ad7
Author: Dim-P <dimitris1703@gmail.com>
Date:   Fri Apr 21 16:03:50 2023 +0300

    Fix freez() on non-malloced address

commit cce6d09e9cf9b847aface7309643e2c0a6041390
Author: Dim-P <dimitris1703@gmail.com>
Date:   Fri Apr 21 15:41:25 2023 +0300

    Add option to dynamically expand circ buffs when full

* Use log timestamps when possible, instead of collection timestamps.
Also, add config options for Fluent Bit engine and remove tail_plugin.
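
A minimal sketch of the intended behavior, assuming a hypothetical helper name (record_timestamp() is illustrative; only now_realtime_msec() exists in the code base):

```c
#define _POSIX_C_SOURCE 200809L
#include <stdbool.h>
#include <stdint.h>
#include <time.h>

typedef uint64_t msec_t;

/* Stand-in for libnetdata's now_realtime_msec(), so the sketch compiles on its own. */
static msec_t now_realtime_msec(void) {
    struct timespec ts;
    clock_gettime(CLOCK_REALTIME, &ts);
    return (msec_t)ts.tv_sec * 1000 + (msec_t)(ts.tv_nsec / 1000000);
}

/* Hypothetical helper: prefer the timestamp parsed from the log record itself,
 * falling back to the collection time when parsing failed or when
 * 'use log timestamp = no' is configured. */
static msec_t record_timestamp(msec_t parsed_log_timestamp_ms, bool use_log_timestamp) {
    if (use_log_timestamp && parsed_log_timestamp_ms != 0)
        return parsed_log_timestamp_ms;  /* timestamp from the log line */
    return now_realtime_msec();          /* collection timestamp */
}
```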

Squashed commit of the following:

commit b16a02eb6e3a90565c90e0a274b87b123e7b18e5
Author: Dim-P <dimitris1703@gmail.com>
Date:   Tue May 16 19:38:57 2023 +0100

    Add Fluent Bit service config options to netdata.conf. Add monitoring of new log file fluentbit.log

commit ab77c286294548ea62a3879ac0f8b8bbfe6a0687
Author: Dim-P <dimitris1703@gmail.com>
Date:   Mon May 15 21:25:17 2023 +0100

    Remove some debug prints

commit 46d64ad2434e69b1d20720297aec1ddb869e1f84
Author: Dim-P <dimitris1703@gmail.com>
Date:   Mon May 15 21:19:32 2023 +0100

    Fix null values in charts

commit 8ec96821d6a882f28cbd19244ebdfc86c807d2f4
Author: Dim-P <dimitris1703@gmail.com>
Date:   Mon May 15 17:43:04 2023 +0100

    Update README.md to reflect log timestamp changes

commit 079a91858cf9db2f74711581235bc17eb97c7dad
Author: Dim-P <dimitris1703@gmail.com>
Date:   Mon May 15 16:23:14 2023 +0100

    Add configurable option for 'update timeout'

commit 72b5e2505d4657fcbb5ccb6eeee00c45eb0b51ff
Author: Dim-P <dimitris1703@gmail.com>
Date:   Mon May 15 16:05:08 2023 +0100

    Revert logsmanagement.conf to logs-manag-master one

commit 70d0ea6f8d272fff318aa3095d90a78dcc3411a7
Author: Dim-P <dimitris1703@gmail.com>
Date:   Mon May 15 16:02:00 2023 +0100

    Fix bug of circ buff items not marked as done

commit 5716420838771edb7842be4669bf96235b15cf71
Author: Dim-P <dimitris1703@gmail.com>
Date:   Mon May 15 16:01:41 2023 +0100

    Fix do_custom_charts_update() to work for all log sources

commit a8def8f53fd25c3efa56ef27e267df3261913a8e
Author: Dim-P <dimitris1703@gmail.com>
Date:   Fri May 12 18:20:20 2023 +0100

    Remove GENERIC and WEB_LOG cases. Remove tail_plugin.c/h. Remove generic_parser().

commit 1cf05966e33491dbeb9b877f18d1ea8643aabeba
Author: Dim-P <dimitris1703@gmail.com>
Date:   Fri May 12 16:54:59 2023 +0100

    Fix FLB_GENERIC and FLB_SERIAL to work with new timestamp logic

commit df3266810531f1af5f99b666fbf44c503b304a39
Author: Dim-P <dimitris1703@gmail.com>
Date:   Fri May 12 14:55:04 2023 +0100

    Get rid of *_collect() functions and restructure plugin_logsmanagement workers

commit 3eee069842f3257fffe60dacfc274363bc43491c
Author: Dim-P <dimitris1703@gmail.com>
Date:   Fri May 12 14:28:33 2023 +0100

    Fix wrong order of #define _XOPEN_SOURCE 700 in parser.c

commit 941aa80cb55d5a7d6fe8926da930d9803be52312
Author: Dim-P <dimitris1703@gmail.com>
Date:   Thu May 11 22:27:39 2023 +0100

    Update plugin_logsmanagement_web_log to use new timestamp logic and to support delayed logs. Refactor req_method metrics code.

commit 427a7d0e2366d43cb5eab7daa1ed82dfc3bc8bc8
Author: Dim-P <dimitris1703@gmail.com>
Date:   Tue May 9 20:26:08 2023 +0100

    Update plugin_logsmanagement_kernel to use new timestamp logic and to support delayed charts

commit a7e95a6d3e5c8b62531b671fd3ec7b8a3196b5bb
Author: Dim-P <dimitris1703@gmail.com>
Date:   Tue May 9 15:22:14 2023 +0100

    Update plugin_logsmanagement_systemd to use new timestamp logic and support delayed charts

commit 48237ac2ce49c82abdf2783952fd9f0ef05d72e1
Author: Dim-P <dimitris1703@gmail.com>
Date:   Tue May 9 13:29:44 2023 +0100

    Refactor number of collected logs chart update code

commit a933c8fcae61c23fa0ec6d0074526ac5d243cf16
Author: Dim-P <dimitris1703@gmail.com>
Date:   Mon May 8 22:11:19 2023 +0100

    Update plugin_logsmanagement_docker_ev to use new timestamp logic and support delayed charts

commit 5d8db057155affd5cb721399a639d75a81801b7f
Author: Dim-P <dimitris1703@gmail.com>
Date:   Fri May 5 15:18:06 2023 +0100

    Change some Fluent Bit collectors to use log timestamps instead of collection timestamps

* Remove some unused defines and typedefs

* Improve flb_init()

* Update file-level doxygen. Add SPDX license declaration.

* Better handling of termination of Fluent Bit

* Better handling of DB errors. Various fixes.

Squashed commit of the following:

commit f55feea1274c3857eda1e9d899743db6e3eb5bf5
Author: Dim-P <dimitris1703@gmail.com>
Date:   Tue Jun 6 13:28:00 2023 +0100

    Fix web log parsing in case of lines terminated by \r

commit 9e05758a4ecfac57a0db14757cff9536deda51d8
Author: Dim-P <dimitris1703@gmail.com>
Date:   Mon Jun 5 20:42:05 2023 +0100

    Fix warnings due to -Wformat-truncation=2

commit 63477666fa42446d74693aae542580d4e1e81f03
Author: Dim-P <dimitris1703@gmail.com>
Date:   Mon Jun 5 16:48:45 2023 +0100

    Autodiscovery of Netdata error.log based on netdata_configured_log_dir

commit cab5e6d6061f4259172bbf72666e8b4a3a35dd66
Author: Dim-P <dimitris1703@gmail.com>
Date:   Mon Jun 5 16:24:39 2023 +0100

    Replace Forward config default string literals with macros

commit 4213398031dbb53afbc943d76bf7df202d12bf6f
Author: Dim-P <dimitris1703@gmail.com>
Date:   Mon Jun 5 15:56:29 2023 +0100

    Proper cleanup of flb_lib_out_cb *callback in case of error

commit f76fd7cc7bc2d0241e4d3517f61ae192d4246300
Author: Dim-P <dimitris1703@gmail.com>
Date:   Mon Jun 5 15:36:07 2023 +0100

    Proper termination of Forward input and respective log sources in case of error

commit 3739fd96c29e13298eb3a6e943a63172cdf39d5f
Author: Dim-P <dimitris1703@gmail.com>
Date:   Thu Jun 1 21:19:56 2023 +0100

    Merge db_search() and db_search_compound()

commit fcface90cb0a6df3c3a2de5e1908b1b3467dd579
Author: Dim-P <dimitris1703@gmail.com>
Date:   Thu Jun 1 19:17:26 2023 +0100

    Proper error handling in db_search() and db_search_compound(). Refactor the code too.

commit c10667ebee2510a1af77114b3a7e18a0054b5dae
Author: Dim-P <dimitris1703@gmail.com>
Date:   Thu Jun 1 14:23:34 2023 +0100

    Update DB mode and dir when switching to db_mode_none

commit d37d4c3d79333bb9fa430650c13ad625458620e8
Author: Dim-P <dimitris1703@gmail.com>
Date:   Thu Jun 1 12:56:13 2023 +0100

    Fix flb_stop() SIGSEGV

commit 892e231c68775ff1a1f052d292d26384f1ef54b1
Author: Dim-P <dimitris1703@gmail.com>
Date:   Tue May 30 21:14:58 2023 +0100

    Switch to db_writer_db_mode_none if db_writer_db_mode_full encounters error

commit f7a0c2135ff61d3a5b0460ec5964eb6bce164bd6
Author: Dim-P <dimitris1703@gmail.com>
Date:   Mon May 29 21:41:21 2023 +0100

    Complete error handling changes to db_init(). Add some const type qualifiers. Refactor some code for readability

commit 13dbeac936d22958394cb1aaec394384f5a93fdd
Author: Dim-P <dimitris1703@gmail.com>
Date:   Mon May 29 17:14:17 2023 +0100

    More error handling changes in db_init(). Change some global default settings if stress testing.

commit eb0691c269cd09054190bf0ee9c4e9247b4a2548
Author: Dim-P <dimitris1703@gmail.com>
Date:   Fri May 26 23:29:12 2023 +0100

    Better handling of db writer threads errors. Add db timings charts

* Fix mystrsep() replaced by strsep_skip_consecutive_separators()

* Fix older GCC failure due to label before declaration

* Fix failed builds when using libuv <= v1.19

* Fix some Codacy warnings

* Fix warning: implicit declaration of function ‘strsep’

* Use USEC_PER_SEC instead of 1000000ULL

* Use UUID_STR_LEN instead of GUID_LEN + 1

* Combine multiple 'ln -sf' Docker instructions to one

* Update README with systemd development libraries requirement

* Comment out mallocz() success checks in parser_csv()

* Fix shellcheck warnings

* Remove asserts for empty SYSLOG_IDENTIFIER or PID

* Fix FreeBSD failing builds

* Fix some more shellcheck warnings

* Update Alpine fts-dev required packages

* First changes to use web log timestamp for correct metrics timings

* Initial work to add test_parse_web_log_line() unit test

* Complete test_parse_web_log_line() tests

* Improve parse_web_log_line() for better handling of \n, \r, double quotes etc.

* Fix 'Invalid TIME' error when timezone sign is negative

* Add more logs to compression unit test case

* Misc read_last_line() improvements

* Fix failing test_auto_detect_web_log_parser_config() when test case terminated without '\n'

* Remove unused preprocessor macro

* Factor out setup of parse_config_expected_num_fields

* Add test for count_fields()

* Add unit test for read_last_line()

* Fix a read_last_line() bug

* Remove PLUGIN[logsmanagement] static thread and update charts synchronously, right before data buffering

* Fix web log parser potential SIGSEGV

* Fix web log metrics bug where they could show delayed by 1 collection interval

* WIP: Add multiline support to kmsg logs and fix metric timings

* Fix kmsg subsystem and device parsing and metrics

* Add option 'use log timestamp' to select between log timestamps or collection timestamps

* Add 'Getting Started' docs section

* Move logs management functions code to separate source files

* Add 'Nginx access.log' chart description

* Remove logsmanagement.plugin source files

* Fix some memory leaks

* Improve cleanup of logsmanagement_main()

* Fix a potential memory leak of fwd_input_out_cb

* Better termination and cleanup of main_loop and its handles

* Fix main_db_dir access() check bug

* Avoid uv_walk() SIGSEGV

* Remove main_db_dir access() check

* Better termination and cleanup of DB code

* Remove flb_socket_config_destroy() that could cause a segmentation fault

* Disable unique client IPs - all-time chart by default

* Update README.md

* Fix debug() -> netdata_log_debug()

* Fix read_last_line()

* Fix timestamp sign adjustment and wrong unit tests

* Change WEB_CLIENT_ACL_DASHBOARD to WEB_CLIENT_ACL_DASHBOARD_ACLK_WEBRTC

* Do not parse web log timestamps if 'use_log_timestamp = no'

* Add Logs Management back into buildinfo.c

* Update README.md

* Do not build Fluent Bit executable binary

* Change logs rate chart to RRDSET_TYPE_LINE

* Add kludge to prevent metrics breaking due to out of order logs

* Fix wrong flb_tmp_buff_cpy_timer expiration

* Refactor initialization of input plugin for local log sources.

* Rename FLB_GENERIC collector to FLB_TAIL.

* Switch 'Netdata fluentbit.log' to disabled by default

* Add 'use inotify' configuration option

* Update  in README.md

* Add docker event actions metrics

* Update README.md to include event action chart

* Remove commented out PLUGIN[logsmanagement] code block

* Fix some warnings

* Add documentation for outgoing log streaming and exporting

* Fix some code block formatting in README.md

* Refactor code related to error status of log query results and add new invalid timestamp case

* Reduce query mem allocs and fix end timestamp == 0 bug

* Add support for duplicate timestamps in db_search()

* Add support for duplicate timestamps in circ_buff_search()

* Fix docker events contexts

* Various query fixes prior to reverse order search.

- Add reverse qsort() function in circ buffers (see the comparator sketch after this list).
- Fix issues to properly support duplicate timestamps.
- Separate requested from actual timestamps in query parameters.
- Rename results buffer variable name to be consistent between DB and
  buffers.
- Remove default start and end timestamp from functions.
- Improve handling of invalid quotas provided by users.
- Rename 'until' timestamp name to 'to'.
- Increase default quota to 10MB from 1MB.
- Allow start timestamp to be greater than end timestamp.
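
A minimal sketch of what the reverse (descending-timestamp) qsort() comparator could look like; the item type and field names are illustrative, not the actual circular-buffer structures:

```c
#include <stdint.h>
#include <stdlib.h>

/* Illustrative buffered item: each entry carries the timestamp it was logged at. */
typedef struct {
    uint64_t timestamp_ms;
    /* ... payload omitted ... */
} buff_item_t;

/* qsort() comparator for descending timestamp order: newest entries sort first,
 * which matches what a descending-order query wants to return. */
static int compare_timestamps_desc(const void *a, const void *b) {
    const buff_item_t *ia = (const buff_item_t *)a;
    const buff_item_t *ib = (const buff_item_t *)b;
    if (ib->timestamp_ms > ia->timestamp_ms) return 1;
    if (ib->timestamp_ms < ia->timestamp_ms) return -1;
    return 0;  /* duplicate timestamps keep whatever order qsort() picks */
}

/* Usage: qsort(items, num_items, sizeof(buff_item_t), compare_timestamps_desc); */
```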

* Complete descending timestamp search for circular buffers

* Complete descending timestamp search for DB

* Remove MEASURE_QUERY_TIME code block

* Complete descending timestamp search when data resides in both DB and circular buffers

* Use pointer instead of copying res_hdr in query results

* Refactor web log timezone parsing to use static memory allocation

* Add stats for CPU user & system time per MiB of query results

* Micro-optimization to slightly speed up queries

* More micro-optimizations and some code cleanup

* Remove LOGS_QUERY_DATA_FORMAT_NEW_LINE option

* Escape iscntrl() chars at collection rather at query

* Reduce number of buffer_strcat() calls

* Complete descending timestamp order queries for web_api_v1

* Complete descending timestamp order queries for functions

* Fix functions query timings to match web_api_v1 ones

* Add MQTT message collector

Squashed commit of the following:

commit dbe515372ee04880b1841ef7800abe9385b12e1c
Author: Dim-P <dimitris1703@gmail.com>
Date:   Mon Aug 21 15:18:46 2023 +0100

    Update README.md with MQTT information

commit c0b5dbcb7cdef8c6fbd5e72e7bdd08957a0fd3de
Author: Dim-P <dimitris1703@gmail.com>
Date:   Mon Aug 21 14:59:36 2023 +0100

    Tidy up before merge

commit 9a69c4f17eac858532918a8f850a770b12710f80
Author: Dim-P <dimitris1703@gmail.com>
Date:   Mon Aug 21 12:54:33 2023 +0100

    Fix issue with duplicate Log_Source_Path in DB, introduced in commit e417af3

commit 48213e9713216d62fca8a5bc1bbc41a3883fdc14
Author: Dim-P <dimitris1703@gmail.com>
Date:   Sat Aug 19 05:05:36 2023 +0100

    WIP

commit e417af3b947f11bd61e3255306bc95953863998d
Author: Dim-P <dimitris1703@gmail.com>
Date:   Thu Aug 17 18:03:39 2023 +0100

    Update functions logsmanagement help output

* Inhibit Fluent Bit build warnings

* Fix missing allow_subpaths value in api_commands_v1[].

* Fix missing HTTP_RESP_BACKEND_FETCH_FAILED error

* Fix an enum print warning

* Remove systemd-devel requirement from README and fix codacy warnings

* Update Alpine versions for musl-fts-dev

* Update Fluent Bit to v2.1.8

Squashed commit of the following:

commit faf6fc4b7919cc2611124acc67cb1973ce705530
Author: Dim-P <dimitris1703@gmail.com>
Date:   Fri Aug 25 17:13:30 2023 +0100

    Fix wrong default CORE_STACK_SIZE on Alpine

commit a810238fe7830ce626f6d57245d68035b29723f7
Author: Dim-P <dimitris1703@gmail.com>
Date:   Fri Aug 25 00:40:02 2023 +0100

    Update Fluent Bit patches for musl

commit 8bed3b611dba94a053e22c2b4aa1d46f7787d9b4
Author: Dim-P <dimitris1703@gmail.com>
Date:   Thu Aug 24 21:54:38 2023 +0100

    Fix an edge case crash when web log method is '-'

commit b29b48ea230363142697f9749508cd926e18ee19
Author: Dim-P <dimitris1703@gmail.com>
Date:   Thu Aug 24 16:26:13 2023 +0100

    Disable FLB_OUT_CALYPTIA to fix Alpine dlsym() error

commit eabe0d0523ffe98ff881675c21b0763a49c05f16
Author: Dim-P <dimitris1703@gmail.com>
Date:   Tue Aug 22 21:25:54 2023 +0100

    Add 'use inotify = no' troubleshooting Q&A in README

commit 7f7ae85bdb0def63b4fc05ab88f6572db948e0e7
Author: Dim-P <dimitris1703@gmail.com>
Date:   Tue Aug 22 18:06:36 2023 +0100

    Update README.md links to latest version

commit 610c5ac7b920d4a1dfe364ad48f1ca14a0acc346
Author: Dim-P <dimitris1703@gmail.com>
Date:   Tue Aug 22 16:23:30 2023 +0100

    Update flb_parser_create() definition

commit f99608ff524b6f3462264e626a1073f9c2fdfdf5
Author: Dim-P <dimitris1703@gmail.com>
Date:   Tue Aug 22 16:23:04 2023 +0100

    Add new config.cmake options

commit 446b0d564626055a0a125f525d0bd3754184b830
Author: Dim-P <dimitris1703@gmail.com>
Date:   Tue Aug 22 12:21:25 2023 +0100

    Update Fluent Bit submodule to v2.1.8

* Add logs_management_unittest() to CI 'unittest'

* Remove obsolete query testing files

* Patch Fluent Bit log format to match netdata's format

* Update README with instructions on how to monitor Podman events logs

* Fix core dump in case of flb_lib_path dlopen()

* Fix some potential compiler warnings

* Fix queries crash if logs manag engine not running

* Much faster termination of LOGS MANAGEMENT

* Add facets support and other minor fixes.

logsmanagement_function_execute_cb() is replaced by
logsmanagement_function_facets() which adds facets support to logs
management queries.

Internal query results header now includes additional fields
(log_source, log_type, basename, filename, chartname) that are used as facets.

Queries now support timeout as a query parameter.

A web log timestamp bug is fixed (by using timegm() instead of mktime(); see the sketch below).

The web_api_v1 logsmanagement API is now only available for debugging.
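
For the timegm()/mktime() point, the difference is how a broken-down time parsed from a web log line is converted to epoch seconds. A rough sketch under assumed formats (the timestamp layout and helper name are illustrative, not the actual parser code):

```c
#define _GNU_SOURCE          /* for strptime() and timegm() on glibc */
#include <stdio.h>
#include <time.h>

/* Parse a web-log timestamp such as "27/Nov/2023:16:55:14 +0000".
 * mktime() interprets struct tm in the host's local timezone (and applies DST),
 * which can silently shift the result; timegm() treats the fields as UTC, after
 * which the log's own numeric timezone offset is applied explicitly. */
static time_t parse_web_log_timestamp(const char *s) {
    struct tm tm = {0};
    long tz_off_sec = 0;

    char *rest = strptime(s, "%d/%b/%Y:%H:%M:%S", &tm);
    if (!rest)
        return (time_t)-1;

    int tz_hhmm = 0;
    if (sscanf(rest, " %d", &tz_hhmm) == 1)
        tz_off_sec = (tz_hhmm / 100) * 3600 + (tz_hhmm % 100) * 60;

    return timegm(&tm) - tz_off_sec;  /* not mktime(&tm): avoids local-TZ/DST shifts */
}
```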

Squashed commit of the following:

commit 32cf0381283029d793ec3af30d96e6cd77ee9149
Author: Dim-P <dimitris1703@gmail.com>
Date:   Tue Sep 19 16:21:32 2023 +0300

    Tidy up

commit f956b5846451c6b955a150b5d071947037e935f0
Author: Dim-P <dimitris1703@gmail.com>
Date:   Tue Sep 19 13:30:54 2023 +0300

    Add more accepted params. Add data_only option. Add if_modified_since option.

commit 588c2425c60dcdd14349b7b346467dba32fda4e9
Author: Dim-P <dimitris1703@gmail.com>
Date:   Mon Sep 18 18:39:50 2023 +0300

    Add timeout to queries

commit da0f055fc47a36d9af4b7cc4cefb8eb6630e36d9
Author: Dim-P <dimitris1703@gmail.com>
Date:   Thu Sep 14 19:17:16 2023 +0300

    Fix histogram

commit 7149890974e0d26420ec1c5cfe1023801dc973fa
Author: Dim-P <dimitris1703@gmail.com>
Date:   Thu Sep 14 17:58:52 2023 +0300

    Add keyword query using simple patterns and fix descending timestamp values

commit 0bd068c5a76e694b876027e9fa5af6f333ab825b
Author: Dim-P <dimitris1703@gmail.com>
Date:   Thu Sep 14 13:54:05 2023 +0300

    Add basename, filename, chartname as facets

commit 023c2b5f758b2479a0e48da575cd59500a1373b6
Author: Dim-P <dimitris1703@gmail.com>
Date:   Thu Sep 14 13:26:06 2023 +0300

    Add info and sources functions options

commit ab4d555b7d445f7291af474847bd9177d3726a76
Author: Dim-P <dimitris1703@gmail.com>
Date:   Thu Sep 14 12:54:37 2023 +0300

    Fix facet id filter

commit a69c9e2732f5a6da1764bb57d1c06d8d65979225
Author: Dim-P <dimitris1703@gmail.com>
Date:   Thu Sep 14 12:07:13 2023 +0300

    WIP: Add facet id filters

commit 3c02b5de81fa8a20c712863c347539a52936ddd8
Author: Dim-P <dimitris1703@gmail.com>
Date:   Tue Sep 12 18:19:17 2023 +0300

    Add log source and log type to circ buff query results header

commit 8ca98672c4911c126e50f3cbdd69ac363abdb33d
Author: Dim-P <dimitris1703@gmail.com>
Date:   Tue Sep 12 18:18:13 2023 +0300

    Fix logsmanagement facet function after master rebasing

commit 3f1517ad56cda2473a279a8d130bec869fc2cbb8
Author: Dim-P <dimitris1703@gmail.com>
Date:   Tue Sep 12 18:14:25 2023 +0300

    Restrict /logsmanagement to ACL_DEV_OPEN_ACCESS only

commit 8ca98d69b08d006c682997268d5d2523ddde6be0
Author: Dim-P <dimitris1703@gmail.com>
Date:   Tue Sep 12 14:40:22 2023 +0300

    Fix incorrectly parsed timestamps due to DST

commit f9b0848037b29c7fcc46da951ca5cd9eb129066f
Author: Dim-P <dimitris1703@gmail.com>
Date:   Mon Sep 11 13:42:18 2023 +0300

    Add logs_management_meta object to facet query results

commit babc978f6c97107aaf8b337d8d31735d61761b6a
Author: Dim-P <dimitris1703@gmail.com>
Date:   Mon Sep 11 13:03:52 2023 +0300

    Query all sources if no arguments provided

commit 486d56de87af56aae6c0dc5d165341418222ce8b
Author: Dim-P <dimitris1703@gmail.com>
Date:   Thu Sep 7 18:38:04 2023 +0300

    Add log_source and log_type (only for DB logs) as facets. Add relative time support

commit b564c12843d355c4da6436af358d5f352cb58bfe
Author: Dim-P <dimitris1703@gmail.com>
Date:   Thu Sep 7 13:47:20 2023 +0300

    Working facet with descending timestamps

commit 68c6a5c64e8425cf28ec16adfb0c50289caa82a9
Author: Dim-P <dimitris1703@gmail.com>
Date:   Wed Sep 6 01:55:51 2023 +0300

    WIP

* Fix linking errors

* Convert logs management to external plugin.

Squashed commit of the following:

commit 16da6ba70ebde0859aed734087f04af497ce3a77
Author: Dim-P <dimitris1703@gmail.com>
Date:   Tue Oct 24 18:44:12 2023 +0100

    Use higher value of update every from netdata.conf or logsmanagement.d.conf

commit 88cc3497c403e07686e9fc0876ebb0c610a1404c
Author: Dim-P <dimitris1703@gmail.com>
Date:   Tue Oct 24 18:43:02 2023 +0100

    Tidy up

commit c3fca57aac169842637d210269519612b1a91e28
Author: Dim-P <dimitris1703@gmail.com>
Date:   Tue Oct 24 18:02:04 2023 +0100

    Use external update_every from agent, if available

commit f7470708ba82495b03297cdf8962a09b16617ddd
Author: Dim-P <dimitris1703@gmail.com>
Date:   Tue Oct 24 17:40:46 2023 +0100

    Re-enable debug logs

commit b34f5ac6a2228361ab41df7d7e5e713f724368c0
Author: Dim-P <dimitris1703@gmail.com>
Date:   Tue Oct 24 15:49:20 2023 +0100

    Remove old API calls from web_api_v1.c/h

commit 7fbc1e699a7785ec837233b9562199ee6c7684da
Author: Dim-P <dimitris1703@gmail.com>
Date:   Tue Oct 24 15:32:04 2023 +0100

    Add proper termination of stats charts thread

commit 4c0fc05c8b14593bd7a0aa68f75a8a1205e04db4
Author: Dim-P <dimitris1703@gmail.com>
Date:   Tue Oct 24 15:31:36 2023 +0100

    Add tests for logsmanag_config functions

commit 4dfdacb55707ab46ed6c2d5ce538ac012574b27e
Author: Dim-P <dimitris1703@gmail.com>
Date:   Mon Oct 23 22:01:19 2023 +0100

    Remove unused headers from logsmanagement.c

commit b324ef396207c5c32e40ea9ad462bf374470b230
Author: Dim-P <dimitris1703@gmail.com>
Date:   Mon Oct 23 21:56:26 2023 +0100

    Remove inline from get_X_dir() functions

commit e9656e8121b66cd7ef8b5daaa5d27a134427aa35
Author: Dim-P <dimitris1703@gmail.com>
Date:   Mon Oct 23 21:50:32 2023 +0100

    Proper termination when a signal is received

commit b09eec147bdeffae7b268b6335f6ba89f084e050
Author: Dim-P <dimitris1703@gmail.com>
Date:   Mon Oct 23 20:12:13 2023 +0100

    Refactor logs management config code in separate source files

commit 014b46a5008fd296f7d25854079c518d018abdec
Author: Dim-P <dimitris1703@gmail.com>
Date:   Mon Oct 23 14:54:47 2023 +0100

    Fix p_file_info_destroy() crash

commit e0bdfd182513bb8d5d4b4b5b8a4cc248ccf2d64e
Author: Dim-P <dimitris1703@gmail.com>
Date:   Mon Oct 23 14:18:27 2023 +0100

    Code refactoring and cleanup

commit 6a61cb6e2fd3a535db150b01d9450f44b3e27b30
Author: Dim-P <dimitris1703@gmail.com>
Date:   Fri Oct 20 14:08:43 2023 +0100

    Fix 'source:all' queries

commit 45b516aaf819ac142353e323209b7d01e487393f
Author: Dim-P <dimitris1703@gmail.com>
Date:   Thu Oct 19 21:51:05 2023 +0100

    Working 'source:...' queries and regular data queries (but not 'source:all')

commit 8064b0ee71c63da9803f79424802f860e96326e5
Author: Dim-P <dimitris1703@gmail.com>
Date:   Thu Oct 19 15:34:23 2023 +0100

    Fix issue due to p_file_info_destroy()

commit a0aacc9cd00cea60218c9bfd2b9f164918a1e3de
Author: Dim-P <dimitris1703@gmail.com>
Date:   Tue Oct 17 22:06:34 2023 +0100

    Work on facet API changes

commit 480584ff9040c07e996b14efb4d21970a347633f
Author: Dim-P <dimitris1703@gmail.com>
Date:   Mon Oct 16 21:43:06 2023 +0100

    Add stats charts, running as separate thread

commit 34d582dbe4bf2d8d048afab41681e337705bc611
Author: Dim-P <dimitris1703@gmail.com>
Date:   Mon Oct 16 16:24:02 2023 +0100

    Add SSL cipher charts

commit ced27ee4e2c981d291f498244f2eef2556a074fb
Author: Dim-P <dimitris1703@gmail.com>
Date:   Sun Oct 15 21:33:29 2023 +0100

    Add Response code family, Response code, Response code type, SSL protocol charts

commit 40c4a1d91892d49b1e4e18a1c3c43258ded4014d
Author: Dim-P <dimitris1703@gmail.com>
Date:   Sat Oct 14 00:48:48 2023 +0100

    Add more web log charts

commit 890ed3ff97153dd18d15df2d1b57a181bc498ca8
Author: Dim-P <dimitris1703@gmail.com>
Date:   Fri Oct 13 22:14:11 2023 +0100

    Add web log vhosts and ports charts

commit 84733b6b1d353aff70687603019443610a8500c3
Author: Dim-P <dimitris1703@gmail.com>
Date:   Thu Oct 12 21:40:16 2023 +0100

    Add systemd charts

commit 14673501e8f48560956f53d5b670bbe801b8f2ae
Author: Dim-P <dimitris1703@gmail.com>
Date:   Wed Oct 11 00:28:43 2023 +0100

    Add MQTT charts

commit 366eb63b0a27dde6f0f8ba65120f34c18c1b21fd
Author: Dim-P <dimitris1703@gmail.com>
Date:   Tue Oct 10 21:46:19 2023 +0100

    Complete kmsg changes. Reduce mem usage. Fix a dictionary key size bug

commit 3d0216365a526ffbc9ce13a20c45447bfccb47d9
Author: Dim-P <dimitris1703@gmail.com>
Date:   Tue Oct 10 19:18:41 2023 +0100

    Add kmsg Subsystem charts

commit e61af4bb130a5cf5a5a78133f1e44b2b4c457b24
Author: Dim-P <dimitris1703@gmail.com>
Date:   Tue Oct 10 16:21:29 2023 +0100

    Fix bug of wrong kmsg timestamps in case of use_log_timestamp == 0

commit 03d22e0b26bddf249aab431a4f977bbd5cde98ca
Author: Dim-P <dimitris1703@gmail.com>
Date:   Tue Oct 10 16:20:47 2023 +0100

    Add kmsg charts, except for Subsystem and Device

commit f60b0787537a21ed3c4cea5101fcddc50f3bc55a
Author: Dim-P <dimitris1703@gmail.com>
Date:   Tue Oct 10 13:12:13 2023 +0100

    Initialise all docker events chart dimensions at startup

commit 5d873d3439abaf3768530cb5b72c6b4ef6565353
Author: Dim-P <dimitris1703@gmail.com>
Date:   Tue Oct 10 00:53:35 2023 +0100

    WIP: Add Docker events logs

commit 2cc3d6d98f58fc3ab67a8da3014210b14d0926a1
Author: Dim-P <dimitris1703@gmail.com>
Date:   Mon Oct 9 18:52:27 2023 +0100

    Use macros for num_of_logs_charts and custom_charts functions

commit fbd48ad3c9af674601238990d74192427475f2e3
Author: Dim-P <dimitris1703@gmail.com>
Date:   Mon Oct 9 18:26:17 2023 +0100

    Refactor custom charts code for clarity and speed

commit a31d80b5dc91161c0d74b10d00bc4fd1e6da7965
Author: Dim-P <dimitris1703@gmail.com>
Date:   Thu Oct 5 23:58:27 2023 +0100

    Add first working iteration of custom charts

commit b1e4ab8a460f4b4c3e2804e2f775787d21fbee45
Author: Dim-P <dimitris1703@gmail.com>
Date:   Thu Oct 5 23:57:27 2023 +0100

    Add more custom charts for Netdata error.log

commit f1b7605e564da3e297942f073593cdd4c21f88e1
Author: Dim-P <dimitris1703@gmail.com>
Date:   Thu Oct 5 20:39:40 2023 +0100

    Convert collected_logs_* chart updates to macros

commit 1459bc2b8bcd5ba21e024b10a8a5101048938f71
Author: Dim-P <dimitris1703@gmail.com>
Date:   Thu Oct 5 19:11:54 2023 +0100

    Use rrdset_timed_done() instead of duration_since_last_update for correct chart timings

commit 876854c6ee7586a3eb9fdbf795bcc17a5fd1e6ad
Author: Dim-P <dimitris1703@gmail.com>
Date:   Tue Oct 3 21:53:14 2023 +0100

    Fix some bugs in chart updates

commit ae87508485499984bcb9b72bbc7d249c4168b380
Author: Dim-P <dimitris1703@gmail.com>
Date:   Tue Oct 3 21:32:55 2023 +0100

    Functioning generic_chart_init() and generic_chart_update()

commit 982a9c4108dbea9571c785b5ff8a9d1e5472066c
Author: Dim-P <dimitris1703@gmail.com>
Date:   Thu Sep 28 23:32:52 2023 +0100

    Add support for multiple .conf files. Add stock examples.

commit 8e8abd0731227eb3fb3c6bcd811349575160799e
Author: Dim-P <dimitris1703@gmail.com>
Date:   Thu Sep 28 17:38:30 2023 +0100

    Add support for logsmanagement.d/default.conf

commit 1bf0732217b1d9e9959e1507ea96fc2c92ffb2ff
Author: Dim-P <dimitris1703@gmail.com>
Date:   Thu Sep 28 14:31:03 2023 +0100

    Add capabilities. Fix paths in logsmanagement.d.conf

commit a849d5b405bb4e5d770726fe99413a4efa7df274
Author: Dim-P <dimitris1703@gmail.com>
Date:   Tue Sep 26 23:06:31 2023 +0100

    Change logs_manag_config_load()

commit b0d1783b996286cd87e0832bfb74c29a845d61fc
Author: Dim-P <dimitris1703@gmail.com>
Date:   Tue Sep 26 15:35:30 2023 +0100

    Working unit tests and argument parsing

commit 6da1b4267a4d58d3a7cbcca9507afe8158a2e324
Author: Dim-P <dimitris1703@gmail.com>
Date:   Fri Sep 22 00:32:47 2023 +0300

    Build logs-management.plugin successfully

commit 9e30efe0422e4941f99cc66998d9f42e00a24676
Author: Dim-P <dimitris1703@gmail.com>
Date:   Thu Sep 21 16:13:21 2023 +0300

    Fix print format specifier in web_client_api_request_v1_logsmanagement()

* Modify CODEOWNERS

* Update README.md

Fix indentation

* Change snprintfz() to strncpyz() in circ_buff_search(). Change remaining 'chart_name' to 'chartname'.

* Replace SQLite version function with macro

* Fix some codacy warnings

* Update README.md

* Update Fluent Bit to v2.1.10

* Remove some comments

* Fix Fluent Bit shared library linking for armv7l and FreeBSD

* Remove compression source files

* Add prefix to rrd_api.h functions

* Add more unit tests

* Fix kmsg capabilities

* Separate kmsg and systemd default paths

* Fix some memory leaks and better termination of DB

* Add iterative queries if quota is exceeded

* Fix centos7 builds

* Fix issue where SYSTEMD timestamps are not parsed

* Fix logs management packaging.

* Fix typo in DEB control file.

* Fix indentation and missing new line at EOF

* Clean up functions and update help

* Fix 400 error when no queryable sources are available

* Fix if_modified_since. Add FACET_MAX_VALUE_LENGTH

* Add delta parameter and use anchor points in queries

* Fix CodeQL #182 warning

* Fix packaging issues.

* Fix postinstall script for DEB packages.

* Improve plugin shutdown speed

* Fix docker events chart grouping

* Fix functions evloop threads not terminating upon shutdown

* Fix coverity issues

* Fix logging

* Replace 'Netdata error.log' with 'Netdata daemon.log' in 'default.conf'

* Remove 'enabled = yes/no' config in logsmanagement.d.conf

* Remove 'enabled = X' unused config from logsmanagement.d.conf

---------

Co-authored-by: Austin S. Hemmelgarn <austin@netdata.cloud>
Author: Dimitris P (committed via GitHub)
Date: 2023-11-27 16:55:14 +00:00
Commit: 4e512411ec (parent: 1f0164ede4)
99 changed files with 14778 additions and 58 deletions

.github/CODEOWNERS

@ -32,6 +32,7 @@ system/ @Ferroin @tkatsoulas
tests/ @Ferroin @vkalintiris @tkatsoulas
web/ @thiagoftsm @vkalintiris
web/gui/ @novykh
logsmanagement/ @Dim-P @thiagoftsm
# Ownership by filetype (overwrites ownership by directory)
*.am @Ferroin @tkatsoulas

.github/labeler.yml

@ -153,3 +153,6 @@ area/tests:
area/web:
- web/**
area/logs-management:
- logsmanagement/**


@ -234,6 +234,7 @@ jobs:
./.git/*
packaging/makeself/makeself.sh
packaging/makeself/makeself-header.sh
./fluent-bit/*
yamllint:
name: yamllint

.gitignore

@ -85,6 +85,9 @@ debugfs.plugin
systemd-journal.plugin
!systemd-journal.plugin/
logs-management.plugin
!logs-management.plugin/
# protoc generated files
*.pb.cc
*.pb.h
@ -153,6 +156,8 @@ collectors/ioping.plugin/ioping.plugin
collectors/go.d.plugin
web/netdata-switch-dashboard.sh
logsmanagement/stress_test/stress_test
# installer generated files
/netdata-uninstaller.sh
/netdata-updater.sh

.gitmodules

@ -13,3 +13,8 @@
path = web/server/h2o/libh2o
url = https://github.com/h2o/h2o.git
ignore = untracked
[submodule "fluent-bit"]
path = fluent-bit
url = https://github.com/fluent/fluent-bit.git
shallow = true
ignore = dirty


@ -1074,6 +1074,44 @@ ELSE()
message(STATUS "ML: disabled")
ENDIF()
set(LOGSMANAGEMENT_FILES
logsmanagement/rrd_api/rrd_api_docker_ev.c
logsmanagement/rrd_api/rrd_api_docker_ev.h
logsmanagement/rrd_api/rrd_api_generic.c
logsmanagement/rrd_api/rrd_api_generic.h
logsmanagement/rrd_api/rrd_api_kernel.c
logsmanagement/rrd_api/rrd_api_kernel.h
logsmanagement/rrd_api/rrd_api_mqtt.c
logsmanagement/rrd_api/rrd_api_mqtt.h
logsmanagement/rrd_api/rrd_api_stats.c
logsmanagement/rrd_api/rrd_api_stats.h
logsmanagement/rrd_api/rrd_api_systemd.c
logsmanagement/rrd_api/rrd_api_systemd.h
logsmanagement/rrd_api/rrd_api_web_log.c
logsmanagement/rrd_api/rrd_api_web_log.h
logsmanagement/rrd_api/rrd_api.h
logsmanagement/unit_test/unit_test.c
logsmanagement/unit_test/unit_test.h
logsmanagement/circular_buffer.c
logsmanagement/circular_buffer.h
logsmanagement/db_api.c
logsmanagement/db_api.h
logsmanagement/file_info.h
logsmanagement/flb_plugin.c
logsmanagement/flb_plugin.h
logsmanagement/functions.c
logsmanagement/functions.h
logsmanagement/helper.h
logsmanagement/defaults.h
logsmanagement/logsmanag_config.c
logsmanagement/logsmanag_config.h
logsmanagement/logsmanagement.c
logsmanagement/parser.c
logsmanagement/parser.h
logsmanagement/query.c
logsmanagement/query.h
)
set(NETDATA_FILES
collectors/all.h
${DAEMON_FILES}
@ -1112,6 +1150,13 @@ add_definitions(
-DVARLIB_DIR="/var/lib/netdata"
)
# -----------------------------------------------------------------------------
# logs management
IF(ENABLE_LOGSMANAGEMENT)
list(APPEND NETDATA_FILES ${LOGSMANAGEMENT_FILES})
ENDIF()
# -----------------------------------------------------------------------------
# kinesis exporting connector
@ -1748,7 +1793,6 @@ endif()
endif()
endif()
# generate config.h so that CMake becomes independent of automake
## netdata version


@ -114,6 +114,7 @@ SUBDIRS += \
web \
claim \
spawn \
logsmanagement \
$(NULL)
AM_CFLAGS = \
@ -349,6 +350,51 @@ LOG2JOURNAL_FILES = \
libnetdata/log/log2journal.c \
$(NULL)
LOGSMANAGEMENT_FILES = \
logsmanagement/circular_buffer.c \
logsmanagement/circular_buffer.h \
logsmanagement/db_api.c \
logsmanagement/db_api.h \
logsmanagement/defaults.h \
logsmanagement/file_info.h \
logsmanagement/flb_plugin.c \
logsmanagement/flb_plugin.h \
logsmanagement/functions.c \
logsmanagement/functions.h \
logsmanagement/helper.h \
logsmanagement/logsmanag_config.c \
logsmanagement/logsmanag_config.h \
logsmanagement/logsmanagement.c \
logsmanagement/parser.c \
logsmanagement/parser.h \
logsmanagement/query.c \
logsmanagement/query.h \
logsmanagement/rrd_api/rrd_api_docker_ev.c \
logsmanagement/rrd_api/rrd_api_docker_ev.h \
logsmanagement/rrd_api/rrd_api_generic.c \
logsmanagement/rrd_api/rrd_api_generic.h \
logsmanagement/rrd_api/rrd_api_kernel.c \
logsmanagement/rrd_api/rrd_api_kernel.h \
logsmanagement/rrd_api/rrd_api_mqtt.c \
logsmanagement/rrd_api/rrd_api_mqtt.h \
logsmanagement/rrd_api/rrd_api_stats.c \
logsmanagement/rrd_api/rrd_api_stats.h \
logsmanagement/rrd_api/rrd_api_systemd.c \
logsmanagement/rrd_api/rrd_api_systemd.h \
logsmanagement/rrd_api/rrd_api_web_log.c \
logsmanagement/rrd_api/rrd_api_web_log.h \
logsmanagement/rrd_api/rrd_api.h \
database/sqlite/sqlite3.c \
database/sqlite/sqlite3.h \
$(LIBNETDATA_FILES) \
$(NULL)
LOGSMANAGEMENT_TESTS_FILES = \
logsmanagement/unit_test/unit_test.c \
logsmanagement/unit_test/unit_test.h \
$(NULL)
CUPS_PLUGIN_FILES = \
collectors/cups.plugin/cups_plugin.c \
$(LIBNETDATA_FILES) \
@ -1342,6 +1388,17 @@ systemd_cat_native_LDADD = \
$(NETDATA_COMMON_LIBS) \
$(NULL)
if ENABLE_LOGSMANAGEMENT
plugins_PROGRAMS += logs-management.plugin
logs_management_plugin_SOURCES = $(LOGSMANAGEMENT_FILES)
if ENABLE_LOGSMANAGEMENT_TESTS
logs_management_plugin_SOURCES += $(LOGSMANAGEMENT_TESTS_FILES)
endif
logs_management_plugin_LDADD = \
$(NETDATA_COMMON_LIBS) \
$(NULL)
endif
if ENABLE_PLUGIN_EBPF
plugins_PROGRAMS += ebpf.plugin
ebpf_plugin_SOURCES = $(EBPF_PLUGIN_FILES)


@ -48,9 +48,10 @@ RUN find . -type f >/opt/netdata/manifest
RUN CFLAGS="-Og -g -ggdb -Wall -Wextra -Wformat-signedness -DNETDATA_INTERNAL_CHECKS=1\
-DNETDATA_VERIFY_LOCKS=1 ${EXTRA_CFLAGS}" ./netdata-installer.sh --require-cloud --disable-lto
RUN ln -sf /dev/stdout /var/log/netdata/access.log
RUN ln -sf /dev/stdout /var/log/netdata/debug.log
RUN ln -sf /dev/stderr /var/log/netdata/error.log
RUN ln -sf /dev/stdout /var/log/netdata/access.log && \
ln -sf /dev/stdout /var/log/netdata/debug.log && \
ln -sf /dev/stderr /var/log/netdata/error.log && \
ln -sf /dev/stdout /var/log/netdata/fluentbit.log
RUN printf >/opt/netdata/source/gdb_batch '\
set args -D \n\


@ -48,9 +48,10 @@ RUN find . -type f >/opt/netdata/manifest
RUN CFLAGS="-Og -g -ggdb -Wall -Wextra -Wformat-signedness -DNETDATA_INTERNAL_CHECKS=1\
-DNETDATA_VERIFY_LOCKS=1 ${EXTRA_CFLAGS}" ./netdata-installer.sh --require-cloud --disable-lto
RUN ln -sf /dev/stdout /var/log/netdata/access.log
RUN ln -sf /dev/stdout /var/log/netdata/debug.log
RUN ln -sf /dev/stderr /var/log/netdata/error.log
RUN ln -sf /dev/stdout /var/log/netdata/access.log && \
ln -sf /dev/stdout /var/log/netdata/debug.log && \
ln -sf /dev/stderr /var/log/netdata/error.log && \
ln -sf /dev/stdout /var/log/netdata/fluentbit.log
RUN rm /var/lib/netdata/registry/netdata.public.unique.id


@ -47,8 +47,9 @@ RUN find . -type f >/opt/netdata/manifest
RUN CFLAGS="-O1 -ggdb -Wall -Wextra -Wformat-signedness -DNETDATA_INTERNAL_CHECKS=1\
-DNETDATA_VERIFY_LOCKS=1 ${EXTRA_CFLAGS}" ./netdata-installer.sh --disable-lto
RUN ln -sf /dev/stdout /var/log/netdata/access.log
RUN ln -sf /dev/stdout /var/log/netdata/debug.log
RUN ln -sf /dev/stderr /var/log/netdata/error.log
RUN ln -sf /dev/stdout /var/log/netdata/access.log && \
ln -sf /dev/stdout /var/log/netdata/debug.log && \
ln -sf /dev/stderr /var/log/netdata/error.log && \
ln -sf /dev/stdout /var/log/netdata/fluentbit.log
CMD ["/usr/sbin/netdata", "-D"]


@ -29,9 +29,10 @@ RUN find . -type f >/opt/netdata/manifest
RUN CFLAGS="-O1 -ggdb -Wall -Wextra -Wformat-signedness -DNETDATA_INTERNAL_CHECKS=1\
-DNETDATA_VERIFY_LOCKS=1 ${EXTRA_CFLAGS}" ./netdata-installer.sh --disable-lto
RUN ln -sf /dev/stdout /var/log/netdata/access.log
RUN ln -sf /dev/stdout /var/log/netdata/debug.log
RUN ln -sf /dev/stderr /var/log/netdata/error.log
RUN ln -sf /dev/stdout /var/log/netdata/access.log && \
ln -sf /dev/stdout /var/log/netdata/debug.log && \
ln -sf /dev/stderr /var/log/netdata/error.log && \
ln -sf /dev/stdout /var/log/netdata/fluentbit.log
RUN rm /var/lib/netdata/registry/netdata.public.unique.id


@ -401,6 +401,11 @@
#define NETDATA_CHART_PRIO_STATSD_PRIVATE 90000 // many charts
// Logs Management
#define NETDATA_CHART_PRIO_LOGS_BASE 95000 // many charts
#define NETDATA_CHART_PRIO_LOGS_STATS_BASE 160000 // logsmanagement stats in "Netdata Monitoring"
// PCI
#define NETDATA_CHART_PRIO_PCI_AER 100000


@ -91,6 +91,7 @@ go.d.plugin: *go.d.plugin*
slabinfo.plugin: *slabinfo.plugin*
ebpf.plugin: *ebpf.plugin*
debugfs.plugin: *debugfs.plugin*
logs-management.plugin: *logs-management.plugin*
# agent-service-discovery
agent_sd: agent_sd


@ -239,6 +239,13 @@ void *pluginsd_main(void *ptr)
// disable some plugins by default
config_get_boolean(CONFIG_SECTION_PLUGINS, "slabinfo", CONFIG_BOOLEAN_NO);
config_get_boolean(CONFIG_SECTION_PLUGINS, "logs-management",
#if defined(LOGS_MANAGEMENT_STRESS_TEST)
CONFIG_BOOLEAN_YES
#else
CONFIG_BOOLEAN_NO
#endif
);
// it crashes (both threads) on Alpine after we made it multi-threaded
// works with "--device /dev/ipmi0", but this is not default
// see https://github.com/netdata/netdata/pull/15564 for details


@ -479,8 +479,9 @@ static inline PARSER_RC pluginsd_begin(char **words, size_t num_words, PARSER *p
}
static inline PARSER_RC pluginsd_end(char **words, size_t num_words, PARSER *parser) {
UNUSED(words);
UNUSED(num_words);
char *tv_sec = get_word(words, num_words, 1);
char *tv_usec = get_word(words, num_words, 2);
char *pending_rrdset_next = get_word(words, num_words, 3);
RRDHOST *host = pluginsd_require_scope_host(parser, PLUGINSD_KEYWORD_END);
if(!host) return PLUGINSD_DISABLE_PLUGIN(parser, NULL, NULL);
@ -494,9 +495,15 @@ static inline PARSER_RC pluginsd_end(char **words, size_t num_words, PARSER *par
pluginsd_clear_scope_chart(parser, PLUGINSD_KEYWORD_END);
parser->user.data_collections_count++;
struct timeval now;
now_realtime_timeval(&now);
rrdset_timed_done(st, now, /* pending_rrdset_next = */ false);
struct timeval tv = {
.tv_sec = (tv_sec && *tv_sec) ? str2ll(tv_sec, NULL) : 0,
.tv_usec = (tv_usec && *tv_usec) ? str2ll(tv_usec, NULL) : 0
};
if(!tv.tv_sec)
now_realtime_timeval(&tv);
rrdset_timed_done(st, tv, pending_rrdset_next && *pending_rrdset_next ? true : false);
return PARSER_RC_OK;
}


@ -77,6 +77,18 @@ AC_ARG_ENABLE(
,
[enable_plugin_systemd_journal="detect"]
)
AC_ARG_ENABLE(
[logsmanagement],
[AS_HELP_STRING([--disable-logsmanagement], [Disable logsmanagement @<:@default autodetect@:>@])],
,
[enable_logsmanagement="detect"]
)
AC_ARG_ENABLE(
[logsmanagement_tests],
[AS_HELP_STRING([--enable-logsmanagement-tests], [Enable logsmanagement tests @<:@default disabled@:>@])],
,
[enable_logsmanagement_tests="no"]
)
AC_ARG_ENABLE(
[plugin-cups],
[AS_HELP_STRING([--enable-plugin-cups], [enable cups plugin @<:@default autodetect@:>@])],
@ -736,7 +748,7 @@ AC_CHECK_SIZEOF(void *)
if test "$ac_cv_sizeof_void_p" = 8; then
AC_MSG_RESULT(Detected 64-bit Build Environment)
LIBJUDY_CFLAGS="$LIBJUDY_CFLAGS -DJU_64BIT"
else
else
AC_MSG_RESULT(Detected 32-bit Build Environment)
LIBJUDY_CFLAGS="$LIBJUDY_CFLAGS -UJU_64BIT"
fi
@ -938,6 +950,27 @@ if test "${enable_pedantic}" = "yes"; then
CFLAGS="${CFLAGS} -pedantic -Wall -Wextra -Wno-long-long"
fi
# -----------------------------------------------------------------------------
# dlsym check
AC_MSG_CHECKING(whether we can use dlsym)
OLD_LIBS="${LIBS}"
LIBS="-ldl"
AC_LINK_IFELSE([AC_LANG_SOURCE([[
#include <dlfcn.h>
static void *(*libc_malloc)(size_t);
int main() {
libc_malloc = dlsym(RTLD_NEXT, "malloc");
}
]])], CAN_USE_DLSYM=yes, CAN_USE_DLSYM=no)
LIBS="${OLD_LIBS}"
AC_MSG_RESULT($CAN_USE_DLSYM)
if test "x$CAN_USE_DLSYM" = xyes; then
AC_DEFINE([HAVE_DLSYM], [1], [dlsym usability])
OPTIONAL_DL_LIBS="-ldl"
fi
AC_SUBST([OPTIONAL_DL_LIBS])
# -----------------------------------------------------------------------------
# memory allocation library
@ -1163,7 +1196,6 @@ fi
AC_MSG_RESULT([${enable_plugin_apps}])
AM_CONDITIONAL([ENABLE_PLUGIN_APPS], [test "${enable_plugin_apps}" = "yes"])
# -----------------------------------------------------------------------------
# freeipmi.plugin - libipmimonitoring
@ -1562,6 +1594,56 @@ if test "${build_ml}" = "yes"; then
fi
# -----------------------------------------------------------------------------
# logsmanagement
LIBS_BAK="${LIBS}"
# Check if submodules have not been fetched. Fail if Logs Management was explicitly requested.
AC_MSG_CHECKING([if git submodules are present for logs management functionality])
if test -f "fluent-bit/CMakeLists.txt"; then
AC_MSG_RESULT([yes])
have_logsmanagement_submodules="yes"
else
AC_MSG_RESULT([no])
have_logsmanagement_submodules="no"
fi
if test "${enable_logsmanagement}" != "no" -a "${have_logsmanagement_submodules}" = "no"; then
AC_MSG_WARN([Logs management cannot be built because the required git submodules are missing.])
fi
if test "${enable_logsmanagement}" != "no" -a "x$CAN_USE_DLSYM" = xno; then
AC_MSG_WARN([Logs management cannot be built because dlsym cannot be used.])
fi
# Decide if we should build Logs Management
if test "${enable_logsmanagement}" != "no" -a "${have_logsmanagement_submodules}" = "yes" -a "x$CAN_USE_DLSYM" = xyes; then
build_logsmanagement="yes"
else
build_logsmanagement="no"
fi
AM_CONDITIONAL([ENABLE_LOGSMANAGEMENT], [test "${build_logsmanagement}" = "yes"])
if test "${build_logsmanagement}" = "yes"; then
AC_DEFINE([ENABLE_LOGSMANAGEMENT], [1], [enable logs management functionality])
fi
# Decide if we should build Logs Management tests.
if test "${build_logsmanagement}" = "yes" -a "${enable_logsmanagement_tests}" = "yes"; then
build_logsmanagement_tests="yes"
else
build_logsmanagement_tests="no"
fi
AM_CONDITIONAL([ENABLE_LOGSMANAGEMENT_TESTS], [test "${build_logsmanagement_tests}" = "yes"])
if test "${build_logsmanagement_tests}" = "yes"; then
AC_DEFINE([ENABLE_LOGSMANAGEMENT_TESTS], [1], [logs management tests])
fi
LIBS="${LIBS_BAK}"
# -----------------------------------------------------------------------------
# debugfs.plugin
@ -1927,27 +2009,6 @@ AC_LANG_POP([C++])
# -----------------------------------------------------------------------------
AC_MSG_CHECKING(whether we can use dlsym)
OLD_LIBS="${LIBS}"
LIBS="-ldl"
AC_LINK_IFELSE([AC_LANG_SOURCE([[
#include <dlfcn.h>
static void *(*libc_malloc)(size_t);
int main() {
libc_malloc = dlsym(RTLD_NEXT, "malloc");
}
]])], CAN_USE_DLSYM=yes, CAN_USE_DLSYM=no)
LIBS="${OLD_LIBS}"
AC_MSG_RESULT($CAN_USE_DLSYM)
if test "x$CAN_USE_DLSYM" = xyes; then
AC_DEFINE([HAVE_DLSYM], [1], [dlsym usability])
OPTIONAL_DL_LIBS="-ldl"
fi
AC_SUBST([OPTIONAL_DL_LIBS])
# -----------------------------------------------------------------------------
AC_DEFINE_UNQUOTED([NETDATA_USER], ["${with_user}"], [use this user to drop privileged])
@ -2200,6 +2261,7 @@ AC_CONFIG_FILES([
web/server/static/Makefile
claim/Makefile
spawn/Makefile
logsmanagement/Makefile
])
AC_OUTPUT


@ -26,7 +26,9 @@ Build-Depends: debhelper (>= 9.20160709),
automake,
pkg-config,
curl,
protobuf-compiler
protobuf-compiler,
bison,
flex
Section: net
Priority: optional
Maintainer: Netdata Builder <bot@netdata.cloud>
@ -57,7 +59,8 @@ Conflicts: netdata-core,
netdata-web
Suggests: netdata-plugin-cups (= ${source:Version}),
netdata-plugin-freeipmi (= ${source:Version})
Recommends: netdata-plugin-systemd-journal (= ${source:Version})
Recommends: netdata-plugin-systemd-journal (= ${source:Version}),
netdata-plugin-logs-management (= ${source:Version})
Description: real-time charts for system monitoring
Netdata is a daemon that collects data in realtime (per second)
and presents a web site to view and analyze them. The presentation
@ -203,3 +206,13 @@ Conflicts: netdata (<< ${source:Version})
Description: The systemd-journal collector for the Netdata Agent
This plugin allows the Netdata Agent to present logs from the systemd
journal on Netdata Cloud or the local Agent dashboard.
Package: netdata-plugin-logs-management
Architecture: any
Depends: ${shlibs:Depends},
netdata (= ${source:Version})
Pre-Depends: libcap2-bin, adduser
Conflicts: netdata (<< ${source:Version})
Description: The logs-management plugin for the Netdata Agent
This plugin allows the Netdata Agent to collect logs from the system
and parse them to extract metrics.


@ -0,0 +1,17 @@
#!/bin/sh
set -e
case "$1" in
configure|reconfigure)
chown root:netdata /usr/libexec/netdata/plugins.d/logs-management.plugin
chmod 0750 /usr/libexec/netdata/plugins.d/logs-management.plugin
if ! setcap "cap_dac_read_search=eip cap_syslog=eip" /usr/libexec/netdata/plugins.d/logs-management.plugin; then
chmod -f 4750 /usr/libexec/netdata/plugins.d/logs-management.plugin
fi
;;
esac
#DEBHELPER#
exit 0


@ -0,0 +1,13 @@
#!/bin/sh
set -e
case "$1" in
install)
if ! getent group netdata > /dev/null; then
addgroup --quiet --system netdata
fi
;;
esac
#DEBHELPER#


@ -128,7 +128,17 @@ override_dh_install:
# Add systemd-journal plugin install rules
mkdir -p $(TOP)-plugin-systemd-journal/usr/libexec/netdata/plugins.d/
mv -f $(TEMPTOP)/usr/libexec/netdata/plugins.d/systemd-journal.plugin \
$(TOP)-plugin-systemd-journal/usr/libexec/netdata/plugins.d/systemd-journal.plugin; \
$(TOP)-plugin-systemd-journal/usr/libexec/netdata/plugins.d/systemd-journal.plugin
# Add logs-management plugin install rules
mkdir -p $(TOP)-plugin-logs-management/usr/libexec/netdata/plugins.d/
mv -f $(TEMPTOP)/usr/libexec/netdata/plugins.d/logs-management.plugin \
$(TOP)-plugin-logs-management/usr/libexec/netdata/plugins.d/logs-management.plugin
mkdir -p $(TOP)-plugin-logs-management/usr/lib/netdata/conf.d/
mv -f $(TEMPTOP)/usr/lib/netdata/conf.d/logsmanagement.d.conf \
$(TOP)-plugin-logs-management/usr/lib/netdata/conf.d/logsmanagement.d.conf
mv -f $(TEMPTOP)/usr/lib/netdata/conf.d/logsmanagement.d/ \
$(TOP)-plugin-logs-management/usr/lib/netdata/conf.d/logsmanagement.d/
# Set the rest of the software in the main package
#
@ -221,6 +231,9 @@ override_dh_fixperms:
# systemd-journal
chmod 4750 $(TOP)-plugin-systemd-journal/usr/libexec/netdata/plugins.d/systemd-journal.plugin
# systemd-journal
chmod 4750 $(TOP)-plugin-logs-management/usr/libexec/netdata/plugins.d/logs-management.plugin
override_dh_installlogrotate:
cp system/logrotate/netdata debian/netdata.logrotate
dh_installlogrotate


@ -101,6 +101,7 @@ typedef enum __attribute__((packed)) {
BIB_PLUGIN_SLABINFO,
BIB_PLUGIN_XEN,
BIB_PLUGIN_XEN_VBD_ERROR,
BIB_PLUGIN_LOGS_MANAGEMENT,
BIB_EXPORT_AWS_KINESIS,
BIB_EXPORT_GCP_PUBSUB,
BIB_EXPORT_MONGOC,
@ -911,6 +912,14 @@ static struct {
.json = "xen-vbd-error",
.value = NULL,
},
[BIB_PLUGIN_LOGS_MANAGEMENT] = {
.category = BIC_PLUGINS,
.type = BIT_BOOLEAN,
.analytics = "Logs Management",
.print = "Logs Management",
.json = "logs-management",
.value = NULL,
},
[BIB_EXPORT_MONGOC] = {
.category = BIC_EXPORTERS,
.type = BIT_BOOLEAN,
@ -1243,6 +1252,9 @@ __attribute__((constructor)) void initialize_build_info(void) {
#ifdef HAVE_XENSTAT_VBD_ERROR
build_info_set_status(BIB_PLUGIN_XEN_VBD_ERROR, true);
#endif
#ifdef ENABLE_LOGSMANAGEMENT
build_info_set_status(BIB_PLUGIN_LOGS_MANAGEMENT, true);
#endif
build_info_set_status(BIB_EXPORT_PROMETHEUS_EXPORTER, true);
build_info_set_status(BIB_EXPORT_GRAPHITE, true);


@ -3529,6 +3529,7 @@ static struct worker_utilization all_workers_utilization[] = {
{ .name = "TC", .family = "workers plugin tc", .priority = 1000000 },
{ .name = "TIMEX", .family = "workers plugin timex", .priority = 1000000 },
{ .name = "IDLEJITTER", .family = "workers plugin idlejitter", .priority = 1000000 },
{ .name = "LOGSMANAGPLG",.family = "workers plugin logs management", .priority = 1000000 },
{ .name = "RRDCONTEXT", .family = "workers contexts", .priority = 1000000 },
{ .name = "REPLICATION", .family = "workers replication sender", .priority = 1000000 },
{ .name = "SERVICE", .family = "workers service", .priority = 1000000 },

fluent-bit Submodule

@ -0,0 +1 @@
Subproject commit b19e9ce674de872640c00a697fa545b66df0628a

View File

@ -113,6 +113,10 @@ inline time_t now_realtime_sec(void) {
return now_sec(CLOCK_REALTIME);
}
inline msec_t now_realtime_msec(void) {
return now_usec(CLOCK_REALTIME) / USEC_PER_MS;
}
inline usec_t now_realtime_usec(void) {
return now_usec(CLOCK_REALTIME);
}

View File

@ -117,6 +117,7 @@ usec_t now_realtime_usec(void);
int now_monotonic_timeval(struct timeval *tv);
time_t now_monotonic_sec(void);
msec_t now_realtime_msec(void);
usec_t now_monotonic_usec(void);
int now_monotonic_high_precision_timeval(struct timeval *tv);
time_t now_monotonic_high_precision_sec(void);

View File

@ -217,4 +217,4 @@ typedef struct _connector_instance {
_CONNECTOR_INSTANCE *add_connector_instance(struct section *connector, struct section *instance);
#endif /* NETDATA_CONFIG_H */
#endif /* NETDATA_CONFIG_H */

View File

@ -214,3 +214,10 @@ void functions_evloop_add_function(struct functions_evloop_globals *wg, const ch
we->default_timeout = default_timeout;
DOUBLE_LINKED_LIST_APPEND_ITEM_UNSAFE(wg->expectations, we, prev, next);
}
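/* Cancel all worker threads and the reader thread of the given functions event loop. */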
void functions_evloop_cancel_threads(struct functions_evloop_globals *wg){
for(size_t i = 0; i < wg->workers ; i++)
netdata_thread_cancel(wg->worker_threads[i]);
netdata_thread_cancel(wg->reader_thread);
}

View File

@ -54,6 +54,7 @@ typedef void (*functions_evloop_worker_execute_t)(const char *transaction, char
struct functions_evloop_worker_job;
struct functions_evloop_globals *functions_evloop_init(size_t worker_threads, const char *tag, netdata_mutex_t *stdout_mutex, bool *plugin_should_exit);
void functions_evloop_add_function(struct functions_evloop_globals *wg, const char *function, functions_evloop_worker_execute_t cb, time_t default_timeout);
void functions_evloop_cancel_threads(struct functions_evloop_globals *wg);
#define pluginsd_function_result_begin_to_buffer(wb, transaction, code, content_type, expires) \

View File

@ -0,0 +1,32 @@
# SPDX-License-Identifier: GPL-3.0-or-later
AUTOMAKE_OPTIONS = subdir-objects
MAINTAINERCLEANFILES = $(srcdir)/Makefile.in
userlogsmanagconfigdir=$(configdir)/logsmanagement.d
# Explicitly install directories to avoid permission issues due to umask
install-exec-local:
$(INSTALL) -d $(DESTDIR)$(userlogsmanagconfigdir)
dist_libconfig_DATA = \
stock_conf/logsmanagement.d.conf \
$(NULL)
logsmanagconfigdir=$(libconfigdir)/logsmanagement.d
dist_logsmanagconfig_DATA = \
stock_conf/logsmanagement.d/default.conf \
stock_conf/logsmanagement.d/example_forward.conf \
stock_conf/logsmanagement.d/example_mqtt.conf \
stock_conf/logsmanagement.d/example_serial.conf \
stock_conf/logsmanagement.d/example_syslog.conf \
$(NULL)
dist_noinst_DATA = \
README.md \
stress_test/logrotate.conf \
stress_test/logs_query.html \
stress_test/run_stress_test.sh \
stress_test/stress_test.c \
$(NULL)

logsmanagement/README.md Normal file
View File

@ -0,0 +1,671 @@
# Logs Management
## Table of Contents
- [Summary](#summary)
- [Types of available log collectors](#collector-types)
- [Getting Started](#getting-started)
- [Package Requirements](#package-requirements)
- [General Configuration](#general-configuration)
- [Collector-specific Configuration](#collector-configuration)
- [Kernel logs (kmsg)](#collector-configuration-kmsg)
- [Systemd](#collector-configuration-systemd)
- [Docker events](#collector-configuration-docker-events)
- [Tail](#collector-configuration-tail)
- [Web log](#collector-configuration-web-log)
- [Syslog socket](#collector-configuration-syslog)
- [Serial](#collector-configuration-serial)
- [MQTT](#collector-configuration-mqtt)
- [Custom Charts](#custom-charts)
- [Streaming logs to Netdata](#streaming-in)
- [Example: Systemd log streaming](#streaming-systemd)
- [Example: Kernel log streaming](#streaming-kmsg)
- [Example: Generic log streaming](#streaming-generic)
- [Example: Docker Events log streaming](#streaming-docker-events)
- [Streaming logs from Netdata (exporting)](#streaming-out)
- [Troubleshooting](#troubleshooting)
<a name="summary"/>
## Summary
</a>
The Netdata logs management engine enables collection, processing, storage, streaming and querying of logs through the Netdata agent. The following pipeline gives a high-level overview of the stages that collected logs propagate through to achieve this:
![Logs management pipeline](https://github.com/netdata/netdata/assets/5953192/dd73382c-af4b-4840-a3fe-1ba5069304e8 "Logs management pipeline")
The [Fluent Bit](https://github.com/fluent/fluent-bit) project has been used as the logs collection and exporting / streaming engine, due to its stability and the variety of [collection (input) plugins](https://docs.fluentbit.io/manual/pipeline/inputs) that it offers. Each collected log record passes through the Fluent Bit engine first, before it gets buffered, parsed, compressed and (optionally) stored locally by the logs management engine. It can also be streamed to another Netdata or Fluent Bit instance (using Fluent Bit's [Forward](https://docs.fluentbit.io/manual/pipeline/outputs/forward) protocol), or exported using any other [Fluent Bit output](https://docs.fluentbit.io/manual/pipeline/outputs).
A bespoke circular buffering implementation has been used to maximize performance and optimize memory utilization. More technical details about how it works can be found [here](https://github.com/netdata/netdata/pull/13291#buffering).
To configure Netdata's logs management engine properly, please make sure you are aware of the following points first:
* One collection cycle (at max) occurs per `update every` interval (in seconds - minimum 1 sec) and any log records collected in a collection cycle are grouped together (for compression and performance purposes). As a result of this, a longer `update every` interval will reduce memory and disk space requirements.
* When collected logs contain parsable timestamps, these will be used to display metrics from parsed logs at the correct time in each chart, even if collection of said logs takes place *much* later than the time they were produced, by up to a configurable value of `update timeout` seconds. This mechanism ensures correct parsing and querying of delayed logs that contain parsable timestamps (such as streamed inputs or buffered log sources that write logs in batches), but the respective charts may lag behind by up to that timeout. If no parsable timestamp is found, the collection timestamp will be used instead (or the collector can be forced to always use the collection timestamp by setting `use log timestamp = no`). A short configuration sketch illustrating these options follows this list.
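As a minimal illustration of the points above, the snippet below sketches how these options might be set for a single log source in `logsmanagement.d/default.conf` (the section name, path and values are only examples):
```
[My service logs]
## Required settings
enabled = yes
log type = flb_tail
log path = /var/log/myservice.log
## Collect at most once every 3 seconds, to reduce memory and disk space requirements
update every = 3
## Allow charts to lag up to 30 seconds behind, while waiting for delayed logs
update timeout = 30
## Use timestamps parsed from the log records themselves, whenever available
use log timestamp = auto
```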
<a name="collector-types"/>
### Types of available log collectors
</a>
The following log collectors are supported at the moment. The table will be updated as more collectors are added:
| Collector | Log type | Description |
| ------------ | ------------ | ------------ |
| kernel logs (kmsg) | `flb_kmsg` | Collection of new kernel ring buffer logs.|
| systemd | `flb_systemd` | Collection of journald logs.|
| docker events | `flb_docker_events` | Collection of docker events logs, similar to executing the `docker events` command.|
| tail | `flb_tail` | Collection of new logs from files by "tailing" them, similar to `tail -f`.|
| web log | `flb_web_log` | Collection of Apache or Nginx access logs.|
| syslog socket | `flb_syslog` | Collection of RFC-3164 syslog logs by creating listening sockets.|
| serial | `flb_serial` | Collection of logs from a serial interface.|
| mqtt | `flb_mqtt` | Collection of MQTT messages over a TCP connection.|
<a name="getting-started"/>
## Getting Started
</a>
Since version `XXXXX`, Netdata is distributed with logs management functionality as an external plugin, but it is disabled by default and must be explicitly enabled using `./edit-config netdata.conf` and changing the respective configuration option:
```
[plugins]
logs-management = yes
```
There are some pre-configured log sources that Netdata will attempt to automatically discover and monitor; they can be edited using `./edit-config logsmanagement.d/default.conf` in Netdata's configuration directory. More sources can be configured for monitoring by adding them to `logsmanagement.d/default.conf` or to other `.conf` files in the `logsmanagement.d` directory.
There are also some example configurations that can be listed using `./edit-config --list`.
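For example, assuming the default configuration directory of `/etc/netdata` (adjust the path if your installation differs), the stock sources can be edited and the example configurations listed as follows:
```
cd /etc/netdata
sudo ./edit-config logsmanagement.d/default.conf
sudo ./edit-config --list | grep logsmanagement
```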
To get familiar with the Logs Management functionality, the user is advised to read at least the [Summary](#summary) and the [General Configuration](#general-configuration) sections and also any [Collector-specific Configuration](#collector-configuration) subsections, according to each use case.
For any issues, please refer to [Troubleshooting](#troubleshooting) or open a new support ticket on [GitHub](https://github.com/netdata/netdata/issues) or one of Netdata's support channels.
<a name="package-requirements"/>
## Package Requirements
</a>
Netdata logs management introduces minimal additional package dependencies and those are actually [Fluent Bit dependencies](https://docs.fluentbit.io/manual/installation/requirements). The only extra build-time dependencies are:
- `flex`
- `bison`
- `musl-fts-dev` ([Alpine Linux](https://www.alpinelinux.org/about) only)
However, there may be some exceptions to this rule as more collectors are added to the logs management engine, so if a specific collector is disabled due to missing dependencies, please refer to this section or check [Troubleshooting](#troubleshooting).
<a name="general-configuration"/>
## General Configuration
</a>
There are some fundamental configuration options that are common to all log collector types. These options can be set globally in `logsmanagement.d.conf` or they can be customized per log source:
| Configuration Option | Default | Description |
| :------------: | :------------: | ------------ |
| `update every` | Equivalent value in `logsmanagement.d.conf` (or in `netdata.conf` under `[plugin:logs-management]`, if higher). | How often (in seconds) metrics in charts will be updated.
| `update timeout` | Equivalent value in `[logs management]` section of `netdata.conf` (or Netdata global value, if higher). | Maximum time (in seconds) that charts may be delayed by while waiting for new logs to be collected.
| `use log timestamp` | Equivalent value in `logsmanagement.d.conf` (`auto` by default). | If set to `auto`, log timestamps (when available) will be used for precise metrics aggregation. Otherwise (if set to `no`), collection timestamps will be used instead (which may result in lagged metrics under heavy system load, but it will reduce CPU usage).
| `log type` | `flb_tail` | Type of this log collector, see [relevant table](#collector-types) for a complete list of supported collectors.
| `circular buffer max size` | Equivalent value in `logsmanagement.d.conf`. | Maximum RAM that can be used to buffer collected logs until they are saved to the disk database.
| `circular buffer drop logs if full` | Equivalent value in `logsmanagement.d.conf` (`no` by default). | If there are new logs pending to be collected and the circular buffer is full, enabling this setting will allow old buffered logs to be dropped in favor of new ones. If disabled, collection of new logs will be blocked until there is free space again in the buffer (no logs will be lost in this case, but logs will not be ingested in real-time).
| `compression acceleration` | Equivalent value in `logsmanagement.d.conf` (`1` by default). | Fine-tunes tradeoff between log compression speed and compression ratio, see [here](https://github.com/lz4/lz4/blob/90d68e37093d815e7ea06b0ee3c168cccffc84b8/lib/lz4.h#L195) for more details.
| `db mode` | Equivalent value in `logsmanagement.d.conf` (`none` by default). | Mode of logs management database per collector. If set to `none`, logs will be collected, buffered, parsed and then discarded. If set to `full`, buffered logs will be saved to the logs management database instead of being discarded. When mode is `none`, logs management queries cannot be executed.
| `buffer flush to DB` | Equivalent value in `logsmanagement.d.conf` (`6` by default). | Interval in seconds at which logs will be transferred from RAM buffers to the database.
| `disk space limit` | Equivalent value in `logsmanagement.d.conf` (`500 MiB` by default). | Maximum disk space that all compressed logs in the database can occupy (per log source). Once exceeded, the oldest BLOB of logs will be truncated so that new logs can be written over it. Each log source database can contain a maximum of 10 BLOBs at any point, so each truncation equates to deleting about 10% of the oldest logs. The number of BLOBs will be configurable in a future release.
| `collected logs total chart enable` | Equivalent value in `logsmanagement.d.conf` (`no` by default). | Chart that shows the number of log records collected for this log source, since the last Netdata agent restart. Useful for debugging purposes.
| `collected logs rate chart enable` | Equivalent value in `logsmanagement.d.conf` (`yes` by default). | Chart that shows the rate that log records are collected at for this log source.
There is also one setting that cannot be set per log source, but can only be defined in `logsmanagement.d.conf`:
| Configuration Option | Default | Description |
| :------------: | :------------: | ------------ |
| `db dir` | `/var/cache/netdata/logs_management_db` | Logs management database path, will be created if it does not exist.|
> **Note**
> `log path` must be defined per log source for any collector type, except for `kmsg` and the collectors that listen to network sockets. Some default examples use `log path = auto`. In those cases, an autodetection of the path will be attempted by searching through common paths where each log source is typically expected to be found.
<a name="collector-configuration"/>
## Collector-specific Configuration
</a>
<a name="collector-configuration-kmsg"/>
### Kernel logs (kmsg)
</a>
This collector will collect logs from the kernel message log buffer. See also documentation of [Fluent Bit kmsg input plugin](https://docs.fluentbit.io/manual/pipeline/inputs/kernel-logs).
> **Warning**
> If `use log timestamp` is set to `auto` and the system has been suspended and resumed since the last boot, timestamps of new `kmsg` logs will be incorrect and log collection will not work. This is a known limitation when reading the kernel log buffer records and it is recommended to set `use log timestamp = no` in this case.
> **Note**
> `/dev/kmsg` normally returns all the logs in the kernel log buffer every time it is read. To avoid duplicate logs, the collector will discard any previous logs the first time `/dev/kmsg` is read after an agent restart and it will collect only new kernel logs.
| Configuration Option | Description |
| :------------: | ------------ |
| `severity chart` | Enable chart showing Syslog Severity values of collected logs. Severity values are in the range of 0 to 7 inclusive.|
| `subsystem chart` | Enable chart showing which subsystems generated the logs.|
| `device chart` | Enable chart showing which devices generated the logs.|
<a name="collector-configuration-systemd"/>
### Systemd
</a>
This collector will collect logs from the journald daemon. See also documentation of [Fluent Bit systemd input plugin](https://docs.fluentbit.io/manual/pipeline/inputs/systemd).
| Configuration Option | Description |
| :------------: | ------------ |
| `log path` | Path to the systemd journal directory. If set to `auto`, the default path will be used to read local-only logs. |
| `priority value chart` | Enable chart showing Syslog Priority values (PRIVAL) of collected logs. The Priority value ranges from 0 to 191 and represents both the Facility and Severity. It is calculated by first multiplying the Facility number by 8 and then adding the numerical value of the Severity. Please see the [rfc5424: Syslog Protocol](https://www.rfc-editor.org/rfc/rfc5424#section-6.2.1) document for more information.|
| `severity chart` | Enable chart showing Syslog Severity values of collected logs. Severity values are in the range of 0 to 7 inclusive.|
| `facility chart` | Enable chart showing Syslog Facility values of collected logs. Facility values show which subsystem generated the log and are in the range of 0 to 23 inclusive.|
<a name="collector-configuration-docker-events"/>
### Docker events
</a>
This collector will use the Docker API to collect Docker events logs. See also documentation of [Fluent Bit docker events input plugin](https://docs.fluentbit.io/manual/pipeline/inputs/docker-events).
| Configuration Option | Description |
| :------------: | ------------ |
| `log path` | Docker socket UNIX path. If set to `auto`, the default path (`/var/run/docker.sock`) will be used. |
| `event type chart` | Enable chart showing the Docker object type of the collected logs. |
| `event action chart` | Enable chart showing the Docker object action of the collected logs. |
<a name="collector-configuration-tail"/>
### Tail
</a>
This collector will collect any type of logs from a log file, similar to executing the `tail -f` command. See also documentation of [Fluent Bit tail plugin](https://docs.fluentbit.io/manual/pipeline/inputs/tail).
| Configuration Option | Description |
| :------------: | ------------ |
| `log path` | The path to the log file to be monitored. |
| `use inotify` | Select between inotify and file stat watchers (provided `libfluent-bit.so` has been built with inotify support). It defaults to `yes`. Set to `no` if abnormally high CPU usage is observed or if the log source is expected to consistently produce tens of thousands of (unbuffered) logs per second. |
<a name="collector-configuration-web-log"/>
### Web log
</a>
This collector will collect [Apache](https://httpd.apache.org/) and [Nginx](https://nginx.org/) access logs.
| Configuration Option | Description |
| :------------: | ------------ |
| `log path` | The path to the web server's `access.log`. If set to `auto`, the collector will attempt to auto-discover it, provided the name of the configuration section is either `Apache access.log` or `Nginx access.log`. |
| `use inotify` | Select between inotify and file stat watchers (provided `libfluent-bit.so` has been built with inotify support). It defaults to `yes`. Set to `no` if abnormally high CPU usage is observed or if the log source is expected to consistently produce tens of thousands of (unbuffered) logs per second. |
| `log format` | The log format to be used for parsing. Unlike the [`GO weblog`](https://github.com/netdata/go.d.plugin/blob/master/modules/weblog/README.md) module, only the `CSV` parser is supported and it can be configured [in the same way](https://github.com/netdata/go.d.plugin/blob/master/modules/weblog/README.md#known-fields) as in the `GO` module. If set to `auto`, the collector will attempt to auto-detect the log format using the same logic explained [here](https://github.com/netdata/go.d.plugin/blob/master/modules/weblog/README.md#log-parser-auto-detection). |
| `verify parsed logs` | If set to `yes`, the parser will attempt to verify that the parsed fields are valid, before extracting metrics from them. If they are invalid (for example, the response code is less than `100`), the `invalid` dimension will be incremented instead. Setting this to `no` will result in a slight performance gain. |
| `vhosts chart` | Enable chart showing names of the virtual hosts extracted from the collected logs. |
| `ports chart` | Enable chart showing port numbers extracted from the collected logs. |
| `IP versions chart` | Enable chart showing IP versions (`v4` or `v6`) extracted from the collected logs. |
| `unique client IPs - current poll chart` | Enable chart showing unique client IPs in each collection interval. |
| `unique client IPs - all-time chart` | Enable chart showing unique client IPs since agent startup. It is recommended to set this to `no` as it can have a negative impact on long-term performance. |
| `http request methods chart` | Enable chart showing HTTP request methods extracted from the collected logs. |
| `http protocol versions chart` | Enable chart showing HTTP protocol versions extracted from the collected logs. |
| `bandwidth chart` | Enable chart showing request and response bandwidth extracted from the collected logs. |
| `timings chart` | Enable chart showing request processing time stats extracted from the collected logs. |
| `response code families chart` | Enable chart showing response code families (`1xx`, `2xx` etc.) extracted from the collected logs. |
| `response codes chart` | Enable chart showing response codes extracted from the collected logs. |
| `response code types chart` | Enable chart showing response code types (`success`, `redirect` etc.) extracted from the collected logs. |
| `SSL protocols chart` | Enable chart showing SSL protocols (`TLSV1`, `TLSV1.1` etc.) extracted from the collected logs. |
| `SSL cipher suites chart` | Enable chart showing SSL cipher suites extracted from the collected logs. |
<a name="collector-configuration-syslog"/>
### Syslog socket
</a>
This collector will collect logs through a Unix socket server (UDP or TCP) or over the network using TCP or UDP. See also documentation of [Fluent Bit syslog input plugin](https://docs.fluentbit.io/manual/pipeline/inputs/syslog).
| Configuration Option | Description |
| :------------: | ------------ |
| `mode` | Type of socket to be created to listen for incoming syslog messages. Supported modes are: `unix_tcp`, `unix_udp`, `tcp` and `udp`.|
| `log path` | If `mode == unix_tcp` or `mode == unix_udp`, Netdata will create a UNIX socket on this path to listen for syslog messages. Otherwise, this option is not used.|
| `unix_perm` | If `mode == unix_tcp` or `mode == unix_udp`, this sets the permissions of the generated UNIX socket. Otherwise, this option is not used.|
| `listen` | If `mode == tcp` or `mode == udp`, this sets the network interface to bind.|
| `port` | If `mode == tcp` or `mode == udp`, this specifies the port to listen for incoming connections.|
| `log format` | This is a Ruby Regular Expression to define the expected syslog format. Fluent Bit provides some [pre-configured syslog parsers](https://github.com/fluent/fluent-bit/blob/master/conf/parsers.conf#L65). |
|`priority value chart` | Please see the respective [systemd](#collector-configuration-systemd) configuration.|
| `severity chart` | Please see the respective [systemd](#collector-configuration-systemd) configuration.|
| `facility chart` | Please see the respective [systemd](#collector-configuration-systemd) configuration.|
For parsing and metrics extraction to work properly, please ensure fields `<PRIVAL>`, `<SYSLOG_TIMESTAMP>`, `<HOSTNAME>`, `<SYSLOG_IDENTIFIER>`, `<PID>` and `<MESSAGE>` are defined in `log format`. For example, to parse incoming `syslog-rfc3164` logs, the following regular expression can be used:
```
/^\<(?<PRIVAL>[0-9]+)\>(?<SYSLOG_TIMESTAMP>[^ ]* {1,2}[^ ]* [^ ]* )(?<HOSTNAME>[^ ]*) (?<SYSLOG_IDENTIFIER>[a-zA-Z0-9_\/\.\-]*)(?:\[(?<PID>[0-9]+)\])?(?:[^\:]*\:)? *(?<MESSAGE>.*)$/
```
<a name="collector-configuration-serial"/>
### Serial
</a>
This collector will collect logs through a serial interface. See also documentation of [Fluent Bit serial interface input plugin](https://docs.fluentbit.io/manual/pipeline/inputs/serial-interface).
| Configuration Option | Description |
| :------------: | ------------ |
| `log path` | Absolute path to the device entry, e.g. `/dev/ttyS0`.|
| `bitrate` | The bitrate for the communication, e.g. 9600, 38400 or 115200.|
| `min bytes` | The minimum number of bytes the serial interface will wait to receive before it begins to process the log message.|
| `separator` | An optional separator string to determine the end of a log message.|
| `format` | Specify the format of the incoming data stream. The only option available is `json`. Note that `format` and `separator` cannot be used at the same time.|
<a name="collector-configuration-mqtt"/>
### MQTT
</a>
This collector will collect MQTT data over a TCP connection, by spawning an MQTT server through Fluent Bit. See also documentation of [Fluent Bit MQTT input plugin](https://docs.fluentbit.io/manual/pipeline/inputs/mqtt).
| Configuration Option | Description |
| :------------: | ------------ |
| `listen` | Specifies the network interface to bind.|
| `port` | Specifies the port to listen for incoming connections.|
| `topic chart` | Enable chart showing MQTT topic of incoming messages.|
<a name="custom-charts"/>
## Custom Charts
</a>
In addition to the predefined charts, each log source supports the option to extract
user-defined metrics, by matching log records to [POSIX Extended Regular Expressions](https://en.wikibooks.org/wiki/Regular_Expressions/POSIX-Extended_Regular_Expressions).
This can be particularly useful for `FLB_TAIL` type log sources, where
there is no parsing at all by default.
To create a custom chart, the following key-value configuration options must be
added to the respective log source configuration section:
```
custom 1 chart = identifier
custom 1 regex name = kernel
custom 1 regex = .*\bkernel\b.*
custom 1 ignore case = no
```
where:
- `custom x chart` is the title of the chart.
- `custom x regex name` is an optional name for the dimension of this particular metric (if absent, the regex will be used as the dimension name instead).
- `custom x regex` is the POSIX Extended Regular Expression to be used to match log records.
- `custom x ignore case` is equivalent to setting `REG_ICASE` when using POSIX Extended Regular Expressions for case insensitive searches. It is optional and defaults to `yes`.
`x` must start at 1 and increase by 1 each time a new regular expression is configured.
If the titles of two or more charts of a certain log source are the same, the dimensions will be grouped together
in the same chart, rather than a new chart being created.
Example of configuration for a generic log source collection with custom regex-based parsers:
```
[Auth.log]
## Example: Log collector that will tail the auth.log file and count
## occurrences of certain `sudo` commands, using POSIX regular expressions.
## Required settings
enabled = no
log type = flb_tail
## Optional settings, common to all log sources.
## Uncomment to override global equivalents in netdata.conf.
# update every = 1
# update timeout = 10
# use log timestamp = auto
# circular buffer max size MiB = 64
# circular buffer drop logs if full = no
# compression acceleration = 1
# db mode = none
# circular buffer flush to db = 6
# disk space limit MiB = 500
## This section supports auto-detection of log file path if section name
## is left unchanged, otherwise it can be set manually, e.g.:
## log path = /var/log/auth.log
## See README for more information on 'log path = auto' option
log path = auto
## Use inotify instead of file stat watcher. Set to 'no' to reduce CPU usage.
use inotify = yes
custom 1 chart = sudo and su
custom 1 regex name = sudo
custom 1 regex = \bsudo\b
custom 1 ignore case = yes
custom 2 chart = sudo and su
# custom 2 regex name = su
custom 2 regex = \bsu\b
custom 2 ignore case = yes
custom 3 chart = sudo or su
custom 3 regex name = sudo or su
custom 3 regex = \bsudo\b|\bsu\b
custom 3 ignore case = yes
```
And the generated charts based on this configuration:
![Auth.log](https://user-images.githubusercontent.com/5953192/197003292-13cf2285-c614-42a1-ad5a-896370c22883.PNG)
<a name="streaming-in"/>
## Streaming logs to Netdata
</a>
Netdata supports 2 incoming streaming configurations:
1. `syslog` messages over Unix or network sockets.
2. Fluent Bit's [Forward protocol](https://docs.fluentbit.io/manual/pipeline/outputs/forward).
For option 1, please refer to the [syslog collector](#collector-configuration-syslog) section. This section focuses on option 2.
A Netdata agent can be used as a logs aggregation parent to listen for `Forward` messages, using either Unix or network sockets. This option is separate from [Netdata's metrics streaming](https://github.com/netdata/netdata/blob/master/docs/metrics-storage-management/enable-streaming.md) and can be used independently of whether that is enabled (it also uses a different listening socket).
This setting can be enabled under the `[forward input]` section in `logsmanagement.d.conf`:
```
[forward input]
enable = no
unix path =
unix perm = 0644
listen = 0.0.0.0
port = 24224
```
The default settings will listen for incoming `Forward` messages on TCP port 24224. If `unix path` is set to a valid path, `listen` and `port` will be ignored and a unix socket will be created under that path. Make sure that `unix perm` has the correct permissions set for that unix socket. Please also see Fluent Bit's [Forward input plugin documentation](https://docs.fluentbit.io/manual/pipeline/inputs/forward).
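For instance, the following sketches a `[forward input]` section listening on a Unix socket instead of TCP (the socket path below is just an example; any writable location will do):
```
[forward input]
enable = yes
## When 'unix path' is set, 'listen' and 'port' are ignored
unix path = /run/netdata/forward.sock
unix perm = 0660
```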
The Netdata agent will now listen for incoming `Forward` messages, but by default it won't process or store them. To do that, at least one log collection must be configured, to define how the incoming logs will be processed and stored. This is similar to configuring a local log source, with the difference that `log source = forward` must be set and a `stream guid` must also be defined, matching that of the child log sources.
The rest of this section contains some examples on how to configure log collections of different types, using a Netdata parent and Fluent Bit child instances (see also `./edit-config logsmanagement.d/example_forward.conf`). Please use the recommended settings on child instances for parsing on parents to work correctly. Also, note that the `Forward` output on children supports optional `gzip` compression, by using the `-p Compress=gzip` configuration parameter, as demonstrated in some of the examples.
<a name="streaming-systemd"/>
### Example: Systemd log streaming
</a>
Example configuration of an `flb_systemd` type parent log collection:
```
[Forward systemd]
## Required settings
enabled = yes
log type = flb_systemd
## Optional settings, common to all log sources.
## Uncomment to override global equivalents in netdata.conf.
# update every = 1
# update timeout = 10
# use log timestamp = auto
# circular buffer max size MiB = 64
# circular buffer drop logs if full = no
# compression acceleration = 1
# db mode = none
# circular buffer flush to db = 6
# disk space limit MiB = 500
## Streaming input settings.
log source = forward
stream guid = 6ce266f5-2704-444d-a301-2423b9d30735
## Other settings specific to this log source type
priority value chart = yes
severity chart = yes
facility chart = yes
```
Any children can be configured as follows:
```
fluent-bit -i systemd -p Read_From_Tail=on -p Strip_Underscores=on -o forward -p Compress=gzip -F record_modifier -p 'Record="stream guid" 6ce266f5-2704-444d-a301-2423b9d30735' -m '*'
```
<a name="streaming-kmsg"/>
### Example: Kernel log streaming
</a>
Example configuration of an `flb_kmsg` type parent log collection:
```
[Forward kmsg]
## Required settings
enabled = yes
log type = flb_kmsg
## Optional settings, common to all log sources.
## Uncomment to override global equivalents in netdata.conf.
# update every = 1
# update timeout = 10
use log timestamp = no
# circular buffer max size MiB = 64
# circular buffer drop logs if full = no
# compression acceleration = 1
# db mode = none
# circular buffer flush to db = 6
# disk space limit MiB = 500
## Streaming input settings.
log source = forward
stream guid = 6ce266f5-2704-444d-a301-2423b9d30736
## Other settings specific to this log source type
severity chart = yes
subsystem chart = yes
device chart = yes
```
Any children can be configured as follows:
```
fluent-bit -i kmsg -o forward -p Compress=gzip -F record_modifier -p 'Record="stream guid" 6ce266f5-2704-444d-a301-2423b9d30736' -m '*'
```
> **Note**
> Fluent Bit's `kmsg` input plugin will collect all kernel logs since boot every time it's started up. Normally, when configured as a local source in a Netdata agent, all these initially collected logs will be discarded at startup so they are not duplicated. This is not possible when streaming from a Fluent Bit child, so every time a child is restarted, all kernel logs since boot will be re-collected and streamed again.
<a name="streaming-generic"/>
### Example: Generic log streaming
</a>
This is the most flexible option for a parent log collection, as it allows aggregation of logs from multiple child Fluent Bit instances of different log types. Example configuration of a generic parent log collection with `db mode = full`:
```
[Forward collection]
## Required settings
enabled = yes
log type = flb_tail
## Optional settings, common to all log sources.
## Uncomment to override global equivalents in netdata.conf.
# update every = 1
# update timeout = 10
# use log timestamp = auto
# circular buffer max size MiB = 64
# circular buffer drop logs if full = no
# compression acceleration = 1
db mode = full
# circular buffer flush to db = 6
# disk space limit MiB = 500
## Streaming input settings.
log source = forward
stream guid = 6ce266f5-2704-444d-a301-2423b9d30738
```
Children can be configured to `tail` local logs using Fluent Bit and stream them to the parent:
```
fluent-bit -i tail -p Path=/tmp/test.log -p Inotify_Watcher=true -p Refresh_Interval=1 -p Key=msg -o forward -p Compress=gzip -F record_modifier -p 'Record="stream guid" 6ce266f5-2704-444d-a301-2423b9d30738' -m '*'
```
Child instances do not have to use the `tail` input plugin specifically. Any of the supported log types can be used for the streaming child. For example, the following configuration can stream `systemd` logs to the same parent as the configuration above:
```
fluent-bit -i systemd -p Read_From_Tail=on -p Strip_Underscores=on -o forward -p Compress=gzip -F record_modifier -p 'Record="stream guid" 6ce266f5-2704-444d-a301-2423b9d30738' -m '*'
```
The caveat is that an `flb_tail` log collection on a parent won't generate any type-specific charts by default, but [custom charts](#custom-charts) can of course be added manually by the user.
<a name="streaming-docker-events"/>
### Example: Docker Events log streaming
</a>
Example configuration of a `flb_docker_events` type parent log collection:
```
[Forward Docker Events]
## Required settings
enabled = yes
log type = flb_docker_events
## Optional settings, common to all log sources.
## Uncomment to override global equivalents in netdata.conf.
# update every = 1
# update timeout = 10
# use log timestamp = auto
# circular buffer max size MiB = 64
# circular buffer drop logs if full = no
# compression acceleration = 1
# db mode = none
# circular buffer flush to db = 6
# disk space limit MiB = 500
## Streaming input settings.
log source = forward
stream guid = 6ce266f5-2704-444d-a301-2423b9d30737
## Other settings specific to this log source type
event type chart = yes
```
Any child streaming to this collection must be set up to use one of the [default `json` or `docker` parsers](https://github.com/fluent/fluent-bit/blob/master/conf/parsers.conf), to send the collected logs as structured messages, so they can be parsed by the parent:
```
fluent-bit -R ~/fluent-bit/conf/parsers.conf -i docker_events -p Parser=json -o forward -F record_modifier -p 'Record="stream guid" 6ce266f5-2704-444d-a301-2423b9d30737' -m '*'
```
or
```
fluent-bit -R ~/fluent-bit/conf/parsers.conf -i docker_events -p Parser=docker -o forward -F record_modifier -p 'Record="stream guid" 6ce266f5-2704-444d-a301-2423b9d30737' -m '*'
```
If instead the user desires to stream to a parent that collects logs into an `flb_tail` log collection, then a parser is not necessary and the unstructured logs can also be streamed in their original JSON format:
```
fluent-bit -i docker_events -o forward -F record_modifier -p 'Record="stream guid" 6ce266f5-2704-444d-a301-2423b9d30737' -m '*'
```
Logs will appear in the parent in their unstructured format:
```
{"status":"create","id":"de2432a4f00bd26a4899dde5633bb16090a4f367c36f440ebdfdc09020cb462d","from":"hello-world","Type":"container","Action":"create","Actor":{"ID":"de2432a4f00bd26a4899dde5633bb16090a4f367c36f440ebdfdc09020cb462d","Attributes":{"image":"hello-world","name":"lucid_yalow"}},"scope":"local","time":1680263414,"timeNano":1680263414473911042}
```
<a name="streaming-out"/>
## Streaming logs from Netdata (exporting)
</a>
Netdata supports real-time log streaming and exporting through any of [Fluent Bit's outgoing streaming configurations](https://docs.fluentbit.io/manual/pipeline/outputs).
To use any of the outputs, follow Fluent Bit's documentation with the addition of an `output x` prefix to all of the configuration parameters of the output. `x` must start at 1 and increase by 1 each time a new output is configured for the log source.
For example, the following configuration will add 2 outputs to a `docker events` log collector. The first output will stream logs to https://cloud.openobserve.ai/ using Fluent Bit's [http output plugin](https://docs.fluentbit.io/manual/pipeline/outputs/http) and the second one will save the same logs in a file in CSV format, using Fluent Bit's [file output plugin](https://docs.fluentbit.io/manual/pipeline/outputs/file):
```
[Docker Events Logs]
## Example: Log collector that will monitor the Docker daemon socket and
## collect Docker event logs in a default format similar to executing
## the `sudo docker events` command.
## Required settings
enabled = yes
log type = flb_docker_events
## Optional settings, common to all log sources.
## Uncomment to override global equivalents in netdata.conf.
# update every = 1
# update timeout = 10
# use log timestamp = auto
# circular buffer max size MiB = 64
# circular buffer drop logs if full = no
# compression acceleration = 1
# db mode = none
# circular buffer flush to db = 6
# disk space limit MiB = 500
## Use default Docker socket UNIX path: /var/run/docker.sock
log path = auto
## Charts to enable
# collected logs total chart enable = no
# collected logs rate chart enable = yes
event type chart = yes
event action chart = yes
## Stream to https://cloud.openobserve.ai/
output 1 name = http
output 1 URI = YOUR_API_URI
output 1 Host = api.openobserve.ai
output 1 Port = 443
output 1 tls = On
output 1 Format = json
output 1 Json_date_key = _timestamp
output 1 Json_date_format = iso8601
output 1 HTTP_User = test@netdata.cloud
output 1 HTTP_Passwd = YOUR_OPENOBSERVE_PASSWORD
output 1 compress = gzip
## Real-time export to /tmp/docker_event_logs.csv
output 2 name = file
output 2 Path = /tmp
output 2 File = docker_event_logs.csv
```
<a name="troubleshooting"/>
## Troubleshooting
</a>
1. I am building Netdata from source or a Git checkout but the `FLB_SYSTEMD` plugin is not available / does not work:
If during the Fluent Bit build step you are seeing the following message:
```
-- Could NOT find Journald (missing: JOURNALD_LIBRARY JOURNALD_INCLUDE_DIR)
```
it means that the systemd development libraries are missing from your system. Please see [how to install them alongside other required packages](https://github.com/netdata/netdata/blob/master/packaging/installer/methods/manual.md).
2. I am observing very high CPU usage when monitoring a log source using `flb_tail` or `flb_web_log`.
The log source is probably producing a very high number of unbuffered logs, which results in too many filesystem events. Try setting `use inotify = no` to use file stat watchers instead.
3. I am using Podman instead of Docker, but I cannot see any Podman events logs being collected.
Please ensure there is a listening service running that answers API calls for Podman. Instructions on how to start such a service can be found [here](https://docs.podman.io/en/latest/markdown/podman-system-service.1.html).
Once the service is started, you must update the Docker events logs collector `log path` to monitor the generated socket (otherwise, it will search for a `docker.sock` by default).
You must ensure `podman.sock` has the right permissions for Netdata to be able to access it.
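As an illustration only (the section name and socket path below are assumptions; use whatever path your Podman API service actually exposes), the collector could be pointed at a Podman socket like this:
```
[Podman events]
enabled = yes
log type = flb_docker_events
## Assumed path of a rootful Podman API service socket; adjust to your setup
log path = /run/podman/podman.sock
```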

View File

@ -0,0 +1,404 @@
// SPDX-License-Identifier: GPL-3.0-or-later
/** @file circular_buffer.c
* @brief This is the implementation of a circular buffer to be used
* for saving collected logs in memory, until they are stored
* into the database.
*/
#include "circular_buffer.h"
#include "helper.h"
#include "parser.h"
struct qsort_item {
Circ_buff_item_t *cbi;
struct File_info *pfi;
};
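/* Comparators used to sort items collected from multiple circular buffers by
 * timestamp (ascending and descending respectively). */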
static int qsort_timestamp (const void *item_a, const void *item_b) {
return ( (int64_t)((struct qsort_item*)item_a)->cbi->timestamp -
(int64_t)((struct qsort_item*)item_b)->cbi->timestamp);
}
static int reverse_qsort_timestamp (const void * item_a, const void * item_b) {
return -qsort_timestamp(item_a, item_b);
}
/**
* @brief Search circular buffers according to the query_params.
* @details If multiple buffers are to be searched, the results will be sorted
* according to timestamps.
*
* Note that buff->tail can only be changed through circ_buff_read_done(), and
* circ_buff_search() and circ_buff_read_done() are mutually exclusive due
* to uv_mutex_lock() and uv_mutex_unlock() in queries and when writing to DB.
*
* @param p_query_params Query parameters to search according to.
* @param p_file_infos File_info structs to be searched.
*/
void circ_buff_search(logs_query_params_t *const p_query_params, struct File_info *const p_file_infos[]) {
for(int pfi_off = 0; p_file_infos[pfi_off]; pfi_off++)
uv_rwlock_rdlock(&p_file_infos[pfi_off]->circ_buff->buff_realloc_rwlock);
int buffs_size = 0,
buff_max_num_of_items = 0;
while(p_file_infos[buffs_size]){
if(p_file_infos[buffs_size]->circ_buff->num_of_items > buff_max_num_of_items)
buff_max_num_of_items = p_file_infos[buffs_size]->circ_buff->num_of_items;
buffs_size++;
}
struct qsort_item items[buffs_size * buff_max_num_of_items + 1]; // worst case allocation
int items_off = 0;
for(int buff_off = 0; p_file_infos[buff_off]; buff_off++){
Circ_buff_t *buff = p_file_infos[buff_off]->circ_buff;
/* TODO: The following 3 operations need to be replaced with a struct
 * to guarantee atomicity. */
int head = __atomic_load_n(&buff->head, __ATOMIC_SEQ_CST) % buff->num_of_items;
int tail = __atomic_load_n(&buff->tail, __ATOMIC_SEQ_CST) % buff->num_of_items;
int full = __atomic_load_n(&buff->full, __ATOMIC_SEQ_CST);
if ((head == tail) && !full) continue; // Nothing to do if buff is empty
for (int i = tail; i != head; i = (i + 1) % buff->num_of_items){
items[items_off].cbi = &buff->items[i];
items[items_off++].pfi = p_file_infos[buff_off];
}
}
items[items_off].cbi = NULL;
items[items_off].pfi = NULL;
if(items[0].cbi)
qsort(items, items_off, sizeof(items[0]), p_query_params->order_by_asc ? qsort_timestamp : reverse_qsort_timestamp);
BUFFER *const res_buff = p_query_params->results_buff;
logs_query_res_hdr_t res_hdr = { // result header
.timestamp = p_query_params->act_to_ts,
.text_size = 0,
.matches = 0,
.log_source = "",
.log_type = ""
};
for (int i = 0; items[i].cbi; i++) {
/* If exceeding quota or timeout is reached and new timestamp is different than previous,
* terminate query but inform caller about act_to_ts to continue from (its next value) in next call. */
if((res_buff->len >= p_query_params->quota || now_monotonic_usec() > p_query_params->stop_monotonic_ut) &&
items[i].cbi->timestamp != res_hdr.timestamp){
p_query_params->act_to_ts = res_hdr.timestamp;
break;
}
res_hdr.timestamp = items[i].cbi->timestamp;
res_hdr.text_size = items[i].cbi->text_size;
strncpyz(res_hdr.log_source, log_src_t_str[items[i].pfi->log_source], sizeof(res_hdr.log_source) - 1);
strncpyz(res_hdr.log_type, log_src_type_t_str[items[i].pfi->log_type], sizeof(res_hdr.log_type) - 1);
strncpyz(res_hdr.basename, items[i].pfi->file_basename, sizeof(res_hdr.basename) - 1);
strncpyz(res_hdr.filename, items[i].pfi->filename, sizeof(res_hdr.filename) - 1);
strncpyz(res_hdr.chartname, items[i].pfi->chartname, sizeof(res_hdr.chartname) - 1);
if (p_query_params->order_by_asc ?
( res_hdr.timestamp >= p_query_params->req_from_ts && res_hdr.timestamp <= p_query_params->req_to_ts ) :
( res_hdr.timestamp >= p_query_params->req_to_ts && res_hdr.timestamp <= p_query_params->req_from_ts) ){
/* In case of search_keyword, less than sizeof(res_hdr) + temp_msg.text_size
* space is required, but go for worst case scenario for now */
buffer_increase(res_buff, sizeof(res_hdr) + res_hdr.text_size);
if(!p_query_params->keyword || !*p_query_params->keyword || !strcmp(p_query_params->keyword, " ")){
/* NOTE: relying on items[i]->cbi->num_lines to get number of log lines
* might not be 100% correct, since parsing must have taken place
* already to return correct count. Maybe an issue under heavy load. */
res_hdr.matches = items[i].cbi->num_lines;
memcpy(&res_buff->buffer[res_buff->len + sizeof(res_hdr)], items[i].cbi->data, res_hdr.text_size);
}
else {
res_hdr.matches = search_keyword( items[i].cbi->data, res_hdr.text_size,
&res_buff->buffer[res_buff->len + sizeof(res_hdr)],
&res_hdr.text_size, p_query_params->keyword, NULL,
p_query_params->ignore_case);
m_assert( (res_hdr.matches > 0 && res_hdr.text_size > 0) ||
(res_hdr.matches == 0 && res_hdr.text_size == 0),
"res_hdr.matches and res_hdr.text_size must both be > 0 or == 0.");
if(unlikely(res_hdr.matches < 0))
break; /* res_hdr.matches < 0 - error during keyword search */
}
if(res_hdr.text_size){
res_buff->buffer[res_buff->len + sizeof(res_hdr) + res_hdr.text_size - 1] = '\n'; // replace '\0' with '\n'
memcpy(&res_buff->buffer[res_buff->len], &res_hdr, sizeof(res_hdr));
res_buff->len += sizeof(res_hdr) + res_hdr.text_size;
p_query_params->num_lines += res_hdr.matches;
}
m_assert(TEST_MS_TIMESTAMP_VALID(res_hdr.timestamp), "res_hdr.timestamp is invalid");
}
}
for(int pfi_off = 0; p_file_infos[pfi_off]; pfi_off++)
uv_rwlock_rdunlock(&p_file_infos[pfi_off]->circ_buff->buff_realloc_rwlock);
}
/**
* @brief Query circular buffer if there is space for item insertion.
* @param buff Circular buffer to query for available space.
* @param requested_text_space Size of raw (uncompressed) space needed.
* @note If buff->allow_dropped_logs is 0, then this function will block and
* it will only return once there is available space as requested. In this
* case, it will never return 0.
* @return \p requested_text_space if there is enough space, else 0.
*/
size_t circ_buff_prepare_write(Circ_buff_t *const buff, size_t const requested_text_space){
/* Calculate how much is the maximum compressed space that will
* be required on top of the requested space for the raw data. */
buff->in->text_compressed_size = (size_t) LZ4_compressBound(requested_text_space);
m_assert(buff->in->text_compressed_size != 0, "requested text compressed space is zero");
size_t const required_space = requested_text_space + buff->in->text_compressed_size;
size_t available_text_space = 0;
size_t total_cached_mem_ex_in;
try_to_acquire_space:
total_cached_mem_ex_in = 0;
for (int i = 0; i < buff->num_of_items; i++){
total_cached_mem_ex_in += buff->items[i].data_max_size;
}
/* If the required space is more than the allocated space of the input
* buffer, then we need to check if the input buffer can be reallocated:
*
* a) If the total memory consumption of the circular buffer plus the
* required space is less than the limit set by "circular buffer max size"
* for this log source, then the input buffer can be reallocated.
*
* b) If the total memory consumption of the circular buffer plus the
* required space is more than the limit set by "circular buffer max size"
* for this log source, we will attempt to reclaim some of the circular
* buffer allocated memory from any empty items.
*
* c) If after reclaiming the total memory consumption is still beyond the
* configuration limit, either 0 will be returned as the available space
* for raw logs in the input buffer, or the function will block and repeat
* the same process, until there is available space to be returned, depending
* of the configuration value of buff->allow_dropped_logs.
* */
if(required_space > buff->in->data_max_size) {
if(likely(total_cached_mem_ex_in + required_space <= buff->total_cached_mem_max)){
buff->in->data_max_size = required_space;
buff->in->data = reallocz(buff->in->data, buff->in->data_max_size);
available_text_space = requested_text_space;
}
else if(likely(__atomic_load_n(&buff->full, __ATOMIC_SEQ_CST) == 0)){
int head = __atomic_load_n(&buff->head, __ATOMIC_SEQ_CST) % buff->num_of_items;
int tail = __atomic_load_n(&buff->tail, __ATOMIC_SEQ_CST) % buff->num_of_items;
for (int i = (head == tail ? (head + 1) % buff->num_of_items : head);
i != tail; i = (i + 1) % buff->num_of_items) {
m_assert(i <= buff->num_of_items, "i > buff->num_of_items");
buff->items[i].data_max_size = 1;
buff->items[i].data = reallocz(buff->items[i].data, buff->items[i].data_max_size);
}
total_cached_mem_ex_in = 0;
for (int i = 0; i < buff->num_of_items; i++){
total_cached_mem_ex_in += buff->items[i].data_max_size;
}
if(total_cached_mem_ex_in + required_space <= buff->total_cached_mem_max){
buff->in->data_max_size = required_space;
buff->in->data = reallocz(buff->in->data, buff->in->data_max_size);
available_text_space = requested_text_space;
}
else available_text_space = 0;
}
} else available_text_space = requested_text_space;
__atomic_store_n(&buff->total_cached_mem, total_cached_mem_ex_in + buff->in->data_max_size, __ATOMIC_RELAXED);
if(unlikely(!buff->allow_dropped_logs && !available_text_space)){
sleep_usec(CIRC_BUFF_PREP_WR_RETRY_AFTER_MS * USEC_PER_MS);
goto try_to_acquire_space;
}
m_assert(available_text_space || buff->allow_dropped_logs, "!available_text_space == 0 && !buff->allow_dropped_logs");
return available_text_space;
}
/**
* @brief Insert item from temporary input buffer to circular buffer.
* @param buff Circular buffer to insert the item into
* @return 0 in case of success or -1 in case there was an error (e.g. buff
* is out of space).
*/
int circ_buff_insert(Circ_buff_t *const buff){
// TODO: Probably can be changed to __ATOMIC_RELAXED, but ideally a mutex should be used here.
int head = __atomic_load_n(&buff->head, __ATOMIC_SEQ_CST) % buff->num_of_items;
int tail = __atomic_load_n(&buff->tail, __ATOMIC_SEQ_CST) % buff->num_of_items;
int full = __atomic_load_n(&buff->full, __ATOMIC_SEQ_CST);
/* If circular buffer does not have any free items, it will be expanded
* by reallocating the `items` array and adding one more item. */
if (unlikely(( head == tail ) && full )) {
debug_log( "buff out of space! will be expanded.");
uv_rwlock_wrlock(&buff->buff_realloc_rwlock);
Circ_buff_item_t *items_new = callocz(buff->num_of_items + 1, sizeof(Circ_buff_item_t));
for(int i = 0; i < buff->num_of_items; i++){
Circ_buff_item_t *item_old = &buff->items[head++ % buff->num_of_items];
items_new[i] = *item_old;
}
freez(buff->items);
buff->items = items_new;
buff->parse = buff->parse - buff->tail;
head = buff->head = buff->num_of_items++;
buff->tail = buff->read = 0;
buff->full = 0;
__atomic_add_fetch(&buff->buff_realloc_cnt, 1, __ATOMIC_RELAXED);
uv_rwlock_wrunlock(&buff->buff_realloc_rwlock);
}
Circ_buff_item_t *cur_item = &buff->items[head];
char *tmp_data = cur_item->data;
size_t tmp_data_max_size = cur_item->data_max_size;
cur_item->status = buff->in->status;
cur_item->timestamp = buff->in->timestamp;
cur_item->data = buff->in->data;
cur_item->text_size = buff->in->text_size;
cur_item->text_compressed = buff->in->text_compressed;
cur_item->text_compressed_size = buff->in->text_compressed_size;
cur_item->data_max_size = buff->in->data_max_size;
cur_item->num_lines = buff->in->num_lines;
buff->in->status = CIRC_BUFF_ITEM_STATUS_UNPROCESSED;
buff->in->timestamp = 0;
buff->in->data = tmp_data;
buff->in->text_size = 0;
// buff->in->text_compressed = tmp_data;
buff->in->text_compressed_size = 0;
buff->in->data_max_size = tmp_data_max_size;
buff->in->num_lines = 0;
__atomic_add_fetch(&buff->text_size_total, cur_item->text_size, __ATOMIC_SEQ_CST);
if( __atomic_add_fetch(&buff->text_compressed_size_total, cur_item->text_compressed_size, __ATOMIC_SEQ_CST)){
__atomic_store_n(&buff->compression_ratio,
__atomic_load_n(&buff->text_size_total, __ATOMIC_SEQ_CST) /
__atomic_load_n(&buff->text_compressed_size_total, __ATOMIC_SEQ_CST),
__ATOMIC_SEQ_CST);
} else __atomic_store_n( &buff->compression_ratio, 0, __ATOMIC_SEQ_CST);
if(unlikely(__atomic_add_fetch(&buff->head, 1, __ATOMIC_SEQ_CST) % buff->num_of_items ==
__atomic_load_n(&buff->tail, __ATOMIC_SEQ_CST) % buff->num_of_items)){
__atomic_store_n(&buff->full, 1, __ATOMIC_SEQ_CST);
}
__atomic_or_fetch(&cur_item->status, CIRC_BUFF_ITEM_STATUS_PARSED | CIRC_BUFF_ITEM_STATUS_STREAMED, __ATOMIC_SEQ_CST);
return 0;
}
/**
* @brief Return pointer to next item to be read from the circular buffer.
* @param buff Circular buffer to get next item from.
* @return Pointer to the next circular buffer item to be read, or NULL
* if there are no more items to be read.
*/
Circ_buff_item_t *circ_buff_read_item(Circ_buff_t *const buff) {
Circ_buff_item_t *item = &buff->items[buff->read % buff->num_of_items];
m_assert(__atomic_load_n(&item->status, __ATOMIC_RELAXED) <= CIRC_BUFF_ITEM_STATUS_DONE, "Invalid status");
if( /* No more records to be retrieved from the buffer - pay attention that
* there is no `% buff->num_of_items` operation, as we need to check
* the case where buff->read is exactly equal to buff->head. */
(buff->read == (__atomic_load_n(&buff->head, __ATOMIC_SEQ_CST))) ||
/* Current item either not parsed or streamed */
(__atomic_load_n(&item->status, __ATOMIC_RELAXED) != CIRC_BUFF_ITEM_STATUS_DONE) ){
return NULL;
}
__atomic_sub_fetch(&buff->text_size_total, item->text_size, __ATOMIC_SEQ_CST);
if( __atomic_sub_fetch(&buff->text_compressed_size_total, item->text_compressed_size, __ATOMIC_SEQ_CST)){
__atomic_store_n(&buff->compression_ratio,
__atomic_load_n(&buff->text_size_total, __ATOMIC_SEQ_CST) /
__atomic_load_n(&buff->text_compressed_size_total, __ATOMIC_SEQ_CST),
__ATOMIC_SEQ_CST);
} else __atomic_store_n( &buff->compression_ratio, 0, __ATOMIC_SEQ_CST);
buff->read++;
return item;
}
/**
* @brief Complete buffer read process.
* @param buff Circular buffer to complete read process on.
*/
void circ_buff_read_done(Circ_buff_t *const buff){
/* Even if one item was read, it means buffer cannot be full anymore */
if(__atomic_load_n(&buff->tail, __ATOMIC_RELAXED) != buff->read)
__atomic_store_n(&buff->full, 0, __ATOMIC_SEQ_CST);
__atomic_store_n(&buff->tail, buff->read, __ATOMIC_SEQ_CST);
}
/**
* @brief Create a new circular buffer.
* @param num_of_items Number of Circ_buff_item_t items in the buffer.
* @param max_size Maximum memory the circular buffer can occupy.
 * @param allow_dropped_logs Boolean indicating whether logs may be dropped when the buffer is full (if 0, writers block until space is available).
* @return Pointer to the new circular buffer structure.
*/
Circ_buff_t *circ_buff_init(const int num_of_items,
const size_t max_size,
const int allow_dropped_logs ) {
Circ_buff_t *buff = callocz(1, sizeof(Circ_buff_t));
buff->num_of_items = num_of_items;
buff->items = callocz(buff->num_of_items, sizeof(Circ_buff_item_t));
buff->in = callocz(1, sizeof(Circ_buff_item_t));
uv_rwlock_init(&buff->buff_realloc_rwlock);
buff->total_cached_mem_max = max_size;
buff->allow_dropped_logs = allow_dropped_logs;
return buff;
}
/**
* @brief Destroy a circular buffer.
* @param buff Circular buffer to be destroyed.
*/
void circ_buff_destroy(Circ_buff_t *buff){
for (int i = 0; i < buff->num_of_items; i++) freez(buff->items[i].data);
freez(buff->items);
freez(buff->in->data);
freez(buff->in);
freez(buff);
}

View File

@ -0,0 +1,66 @@
// SPDX-License-Identifier: GPL-3.0-or-later
/** @file circular_buffer.h
* @brief Header of circular_buffer.c
*/
#ifndef CIRCULAR_BUFFER_H_
#define CIRCULAR_BUFFER_H_
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <uv.h>
#include "defaults.h"
#include "query.h"
#include "file_info.h"
// Forward declaration to break circular dependency
struct File_info;
typedef enum {
CIRC_BUFF_ITEM_STATUS_UNPROCESSED = 0,
CIRC_BUFF_ITEM_STATUS_PARSED = 1,
CIRC_BUFF_ITEM_STATUS_STREAMED = 2,
CIRC_BUFF_ITEM_STATUS_DONE = 3 // == CIRC_BUFF_ITEM_STATUS_PARSED | CIRC_BUFF_ITEM_STATUS_STREAMED
} circ_buff_item_status_t;
typedef struct Circ_buff_item {
circ_buff_item_status_t status; /**< Denotes if item is unprocessed, in processing or processed **/
msec_t timestamp; /**< Epoch datetime of when data was collected **/
char *data; /**< Base of buffer to store both uncompressed and compressed logs **/
size_t text_size; /**< Size of uncompressed logs **/
char *text_compressed; /**< Pointer offset within *data that points to start of compressed logs **/
size_t text_compressed_size; /**< Size of compressed logs **/
size_t data_max_size; /**< Allocated size of *data **/
unsigned long num_lines; /**< Number of log records in item */
} Circ_buff_item_t;
typedef struct Circ_buff {
int num_of_items; /**< Number of preallocated items in the buffer **/
Circ_buff_item_t *items; /**< Array of all circular buffer items **/
Circ_buff_item_t *in; /**< Circular buffer item to write new data into **/
int head; /**< Position of next item insertion **/
int read; /**< Index between tail and head, used to read items out of Circ_buff **/
int tail; /**< Last valid item in Circ_buff **/
int parse; /**< Points to next item in buffer to be parsed **/
int full; /**< When head == tail, this indicates if buffer is full or empty **/
uv_rwlock_t buff_realloc_rwlock; /**< RW lock to lock buffer operations when reallocating or expanding buffer **/
unsigned int buff_realloc_cnt; /**< Counter of how many buffer reallocations have occurred **/
size_t total_cached_mem; /**< Total memory allocated for Circ_buff (excluding *in) **/
size_t total_cached_mem_max; /**< Maximum allowable size for total_cached_mem **/
int allow_dropped_logs; /**< Boolean to indicate whether logs are allowed to be dropped if buffer is full */
size_t text_size_total; /**< Total size of items[]->text_size **/
size_t text_compressed_size_total; /**< Total size of items[]->text_compressed_size **/
int compression_ratio; /**< text_size_total / text_compressed_size_total **/
} Circ_buff_t;
void circ_buff_search(logs_query_params_t *const p_query_params, struct File_info *const p_file_infos[]);
size_t circ_buff_prepare_write(Circ_buff_t *const buff, size_t const requested_text_space);
int circ_buff_insert(Circ_buff_t *const buff);
Circ_buff_item_t *circ_buff_read_item(Circ_buff_t *const buff);
void circ_buff_read_done(Circ_buff_t *const buff);
Circ_buff_t *circ_buff_init(const int num_of_items, const size_t max_size, const int allow_dropped_logs);
void circ_buff_destroy(Circ_buff_t *buff);
#endif // CIRCULAR_BUFFER_H_
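
For orientation, a minimal usage sketch of this API follows (illustrative only; in the plugin the producer is the Fluent Bit output callback and the consumer is the DB writer thread, and the exact field handling below is an assumption based on the struct definitions above):

#include "circular_buffer.h"

void circ_buff_usage_sketch(const char *text, size_t text_size) {
    /* 4 spare items, 64 MiB cap, allow dropping logs when the buffer is full */
    Circ_buff_t *buff = circ_buff_init(4, 64 * 1048576, 1);

    /* Producer side: reserve space in buff->in, copy the collected text, publish the item. */
    if (circ_buff_prepare_write(buff, text_size) >= text_size) {
        memcpy(buff->in->data, text, text_size);
        buff->in->text_size = text_size;
        circ_buff_insert(buff);
    }

    /* Consumer side: drain every item whose status is DONE, then advance the tail. */
    Circ_buff_item_t *item;
    while ((item = circ_buff_read_item(buff)) != NULL) {
        /* e.g. flush item->text_compressed (item->text_compressed_size bytes) to the DB */
    }
    circ_buff_read_done(buff);

    circ_buff_destroy(buff);
}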

logsmanagement/db_api.c (new file, 1396 lines; diff suppressed because it is too large)

logsmanagement/db_api.h (new file, 22 lines)
// SPDX-License-Identifier: GPL-3.0-or-later
/** @file db_api.h
* @brief Header of db_api.c
*/
#ifndef DB_API_H_
#define DB_API_H_
#include "../database/sqlite/sqlite3.h"
#include <uv.h>
#include "query.h"
#include "file_info.h"
#define LOGS_MANAG_DB_SUBPATH "/logs_management_db"
int db_user_version(sqlite3 *const db, const int set_user_version);
void db_set_main_dir(char *const dir);
int db_init(void);
void db_search(logs_query_params_t *const p_query_params, struct File_info *const p_file_infos[]);
#endif // DB_API_H_
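
A hedged sketch of the expected start-up order for this API; the directory path is hypothetical and the call order is an assumption inferred from this header:

#include "db_api.h"

void db_usage_sketch(void) {
    char db_dir[] = "/var/cache/netdata/logs_management_db";  /* hypothetical cache path */
    db_set_main_dir(db_dir);                                  /* set main DB dir before init */
    if (db_init() != 0) {
        /* disk database unavailable; sources can still run with db mode "none" (assumption) */
    }
}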

logsmanagement/defaults.h (new file, 133 lines)
// SPDX-License-Identifier: GPL-3.0-or-later
/** @file defaults.h
* @brief Hard-coded configuration settings for the Logs Management engine
*/
#ifndef LOGSMANAG_DEFAULTS_H_
#define LOGSMANAG_DEFAULTS_H_
/* -------------------------------------------------------------------------- */
/* General */
/* -------------------------------------------------------------------------- */
#define KiB * 1024ULL
#define MiB * 1048576ULL
#define GiB * 1073741824ULL
#define MAX_LOG_MSG_SIZE 50 MiB /**< Maximum allowable log message size (in Bytes) to be stored in message queue and DB. **/
#define MAX_CUS_CHARTS_PER_SOURCE 100 /**< Hard limit of maximum custom charts per log source **/
#define MAX_OUTPUTS_PER_SOURCE 100 /**< Hard limit of maximum Fluent Bit outputs per log source **/
#define UPDATE_TIMEOUT_DEFAULT 10 /**< Default timeout (in sec) after which charts are updated, if they have not been updated in the meantime. **/
#if !defined(LOGS_MANAGEMENT_STRESS_TEST)
#define ENABLE_COLLECTED_LOGS_TOTAL_DEFAULT CONFIG_BOOLEAN_NO /**< Default value to enable (or not) metrics of total collected log records **/
#else
#define ENABLE_COLLECTED_LOGS_TOTAL_DEFAULT CONFIG_BOOLEAN_YES /**< Default value to enable (or not) metrics of total collected log records, if stress tests are enabled **/
#endif
#define ENABLE_COLLECTED_LOGS_RATE_DEFAULT CONFIG_BOOLEAN_YES /**< Default value to enable (or not) metrics of rate of collected log records **/
/* -------------------------------------------------------------------------- */
/* -------------------------------------------------------------------------- */
/* Database */
/* -------------------------------------------------------------------------- */
typedef enum {
LOGS_MANAG_DB_MODE_FULL = 0,
LOGS_MANAG_DB_MODE_NONE
} logs_manag_db_mode_t;
#define SAVE_BLOB_TO_DB_DEFAULT 6 /**< Global default configuration interval to save buffers from RAM to disk **/
#define SAVE_BLOB_TO_DB_MIN 2 /**< Minimum allowed interval to save buffers from RAM to disk **/
#define SAVE_BLOB_TO_DB_MAX 1800 /**< Maximum allowed interval to save buffers from RAM to disk **/
#define BLOB_MAX_FILES 10 /**< Maximum allowed number of BLOB files (per collection) that are used to store compressed logs. When exceeded, the oldest one will be overwritten. **/
#define DISK_SPACE_LIMIT_DEFAULT 500 /**< Global default configuration maximum database disk space limit per log source **/
#if !defined(LOGS_MANAGEMENT_STRESS_TEST)
#define GLOBAL_DB_MODE_DEFAULT_STR "none" /**< db mode string to be used as global default in configuration **/
#define GLOBAL_DB_MODE_DEFAULT LOGS_MANAG_DB_MODE_NONE /**< db mode to be used as global default, matching GLOBAL_DB_MODE_DEFAULT_STR **/
#else
#define GLOBAL_DB_MODE_DEFAULT_STR "full" /**< db mode string to be used as global default in configuration, if stress tests are enabled **/
#define GLOBAL_DB_MODE_DEFAULT LOGS_MANAG_DB_MODE_FULL /**< db mode to be used as global default, matching GLOBAL_DB_MODE_DEFAULT_STR, if stress tests are enabled **/
#endif
/* -------------------------------------------------------------------------- */
/* -------------------------------------------------------------------------- */
/* Circular Buffer */
/* -------------------------------------------------------------------------- */
#define CIRCULAR_BUFF_SPARE_ITEMS_DEFAULT 2 /**< Additional circular buffer items to give the DB engine time to save buffers to disk **/
#define CIRCULAR_BUFF_DEFAULT_MAX_SIZE (64 MiB) /**< Default circular_buffer_max_size **/
#define CIRCULAR_BUFF_MAX_SIZE_RANGE_MIN (1 MiB) /**< circular_buffer_max_size read from configuration cannot be smaller than this **/
#define CIRCULAR_BUFF_MAX_SIZE_RANGE_MAX (4 GiB) /**< circular_buffer_max_size read from configuration cannot be larger than this **/
#define CIRCULAR_BUFF_DEFAULT_DROP_LOGS 0 /**< Global default configuration value whether to drop logs if circular buffer is full **/
#define CIRC_BUFF_PREP_WR_RETRY_AFTER_MS 1000 /**< If circ_buff_prepare_write() fails due to not enough space, how many millisecs to wait before retrying **/
/* -------------------------------------------------------------------------- */
/* -------------------------------------------------------------------------- */
/* Compression */
/* -------------------------------------------------------------------------- */
#define COMPRESSION_ACCELERATION_DEFAULT 1 /**< Global default value for compression acceleration **/
/* -------------------------------------------------------------------------- */
/* -------------------------------------------------------------------------- */
/* Kernel logs (kmsg) plugin */
/* -------------------------------------------------------------------------- */
#define KERNEL_LOGS_COLLECT_INIT_WAIT 5 /**< Wait time (in sec) before kernel log collection starts. Required in order to skip collection and processing of pre-existing logs at Netdata boot. **/
/* -------------------------------------------------------------------------- */
/* -------------------------------------------------------------------------- */
/* Fluent Bit */
/* -------------------------------------------------------------------------- */
#define FLB_FLUSH_DEFAULT "0.1" /**< Default Fluent Bit flush interval **/
#define FLB_HTTP_LISTEN_DEFAULT "0.0.0.0" /**< Default Fluent Bit server listening socket **/
#define FLB_HTTP_PORT_DEFAULT "2020" /**< Default Fluent Bit server listening port **/
#define FLB_HTTP_SERVER_DEFAULT "false" /**< Default Fluent Bit server enable status **/
#define FLB_LOG_FILENAME_DEFAULT "fluentbit.log" /**< Default Fluent Bit log filename **/
#define FLB_LOG_LEVEL_DEFAULT "info" /**< Default Fluent Bit log level **/
#define FLB_CORO_STACK_SIZE_DEFAULT "24576" /**< Default Fluent Bit coro stack size - do not change this value unless there is a good reason **/
#define FLB_FORWARD_UNIX_PATH_DEFAULT "" /**< Default path for Forward unix socket configuration, see also https://docs.fluentbit.io/manual/pipeline/inputs/forward#configuration-parameters **/
#define FLB_FORWARD_UNIX_PERM_DEFAULT "0644" /**< Default permissions for Forward unix socket configuration, see also https://docs.fluentbit.io/manual/pipeline/inputs/forward#configuration-parameters **/
#define FLB_FORWARD_ADDR_DEFAULT "0.0.0.0" /**< Default listen address for Forward socket configuration, see also https://docs.fluentbit.io/manual/pipeline/inputs/forward#configuration-parameters **/
#define FLB_FORWARD_PORT_DEFAULT "24224" /**< Default listen port for Forward socket configuration, see also https://docs.fluentbit.io/manual/pipeline/inputs/forward#configuration-parameters **/
/* -------------------------------------------------------------------------- */
/* -------------------------------------------------------------------------- */
/* Queries */
/* -------------------------------------------------------------------------- */
#define LOGS_MANAG_MAX_COMPOUND_QUERY_SOURCES 10U /**< Maximum allowed number of log sources that can be searched in a single query **/
#define LOGS_MANAG_QUERY_QUOTA_DEFAULT (10 MiB) /**< Default logs management query quota **/
#define LOGS_MANAG_QUERY_QUOTA_MAX MAX_LOG_MSG_SIZE /**< Max logs management query quota **/
#define LOGS_MANAG_QUERY_IGNORE_CASE_DEFAULT 0 /**< Boolean to indicate whether to ignore case for keyword or not **/
#define LOGS_MANAG_QUERY_SANITIZE_KEYWORD_DEFAULT 0 /**< Boolean to indicate whether to sanitize keyword or not **/
#define LOGS_MANAG_QUERY_TIMEOUT_DEFAULT 30 /**< Default timeout of logs management queries (in secs) **/
/* -------------------------------------------------------------------------- */
#endif // LOGSMANAG_DEFAULTS_H_
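
The KiB/MiB/GiB macros above work by turning a postfix "unit" into a multiplication; a quick illustration of how they expand:

/* #define MiB * 1048576ULL   means the token sequence "50 MiB" becomes "50 * 1048576ULL": */
/*   MAX_LOG_MSG_SIZE                == 50 MiB   == 50 * 1048576ULL == 52428800 bytes      */
/*   CIRCULAR_BUFF_DEFAULT_MAX_SIZE  == (64 MiB) == 64 * 1048576ULL == 67108864 bytes      */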

logsmanagement/file_info.h (new file, 161 lines)
// SPDX-License-Identifier: GPL-3.0-or-later
/** @file file_info.h
* @brief Includes the File_info structure that is the primary
* structure for configuring each log source.
*/
#ifndef FILE_INFO_H_
#define FILE_INFO_H_
#include <uv.h>
#include "../database/sqlite/sqlite3.h"
#include "defaults.h"
#include "parser.h"
// Cool trick --> http://userpage.fu-berlin.de/~ram/pub/pub_jf47ht81Ht/c_preprocessor_applications_en
/* WARNING: DO NOT CHANGE THE ORDER OF LOG_SRC_TYPES, ONLY APPEND NEW TYPES */
#define LOG_SRC_TYPES LST(FLB_TAIL)LST(FLB_WEB_LOG)LST(FLB_KMSG) \
LST(FLB_SYSTEMD)LST(FLB_DOCKER_EV)LST(FLB_SYSLOG) \
LST(FLB_SERIAL)LST(FLB_MQTT)
#define LST(x) x,
enum log_src_type_t {LOG_SRC_TYPES};
#undef LST
#define LST(x) #x,
static const char * const log_src_type_t_str[] = {LOG_SRC_TYPES};
#undef LST
#define LOG_SRCS LST(LOG_SOURCE_LOCAL)LST(LOG_SOURCE_FORWARD)
#define LST(x) x,
enum log_src_t {LOG_SRCS};
#undef LST
#define LST(x) #x,
static const char * const log_src_t_str[] = {LOG_SRCS};
#undef LST
#include "rrd_api/rrd_api.h"
typedef enum log_src_state {
LOG_SRC_UNINITIALIZED = 0, /*!< config not initialized */
LOG_SRC_READY, /*!< config initialized (monitoring may have started or not) */
LOG_SRC_EXITING /*!< cleanup and destroy stage */
} LOG_SRC_STATE;
typedef struct flb_tail_config {
int use_inotify;
} Flb_tail_config_t;
typedef struct flb_serial_config {
char *bitrate;
char *min_bytes;
char *separator;
char *format;
} Flb_serial_config_t;
typedef struct flb_socket_config {
char *mode;
char *unix_path;
char *unix_perm;
char *listen;
char *port;
} Flb_socket_config_t;
typedef struct syslog_parser_config {
char *log_format;
Flb_socket_config_t *socket_config;
} Syslog_parser_config_t;
typedef struct flb_output_config {
char *plugin; /**< Fluent Bit output plugin name, see: https://docs.fluentbit.io/manual/pipeline/outputs **/
int id; /**< Incremental id of plugin configuration in linked list, starting from 1 **/
struct flb_output_config_param {
char *key; /**< Key of the parameter configuration **/
char *val; /**< Value of the parameter configuration **/
struct flb_output_config_param *next; /**< Next output parameter configuration in the linked list of parameters **/
} *param;
struct flb_output_config *next; /**< Next output plugin configuration in the linked list of output plugins **/
} Flb_output_config_t;
struct File_info {
/* TODO: Struct needs refactoring, as a lot of members take up memory that
* is not used, depending on the type of the log source. */
/* Struct members core to any log source type */
const char *chartname; /**< Top level chart name for this log source on web dashboard **/
char *filename; /**< Full path of log source **/
const char *file_basename; /**< Basename of log source **/
const char *stream_guid; /**< Streaming input GUID **/
enum log_src_t log_source; /**< Defines log source origin - see enum log_src_t for options **/
enum log_src_type_t log_type; /**< Defines type of log source - see enum log_src_type_t for options **/
struct Circ_buff *circ_buff; /**< Associated circular buffer - only one should exist per log source. **/
int compression_accel; /**< LZ4 compression acceleration factor for collected logs, see also: https://github.com/lz4/lz4/blob/90d68e37093d815e7ea06b0ee3c168cccffc84b8/lib/lz4.h#L195 **/
int update_every; /**< Interval (in sec) of how often to collect and update charts **/
int update_timeout; /**< Timeout after which charts are updated, counted from the last update **/
int use_log_timestamp; /**< Use log timestamps instead of collection timestamps, if available **/
struct Chart_meta *chart_meta;
LOG_SRC_STATE state; /**< State of log source, used to sync status among threads **/
/* Struct members related to disk database */
sqlite3 *db; /**< SQLite3 DB connection to DB that contains metadata for this log source **/
const char *db_dir; /**< Path to metadata DB and compressed log BLOBs directory **/
const char *db_metadata; /**< Path to metadata DB file **/
uv_mutex_t *db_mut; /**< DB access mutex **/
uv_thread_t *db_writer_thread; /**< Thread responsible for handling the DB writes **/
uv_file blob_handles[BLOB_MAX_FILES + 1]; /**< File handles for BLOB files. Item 0 not used - just for matching 1-1 with DB ids **/
logs_manag_db_mode_t db_mode; /**< DB mode as enum. **/
int blob_write_handle_offset; /**< File offset denoting HEAD of currently open database BLOB file **/
int buff_flush_to_db_interval; /**< Frequency at which RAM buffers of this log source will be flushed to the database **/
int64_t blob_max_size; /**< When the size of a BLOB exceeds this value, the BLOB gets rotated. **/
int64_t blob_total_size; /**< This is the total disk space that all BLOBs occupy (for this log source) **/
int64_t db_write_duration; /**< Holds timing details related to duration of DB write operations **/
int64_t db_rotate_duration; /**< Holds timing details related to duration of DB rotate operations **/
sqlite3_stmt *stmt_get_log_msg_metadata_asc; /**< SQLITE3 statement used to retrieve metadata from database during queries in ascending order **/
sqlite3_stmt *stmt_get_log_msg_metadata_desc; /**< SQLITE3 statement used to retrieve metadata from database during queries in descending order **/
/* Struct members related to queries */
struct {
usec_t user;
usec_t sys;
} cpu_time_per_mib;
/* Struct members related to log parsing */
Log_parser_config_t *parser_config; /**< Configuration to be used by log parser - read from logsmanagement.conf **/
Log_parser_cus_config_t **parser_cus_config; /**< Array of custom log parsing configurations **/
Log_parser_metrics_t *parser_metrics; /**< Extracted metrics **/
/* Struct members related to Fluent-Bit inputs, filters, buffers, outputs */
int flb_input; /**< Fluent-bit input interface property for this log source **/
int flb_parser; /**< Fluent-bit parser interface property for this log source **/
int flb_lib_output; /**< Fluent-bit "lib" output interface property for this log source **/
void *flb_config; /**< Any other Fluent-Bit configuration specific to this log source only **/
uv_mutex_t flb_tmp_buff_mut;
uv_timer_t flb_tmp_buff_cpy_timer;
Flb_output_config_t *flb_outputs; /**< Linked list of Fluent Bit outputs for this log source **/
};
struct File_infos_arr {
struct File_info **data;
uint8_t count; /**< Number of items in array **/
};
extern struct File_infos_arr *p_file_infos_arr; /**< Array that contains all p_file_info structs for all log sources **/
typedef struct {
int update_every;
int update_timeout;
int use_log_timestamp;
int circ_buff_max_size_in_mib;
int circ_buff_drop_logs;
int compression_acceleration;
logs_manag_db_mode_t db_mode;
int disk_space_limit_in_mib;
int buff_flush_to_db_interval;
int enable_collected_logs_total;
int enable_collected_logs_rate;
} g_logs_manag_config_t;
extern g_logs_manag_config_t g_logs_manag_config;
#endif // FILE_INFO_H_
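
The LST() X-macro trick above expands LOG_SRC_TYPES twice, once into enum constants and once into a parallel string table, so the name of a log source type can be looked up by its enum value; a small sketch:

#include <stdio.h>
#include "file_info.h"

void log_src_type_sketch(void) {
    enum log_src_type_t type = FLB_DOCKER_EV;
    /* log_src_type_t_str[] is generated from the same LOG_SRC_TYPES list, so indices match */
    printf("log type %d is %s\n", (int) type, log_src_type_t_str[type]);  /* "FLB_DOCKER_EV" */
}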

logsmanagement/flb_plugin.c (new file, 1435 lines; diff suppressed because it is too large)

logsmanagement/flb_plugin.h (new file, 33 lines)
// SPDX-License-Identifier: GPL-3.0-or-later
/** @file flb_plugin.h
* @brief Header of flb_plugin.c
*/
#ifndef FLB_PLUGIN_H_
#define FLB_PLUGIN_H_
#include "file_info.h"
#include <uv.h>
#define LOG_PATH_AUTO "auto"
#define KMSG_DEFAULT_PATH "/dev/kmsg"
#define SYSTEMD_DEFAULT_PATH "SD_JOURNAL_LOCAL_ONLY"
#define DOCKER_EV_DEFAULT_PATH "/var/run/docker.sock"
typedef struct {
char *flush,
*http_listen, *http_port, *http_server,
*log_path, *log_level,
*coro_stack_size;
} flb_srvc_config_t;
int flb_init(flb_srvc_config_t flb_srvc_config, const char *const stock_config_dir);
int flb_run(void);
void flb_terminate(void);
void flb_complete_item_timer_timeout_cb(uv_timer_t *handle);
int flb_add_input(struct File_info *const p_file_info);
int flb_add_fwd_input(Flb_socket_config_t *const forward_in_config);
void flb_free_fwd_input_out_cb(void);
#endif // FLB_PLUGIN_H_
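
An illustrative initialization of flb_srvc_config_t using the defaults from defaults.h; how the plugin actually populates this struct, and the stock configuration directory passed to flb_init(), are assumptions:

#include "flb_plugin.h"
#include "defaults.h"

void flb_startup_sketch(void) {
    flb_srvc_config_t flb_srvc_config = {
        .flush           = FLB_FLUSH_DEFAULT,
        .http_listen     = FLB_HTTP_LISTEN_DEFAULT,
        .http_port       = FLB_HTTP_PORT_DEFAULT,
        .http_server     = FLB_HTTP_SERVER_DEFAULT,
        .log_path        = FLB_LOG_FILENAME_DEFAULT,   /* assumption: filename used as-is */
        .log_level       = FLB_LOG_LEVEL_DEFAULT,
        .coro_stack_size = FLB_CORO_STACK_SIZE_DEFAULT
    };
    if (flb_init(flb_srvc_config, "/usr/lib/netdata/conf.d") == 0 &&  /* hypothetical stock dir */
        flb_run() == 0) {
        /* engine is running; inputs can now be attached per log source via flb_add_input() */
    }
}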

New file (19 lines): patch to the Fluent Bit CMakeLists.txt
diff --git a/CMakeLists.txt b/CMakeLists.txt
index ae853815b..8b81a052f 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -70,12 +70,14 @@ set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -D__FLB_FILENAME__=__FILE__")
if(${CMAKE_SYSTEM_PROCESSOR} MATCHES "armv7l")
set(CMAKE_C_LINK_FLAGS "${CMAKE_C_LINK_FLAGS} -latomic")
set(CMAKE_CXX_LINK_FLAGS "${CMAKE_CXX_LINK_FLAGS} -latomic")
+ set(CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} -latomic")
endif()
if(${CMAKE_SYSTEM_NAME} MATCHES "FreeBSD")
set(FLB_SYSTEM_FREEBSD On)
add_definitions(-DFLB_SYSTEM_FREEBSD)
set(CMAKE_C_LINK_FLAGS "${CMAKE_C_LINK_FLAGS} -lutil")
set(CMAKE_CXX_LINK_FLAGS "${CMAKE_CXX_LINK_FLAGS} -lutil")
+ set(CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} -lutil")
endif()
# *BSD is not supported platform for wasm-micro-runtime except for FreeBSD.

New file (10 lines): patch to the Fluent Bit chunkio src/CMakeLists.txt
--- a/lib/chunkio/src/CMakeLists.txt
+++ b/lib/chunkio/src/CMakeLists.txt
@@ -14,6 +14,7 @@
)
set(libs cio-crc32)
+set(libs ${libs} fts)
if(${CMAKE_SYSTEM_NAME} MATCHES "Windows")
set(src

New file (178 lines): Fluent Bit CMake build configuration flags
set(FLB_ALL OFF CACHE BOOL "Enable all features")
set(FLB_DEBUG OFF CACHE BOOL "Build with debug mode (-g)")
set(FLB_RELEASE OFF CACHE BOOL "Build with release mode (-O2 -g -DNDEBUG)")
# set(FLB_IPO "ReleaseOnly" CACHE STRING "Build with interprocedural optimization")
# set_property(CACHE FLB_IPO PROPERTY STRINGS "On;Off;ReleaseOnly")
set(FLB_SMALL OFF CACHE BOOL "Optimise for small size")
set(FLB_COVERAGE OFF CACHE BOOL "Build with code-coverage")
set(FLB_JEMALLOC OFF CACHE BOOL "Build with Jemalloc support")
set(FLB_REGEX ON CACHE BOOL "Build with Regex support")
set(FLB_UTF8_ENCODER ON CACHE BOOL "Build with UTF8 encoding support")
set(FLB_PARSER ON CACHE BOOL "Build with Parser support")
set(FLB_TLS ON CACHE BOOL "Build with SSL/TLS support")
set(FLB_BINARY OFF CACHE BOOL "Build executable binary")
set(FLB_EXAMPLES OFF CACHE BOOL "Build examples")
set(FLB_SHARED_LIB ON CACHE BOOL "Build shared library")
set(FLB_VALGRIND OFF CACHE BOOL "Enable Valgrind support")
set(FLB_TRACE OFF CACHE BOOL "Enable trace mode")
set(FLB_CHUNK_TRACE OFF CACHE BOOL "Enable chunk traces")
set(FLB_TESTS_RUNTIME OFF CACHE BOOL "Enable runtime tests")
set(FLB_TESTS_INTERNAL OFF CACHE BOOL "Enable internal tests")
set(FLB_TESTS_INTERNAL_FUZZ OFF CACHE BOOL "Enable internal fuzz tests")
set(FLB_TESTS_OSSFUZZ OFF CACHE BOOL "Enable OSS-Fuzz build")
set(FLB_MTRACE OFF CACHE BOOL "Enable mtrace support")
set(FLB_POSIX_TLS OFF CACHE BOOL "Force POSIX thread storage")
set(FLB_INOTIFY ON CACHE BOOL "Enable inotify support")
set(FLB_SQLDB ON CACHE BOOL "Enable SQL embedded DB")
set(FLB_HTTP_SERVER ON CACHE BOOL "Enable HTTP Server")
set(FLB_BACKTRACE OFF CACHE BOOL "Enable stacktrace support")
set(FLB_LUAJIT OFF CACHE BOOL "Enable Lua Scripting support")
set(FLB_RECORD_ACCESSOR ON CACHE BOOL "Enable record accessor")
set(FLB_SIGNV4 ON CACHE BOOL "Enable AWS Signv4 support")
set(FLB_AWS ON CACHE BOOL "Enable AWS support")
# set(FLB_STATIC_CONF "Build binary using static configuration")
set(FLB_STREAM_PROCESSOR OFF CACHE BOOL "Enable Stream Processor")
set(FLB_CORO_STACK_SIZE 24576 CACHE STRING "Set coroutine stack size")
set(FLB_AVRO_ENCODER OFF CACHE BOOL "Build with Avro encoding support")
set(FLB_AWS_ERROR_REPORTER ON CACHE BOOL "Build with aws error reporting support")
set(FLB_ARROW OFF CACHE BOOL "Build with Apache Arrow support")
set(FLB_WINDOWS_DEFAULTS OFF CACHE BOOL "Build with predefined Windows settings")
set(FLB_WASM OFF CACHE BOOL "Build with WASM runtime support")
set(FLB_WAMRC OFF CACHE BOOL "Build with WASM AOT compiler executable")
set(FLB_WASM_STACK_PROTECT OFF CACHE BOOL "Build with WASM runtime with strong stack protector flags")
# Native Metrics Support (cmetrics)
set(FLB_METRICS OFF CACHE BOOL "Enable metrics support")
# Proxy Plugins
set(FLB_PROXY_GO OFF CACHE BOOL "Enable Go plugins support")
# Built-in Custom Plugins
set(FLB_CUSTOM_CALYPTIA OFF CACHE BOOL "Enable Calyptia Support")
# Config formats
set(FLB_CONFIG_YAML OFF CACHE BOOL "Enable YAML config format")
# Built-in Plugins
set(FLB_IN_CPU OFF CACHE BOOL "Enable CPU input plugin")
set(FLB_IN_THERMAL OFF CACHE BOOL "Enable Thermal plugin")
set(FLB_IN_DISK OFF CACHE BOOL "Enable Disk input plugin")
set(FLB_IN_DOCKER OFF CACHE BOOL "Enable Docker input plugin")
set(FLB_IN_DOCKER_EVENTS ON CACHE BOOL "Enable Docker events input plugin")
set(FLB_IN_EXEC OFF CACHE BOOL "Enable Exec input plugin")
set(FLB_IN_EXEC_WASI OFF CACHE BOOL "Enable Exec WASI input plugin")
set(FLB_IN_EVENT_TEST OFF CACHE BOOL "Enable Events test plugin")
set(FLB_IN_EVENT_TYPE OFF CACHE BOOL "Enable event type plugin")
set(FLB_IN_FLUENTBIT_METRICS OFF CACHE BOOL "Enable Fluent Bit metrics plugin")
set(FLB_IN_FORWARD ON CACHE BOOL "Enable Forward input plugin")
set(FLB_IN_HEALTH OFF CACHE BOOL "Enable Health input plugin")
set(FLB_IN_HTTP OFF CACHE BOOL "Enable HTTP input plugin")
set(FLB_IN_MEM OFF CACHE BOOL "Enable Memory input plugin")
set(FLB_IN_KUBERNETES_EVENTS OFF CACHE BOOL "Enable Kubernetes Events plugin")
set(FLB_IN_KAFKA OFF CACHE BOOL "Enable Kafka input plugin")
set(FLB_IN_KMSG ON CACHE BOOL "Enable Kernel log input plugin")
set(FLB_IN_LIB ON CACHE BOOL "Enable library mode input plugin")
set(FLB_IN_RANDOM OFF CACHE BOOL "Enable random input plugin")
set(FLB_IN_SERIAL ON CACHE BOOL "Enable Serial input plugin")
set(FLB_IN_STDIN OFF CACHE BOOL "Enable Standard input plugin")
set(FLB_IN_SYSLOG ON CACHE BOOL "Enable Syslog input plugin")
set(FLB_IN_TAIL ON CACHE BOOL "Enable Tail input plugin")
set(FLB_IN_UDP OFF CACHE BOOL "Enable UDP input plugin")
set(FLB_IN_TCP OFF CACHE BOOL "Enable TCP input plugin")
set(FLB_IN_UNIX_SOCKET OFF CACHE BOOL "Enable Unix socket input plugin")
set(FLB_IN_MQTT ON CACHE BOOL "Enable MQTT Broker input plugin")
set(FLB_IN_HEAD OFF CACHE BOOL "Enable Head input plugin")
set(FLB_IN_PROC OFF CACHE BOOL "Enable Process input plugin")
set(FLB_IN_SYSTEMD ON CACHE BOOL "Enable Systemd input plugin")
set(FLB_IN_DUMMY OFF CACHE BOOL "Enable Dummy input plugin")
set(FLB_IN_NGINX_EXPORTER_METRICS OFF CACHE BOOL "Enable Nginx Metrics input plugin")
set(FLB_IN_NETIF OFF CACHE BOOL "Enable NetworkIF input plugin")
set(FLB_IN_WINLOG OFF CACHE BOOL "Enable Windows Log input plugin")
set(FLB_IN_WINSTAT OFF CACHE BOOL "Enable Windows Stat input plugin")
set(FLB_IN_WINEVTLOG OFF CACHE BOOL "Enable Windows EvtLog input plugin")
set(FLB_IN_COLLECTD OFF CACHE BOOL "Enable Collectd input plugin")
set(FLB_IN_PROMETHEUS_SCRAPE OFF CACHE BOOL "Enable Prometheus Scrape input plugin")
set(FLB_IN_STATSD OFF CACHE BOOL "Enable StatsD input plugin")
set(FLB_IN_EVENT_TEST OFF CACHE BOOL "Enable event test plugin")
set(FLB_IN_STORAGE_BACKLOG OFF CACHE BOOL "Enable storage backlog input plugin")
set(FLB_IN_EMITTER OFF CACHE BOOL "Enable emitter input plugin")
set(FLB_IN_NODE_EXPORTER_METRICS OFF CACHE BOOL "Enable node exporter metrics input plugin")
set(FLB_IN_WINDOWS_EXPORTER_METRICS OFF CACHE BOOL "Enable windows exporter metrics input plugin")
set(FLB_IN_PODMAN_METRICS OFF CACHE BOOL "Enable Podman Metrics input plugin")
set(FLB_IN_OPENTELEMETRY OFF CACHE BOOL "Enable OpenTelemetry input plugin")
set(FLB_IN_ELASTICSEARCH OFF CACHE BOOL "Enable Elasticsearch (Bulk API) input plugin")
set(FLB_IN_CALYPTIA_FLEET OFF CACHE BOOL "Enable Calyptia Fleet input plugin")
set(FLB_IN_SPLUNK OFF CACHE BOOL "Enable Splunk HTTP HEC input plugin")
set(FLB_OUT_AZURE ON CACHE BOOL "Enable Azure output plugin")
set(FLB_OUT_AZURE_BLOB ON CACHE BOOL "Enable Azure output plugin")
set(FLB_OUT_AZURE_LOGS_INGESTION ON CACHE BOOL "Enable Azure Logs Ingestion output plugin")
set(FLB_OUT_AZURE_KUSTO ON CACHE BOOL "Enable Azure Kusto output plugin")
set(FLB_OUT_BIGQUERY ON CACHE BOOL "Enable BigQuery output plugin")
set(FLB_OUT_CALYPTIA OFF CACHE BOOL "Enable Calyptia monitoring plugin")
set(FLB_OUT_COUNTER OFF CACHE BOOL "Enable Counter output plugin")
set(FLB_OUT_DATADOG ON CACHE BOOL "Enable DataDog output plugin")
set(FLB_OUT_ES ON CACHE BOOL "Enable Elasticsearch output plugin")
set(FLB_OUT_EXIT OFF CACHE BOOL "Enable Exit output plugin")
set(FLB_OUT_FORWARD ON CACHE BOOL "Enable Forward output plugin")
set(FLB_OUT_GELF ON CACHE BOOL "Enable GELF output plugin")
set(FLB_OUT_HTTP ON CACHE BOOL "Enable HTTP output plugin")
set(FLB_OUT_INFLUXDB ON CACHE BOOL "Enable InfluxDB output plugin")
set(FLB_OUT_NATS ON CACHE BOOL "Enable NATS output plugin")
set(FLB_OUT_NRLOGS ON CACHE BOOL "Enable New Relic output plugin")
set(FLB_OUT_OPENSEARCH ON CACHE BOOL "Enable OpenSearch output plugin")
set(FLB_OUT_TCP ON CACHE BOOL "Enable TCP output plugin")
set(FLB_OUT_UDP ON CACHE BOOL "Enable UDP output plugin")
set(FLB_OUT_PLOT ON CACHE BOOL "Enable Plot output plugin")
set(FLB_OUT_FILE ON CACHE BOOL "Enable file output plugin")
set(FLB_OUT_TD ON CACHE BOOL "Enable Treasure Data output plugin")
set(FLB_OUT_RETRY OFF CACHE BOOL "Enable Retry test output plugin")
set(FLB_OUT_PGSQL ON CACHE BOOL "Enable PostgreSQL output plugin")
set(FLB_OUT_SKYWALKING ON CACHE BOOL "Enable Apache SkyWalking output plugin")
set(FLB_OUT_SLACK ON CACHE BOOL "Enable Slack output plugin")
set(FLB_OUT_SPLUNK ON CACHE BOOL "Enable Splunk output plugin")
set(FLB_OUT_STACKDRIVER ON CACHE BOOL "Enable Stackdriver output plugin")
set(FLB_OUT_STDOUT OFF CACHE BOOL "Enable STDOUT output plugin")
set(FLB_OUT_SYSLOG ON CACHE BOOL "Enable Syslog output plugin")
set(FLB_OUT_LIB ON CACHE BOOL "Enable library mode output plugin")
set(FLB_OUT_NULL OFF CACHE BOOL "Enable dev null output plugin")
set(FLB_OUT_FLOWCOUNTER ON CACHE BOOL "Enable flowcount output plugin")
set(FLB_OUT_LOGDNA ON CACHE BOOL "Enable LogDNA output plugin")
set(FLB_OUT_LOKI ON CACHE BOOL "Enable Loki output plugin")
set(FLB_OUT_KAFKA ON CACHE BOOL "Enable Kafka output plugin")
set(FLB_OUT_KAFKA_REST ON CACHE BOOL "Enable Kafka Rest output plugin")
set(FLB_OUT_CLOUDWATCH_LOGS ON CACHE BOOL "Enable AWS CloudWatch output plugin")
set(FLB_OUT_KINESIS_FIREHOSE ON CACHE BOOL "Enable AWS Firehose output plugin")
set(FLB_OUT_KINESIS_STREAMS ON CACHE BOOL "Enable AWS Kinesis output plugin")
set(FLB_OUT_OPENTELEMETRY ON CACHE BOOL "Enable OpenTelemetry plugin")
set(FLB_OUT_PROMETHEUS_EXPORTER ON CACHE BOOL "Enable Prometheus exporter plugin")
set(FLB_OUT_PROMETHEUS_REMOTE_WRITE ON CACHE BOOL "Enable Prometheus remote write plugin")
set(FLB_OUT_S3 ON CACHE BOOL "Enable AWS S3 output plugin")
set(FLB_OUT_VIVO_EXPORTER ON CACHE BOOL "Enable Vivo exporter output plugin")
set(FLB_OUT_WEBSOCKET ON CACHE BOOL "Enable Websocket output plugin")
set(FLB_OUT_CHRONICLE ON CACHE BOOL "Enable Google Chronicle output plugin")
set(FLB_FILTER_ALTER_SIZE OFF CACHE BOOL "Enable alter_size filter")
set(FLB_FILTER_AWS OFF CACHE BOOL "Enable aws filter")
set(FLB_FILTER_ECS OFF CACHE BOOL "Enable AWS ECS filter")
set(FLB_FILTER_CHECKLIST OFF CACHE BOOL "Enable checklist filter")
set(FLB_FILTER_EXPECT OFF CACHE BOOL "Enable expect filter")
set(FLB_FILTER_GREP OFF CACHE BOOL "Enable grep filter")
set(FLB_FILTER_MODIFY OFF CACHE BOOL "Enable modify filter")
set(FLB_FILTER_STDOUT OFF CACHE BOOL "Enable stdout filter")
set(FLB_FILTER_PARSER ON CACHE BOOL "Enable parser filter")
set(FLB_FILTER_KUBERNETES OFF CACHE BOOL "Enable kubernetes filter")
set(FLB_FILTER_REWRITE_TAG OFF CACHE BOOL "Enable tag rewrite filter")
set(FLB_FILTER_THROTTLE OFF CACHE BOOL "Enable throttle filter")
set(FLB_FILTER_THROTTLE_SIZE OFF CACHE BOOL "Enable throttle size filter")
set(FLB_FILTER_TYPE_CONVERTER OFF CACHE BOOL "Enable type converter filter")
set(FLB_FILTER_MULTILINE OFF CACHE BOOL "Enable multiline filter")
set(FLB_FILTER_NEST OFF CACHE BOOL "Enable nest filter")
set(FLB_FILTER_LOG_TO_METRICS OFF CACHE BOOL "Enable log-derived metrics filter")
set(FLB_FILTER_LUA OFF CACHE BOOL "Enable Lua scripting filter")
set(FLB_FILTER_LUA_USE_MPACK OFF CACHE BOOL "Enable mpack on the lua filter")
set(FLB_FILTER_RECORD_MODIFIER ON CACHE BOOL "Enable record_modifier filter")
set(FLB_FILTER_TENSORFLOW OFF CACHE BOOL "Enable tensorflow filter")
set(FLB_FILTER_GEOIP2 OFF CACHE BOOL "Enable geoip2 filter")
set(FLB_FILTER_NIGHTFALL OFF CACHE BOOL "Enable Nightfall filter")
set(FLB_FILTER_WASM OFF CACHE BOOL "Enable WASM filter")
set(FLB_PROCESSOR_LABELS OFF CACHE BOOL "Enable metrics label manipulation processor")
set(FLB_PROCESSOR_ATTRIBUTES OFF CACHE BOOL "Enable attributes manipulation processor")

New file (10 lines): patch to the Fluent Bit cmake/luajit.cmake
diff --git a/cmake/luajit.cmake b/cmake/luajit.cmake
index b6774eb..f8042ae 100644
--- a/cmake/luajit.cmake
+++ b/cmake/luajit.cmake
@@ -1,4 +1,4 @@
# luajit cmake
option(LUAJIT_DIR "Path of LuaJIT 2.1 source dir" ON)
set(LUAJIT_DIR ${FLB_PATH_ROOT_SOURCE}/${FLB_PATH_LIB_LUAJIT})
-add_subdirectory("lib/luajit-cmake")
+add_subdirectory("lib/luajit-cmake" EXCLUDE_FROM_ALL)

New file (52 lines): patch to the Fluent Bit src/flb_log.c
diff --git a/src/flb_log.c b/src/flb_log.c
index d004af8af..6ed27b8c6 100644
--- a/src/flb_log.c
+++ b/src/flb_log.c
@@ -509,31 +509,31 @@ int flb_log_construct(struct log_message *msg, int *ret_len,
switch (type) {
case FLB_LOG_HELP:
- header_title = "help";
+ header_title = "HELP";
header_color = ANSI_CYAN;
break;
case FLB_LOG_INFO:
- header_title = "info";
+ header_title = "INFO";
header_color = ANSI_GREEN;
break;
case FLB_LOG_WARN:
- header_title = "warn";
+ header_title = "WARN";
header_color = ANSI_YELLOW;
break;
case FLB_LOG_ERROR:
- header_title = "error";
+ header_title = "ERROR";
header_color = ANSI_RED;
break;
case FLB_LOG_DEBUG:
- header_title = "debug";
+ header_title = "DEBUG";
header_color = ANSI_YELLOW;
break;
case FLB_LOG_IDEBUG:
- header_title = "debug";
+ header_title = "DEBUG";
header_color = ANSI_CYAN;
break;
case FLB_LOG_TRACE:
- header_title = "trace";
+ header_title = "TRACE";
header_color = ANSI_BLUE;
break;
}
@@ -559,7 +559,7 @@ int flb_log_construct(struct log_message *msg, int *ret_len,
}
len = snprintf(msg->msg, sizeof(msg->msg) - 1,
- "%s[%s%i/%02i/%02i %02i:%02i:%02i%s]%s [%s%5s%s] ",
+ "%s%s%i-%02i-%02i %02i:%02i:%02i%s:%s fluent-bit %s%s%s: ",
/* time */ /* type */
/* time variables */

New file (15 lines): patch to the Fluent Bit src/flb_network.c
--- a/src/flb_network.c
+++ b/src/flb_network.c
@@ -523,9 +523,10 @@
}
/* Connection is broken, not much to do here */
- str = strerror_r(error, so_error_buf, sizeof(so_error_buf));
+ /* XXX: XSI */
+ int _err = strerror_r(error, so_error_buf, sizeof(so_error_buf));
flb_error("[net] TCP connection failed: %s:%i (%s)",
- u->tcp_host, u->tcp_port, str);
+ u->tcp_host, u->tcp_port, so_error_buf);
return -1;
}
}

logsmanagement/functions.c (new file, 755 lines)
// SPDX-License-Identifier: GPL-3.0-or-later
/** @file functions.c
*
* @brief This is the file containing the implementation of the
* logs management functions API.
*/
#include "functions.h"
#include "helper.h"
#include "query.h"
#define LOGS_MANAG_MAX_PARAMS 100
#define LOGS_MANAGEMENT_DEFAULT_QUERY_DURATION_IN_SEC 3600
#define LOGS_MANAGEMENT_DEFAULT_ITEMS_PER_QUERY 200
#define LOGS_MANAG_FUNC_PARAM_HELP "help"
#define LOGS_MANAG_FUNC_PARAM_ANCHOR "anchor"
#define LOGS_MANAG_FUNC_PARAM_LAST "last"
#define LOGS_MANAG_FUNC_PARAM_QUERY "query"
#define LOGS_MANAG_FUNC_PARAM_FACETS "facets"
#define LOGS_MANAG_FUNC_PARAM_HISTOGRAM "histogram"
#define LOGS_MANAG_FUNC_PARAM_DIRECTION "direction"
#define LOGS_MANAG_FUNC_PARAM_IF_MODIFIED_SINCE "if_modified_since"
#define LOGS_MANAG_FUNC_PARAM_DATA_ONLY "data_only"
#define LOGS_MANAG_FUNC_PARAM_SOURCE "source"
#define LOGS_MANAG_FUNC_PARAM_INFO "info"
#define LOGS_MANAG_FUNC_PARAM_ID "id"
#define LOGS_MANAG_FUNC_PARAM_PROGRESS "progress"
#define LOGS_MANAG_FUNC_PARAM_SLICE "slice"
#define LOGS_MANAG_FUNC_PARAM_DELTA "delta"
#define LOGS_MANAG_FUNC_PARAM_TAIL "tail"
#define LOGS_MANAG_DEFAULT_DIRECTION FACETS_ANCHOR_DIRECTION_BACKWARD
#define FACET_MAX_VALUE_LENGTH 8192
#define FUNCTION_LOGSMANAGEMENT_HELP_LONG \
LOGS_MANAGEMENT_PLUGIN_STR " / " LOGS_MANAG_FUNC_NAME"\n" \
"\n" \
FUNCTION_LOGSMANAGEMENT_HELP_SHORT"\n" \
"\n" \
"The following parameters are supported::\n" \
"\n" \
" "LOGS_MANAG_FUNC_PARAM_HELP"\n" \
" Shows this help message\n" \
"\n" \
" "LOGS_MANAG_FUNC_PARAM_INFO"\n" \
" Request initial configuration information about the plugin.\n" \
" The key entity returned is the required_params array, which includes\n" \
" all the available "LOGS_MANAG_FUNC_NAME" sources.\n" \
" When `"LOGS_MANAG_FUNC_PARAM_INFO"` is requested, all other parameters are ignored.\n" \
"\n" \
" "LOGS_MANAG_FUNC_PARAM_DATA_ONLY":true or "LOGS_MANAG_FUNC_PARAM_DATA_ONLY":false\n" \
" Quickly respond with data requested, without generating a\n" \
" `histogram`, `facets` counters and `items`.\n" \
"\n" \
" "LOGS_MANAG_FUNC_PARAM_SOURCE":SOURCE\n" \
" Query only the specified "LOGS_MANAG_FUNC_NAME" sources.\n" \
" Do an `"LOGS_MANAG_FUNC_PARAM_INFO"` query to find the sources.\n" \
"\n" \
" "LOGS_MANAG_FUNC_PARAM_BEFORE":TIMESTAMP_IN_SECONDS\n" \
" Absolute or relative (to now) timestamp in seconds, to start the query.\n" \
" The query is always executed from the most recent to the oldest log entry.\n" \
" If not given the default is: now.\n" \
"\n" \
" "LOGS_MANAG_FUNC_PARAM_AFTER":TIMESTAMP_IN_SECONDS\n" \
" Absolute or relative (to `before`) timestamp in seconds, to end the query.\n" \
" If not given, the default is "LOGS_MANAG_STR(-LOGS_MANAGEMENT_DEFAULT_QUERY_DURATION_IN_SEC)".\n" \
"\n" \
" "LOGS_MANAG_FUNC_PARAM_LAST":ITEMS\n" \
" The number of items to return.\n" \
" The default is "LOGS_MANAG_STR(LOGS_MANAGEMENT_DEFAULT_ITEMS_PER_QUERY)".\n" \
"\n" \
" "LOGS_MANAG_FUNC_PARAM_ANCHOR":TIMESTAMP_IN_MICROSECONDS\n" \
" Return items relative to this timestamp.\n" \
" The exact items to be returned depend on the query `"LOGS_MANAG_FUNC_PARAM_DIRECTION"`.\n" \
"\n" \
" "LOGS_MANAG_FUNC_PARAM_DIRECTION":forward or "LOGS_MANAG_FUNC_PARAM_DIRECTION":backward\n" \
" When set to `backward` (default) the items returned are the newest before the\n" \
" `"LOGS_MANAG_FUNC_PARAM_ANCHOR"`, (or `"LOGS_MANAG_FUNC_PARAM_BEFORE"` if `"LOGS_MANAG_FUNC_PARAM_ANCHOR"` is not set)\n" \
" When set to `forward` the items returned are the oldest after the\n" \
" `"LOGS_MANAG_FUNC_PARAM_ANCHOR"`, (or `"LOGS_MANAG_FUNC_PARAM_AFTER"` if `"LOGS_MANAG_FUNC_PARAM_ANCHOR"` is not set)\n" \
" The default is: backward\n" \
"\n" \
" "LOGS_MANAG_FUNC_PARAM_QUERY":SIMPLE_PATTERN\n" \
" Do a full text search to find the log entries matching the pattern given.\n" \
" The plugin is searching for matches on all fields of the database.\n" \
"\n" \
" "LOGS_MANAG_FUNC_PARAM_IF_MODIFIED_SINCE":TIMESTAMP_IN_MICROSECONDS\n" \
" Each successful response, includes a `last_modified` field.\n" \
" By providing the timestamp to the `"LOGS_MANAG_FUNC_PARAM_IF_MODIFIED_SINCE"` parameter,\n" \
" the plugin will return 200 with a successful response, or 304 if the source has not\n" \
" been modified since that timestamp.\n" \
"\n" \
" "LOGS_MANAG_FUNC_PARAM_HISTOGRAM":facet_id\n" \
" Use the given `facet_id` for the histogram.\n" \
" This parameter is ignored in `"LOGS_MANAG_FUNC_PARAM_DATA_ONLY"` mode.\n" \
"\n" \
" "LOGS_MANAG_FUNC_PARAM_FACETS":facet_id1,facet_id2,facet_id3,...\n" \
" Add the given facets to the list of fields for which analysis is required.\n" \
" The plugin will offer both a histogram and facet value counters for its values.\n" \
" This parameter is ignored in `"LOGS_MANAG_FUNC_PARAM_DATA_ONLY"` mode.\n" \
"\n" \
" facet_id:value_id1,value_id2,value_id3,...\n" \
" Apply filters to the query, based on the facet IDs returned.\n" \
" Each `facet_id` can be given once, but multiple `facet_ids` can be given.\n" \
"\n"
extern netdata_mutex_t stdout_mut;
static DICTIONARY *function_query_status_dict = NULL;
static DICTIONARY *used_hashes_registry = NULL;
typedef struct function_query_status {
bool *cancelled; // a pointer to the cancelling boolean
usec_t stop_monotonic_ut;
usec_t started_monotonic_ut;
// request
// SD_JOURNAL_FILE_SOURCE_TYPE source_type;
STRING *source;
usec_t after_ut;
usec_t before_ut;
struct {
usec_t start_ut;
usec_t stop_ut;
} anchor;
FACETS_ANCHOR_DIRECTION direction;
size_t entries;
usec_t if_modified_since;
bool delta;
bool tail;
bool data_only;
bool slice;
size_t filters;
usec_t last_modified;
const char *query;
const char *histogram;
// per file progress info
size_t cached_count;
// progress statistics
usec_t matches_setup_ut;
size_t rows_useful;
size_t rows_read;
size_t bytes_read;
size_t files_matched;
size_t file_working;
} FUNCTION_QUERY_STATUS;
#define LOGS_MANAG_KEYS_INCLUDED_IN_FACETS \
"log_source" \
"|log_type" \
"|filename" \
"|basename" \
"|chartname" \
"|message" \
""
static void logsmanagement_function_facets(const char *transaction, char *function, int timeout, bool *cancelled){
struct rusage start, end;
getrusage(RUSAGE_THREAD, &start);
const logs_qry_res_err_t *ret = &logs_qry_res_err[LOGS_QRY_RES_ERR_CODE_SERVER_ERR];
BUFFER *wb = buffer_create(0, NULL);
buffer_flush(wb);
buffer_json_initialize(wb, "\"", "\"", 0, true, BUFFER_JSON_OPTIONS_MINIFY);
usec_t now_monotonic_ut = now_monotonic_usec();
FUNCTION_QUERY_STATUS tmp_fqs = {
.cancelled = cancelled,
.started_monotonic_ut = now_monotonic_ut,
.stop_monotonic_ut = now_monotonic_ut + (timeout * USEC_PER_SEC),
};
FUNCTION_QUERY_STATUS *fqs = NULL;
const DICTIONARY_ITEM *fqs_item = NULL;
FACETS *facets = facets_create(50, FACETS_OPTION_ALL_KEYS_FTS,
NULL,
LOGS_MANAG_KEYS_INCLUDED_IN_FACETS,
NULL);
facets_accepted_param(facets, LOGS_MANAG_FUNC_PARAM_INFO);
facets_accepted_param(facets, LOGS_MANAG_FUNC_PARAM_SOURCE);
facets_accepted_param(facets, LOGS_MANAG_FUNC_PARAM_AFTER);
facets_accepted_param(facets, LOGS_MANAG_FUNC_PARAM_BEFORE);
facets_accepted_param(facets, LOGS_MANAG_FUNC_PARAM_ANCHOR);
facets_accepted_param(facets, LOGS_MANAG_FUNC_PARAM_DIRECTION);
facets_accepted_param(facets, LOGS_MANAG_FUNC_PARAM_LAST);
facets_accepted_param(facets, LOGS_MANAG_FUNC_PARAM_QUERY);
facets_accepted_param(facets, LOGS_MANAG_FUNC_PARAM_FACETS);
facets_accepted_param(facets, LOGS_MANAG_FUNC_PARAM_HISTOGRAM);
facets_accepted_param(facets, LOGS_MANAG_FUNC_PARAM_IF_MODIFIED_SINCE);
facets_accepted_param(facets, LOGS_MANAG_FUNC_PARAM_DATA_ONLY);
// facets_accepted_param(facets, LOGS_MANAG_FUNC_PARAM_ID);
// facets_accepted_param(facets, LOGS_MANAG_FUNC_PARAM_PROGRESS);
facets_accepted_param(facets, LOGS_MANAG_FUNC_PARAM_DELTA);
// facets_accepted_param(facets, JOURNAL_PARAMETER_TAIL);
// #ifdef HAVE_SD_JOURNAL_RESTART_FIELDS
// facets_accepted_param(facets, JOURNAL_PARAMETER_SLICE);
// #endif // HAVE_SD_JOURNAL_RESTART_FIELDS
// register the fields in the order you want them on the dashboard
facets_register_key_name(facets, "log_source", FACET_KEY_OPTION_FACET |
FACET_KEY_OPTION_FTS);
facets_register_key_name(facets, "log_type", FACET_KEY_OPTION_FACET |
FACET_KEY_OPTION_FTS);
facets_register_key_name(facets, "filename", FACET_KEY_OPTION_FACET |
FACET_KEY_OPTION_FTS);
facets_register_key_name(facets, "basename", FACET_KEY_OPTION_FACET |
FACET_KEY_OPTION_FTS);
facets_register_key_name(facets, "chartname", FACET_KEY_OPTION_VISIBLE |
FACET_KEY_OPTION_FACET |
FACET_KEY_OPTION_FTS);
facets_register_key_name(facets, "message", FACET_KEY_OPTION_NEVER_FACET |
FACET_KEY_OPTION_MAIN_TEXT |
FACET_KEY_OPTION_VISIBLE |
FACET_KEY_OPTION_FTS);
bool info = false,
data_only = false,
progress = false,
/* slice = true, */
delta = false,
tail = false;
time_t after_s = 0, before_s = 0;
usec_t anchor = 0;
usec_t if_modified_since = 0;
size_t last = 0;
FACETS_ANCHOR_DIRECTION direction = LOGS_MANAG_DEFAULT_DIRECTION;
const char *query = NULL;
const char *chart = NULL;
const char *source = NULL;
const char *progress_id = NULL;
// SD_JOURNAL_FILE_SOURCE_TYPE source_type = SDJF_ALL;
// size_t filters = 0;
buffer_json_member_add_object(wb, "_request");
logs_query_params_t query_params = {0};
unsigned long req_quota = 0;
// unsigned int fn_off = 0, cn_off = 0;
char *words[LOGS_MANAG_MAX_PARAMS] = { NULL };
size_t num_words = quoted_strings_splitter_pluginsd(function, words, LOGS_MANAG_MAX_PARAMS);
for(int i = 1; i < LOGS_MANAG_MAX_PARAMS ; i++) {
char *keyword = get_word(words, num_words, i);
if(!keyword) break;
if(!strcmp(keyword, LOGS_MANAG_FUNC_PARAM_HELP)){
BUFFER *wb = buffer_create(0, NULL);
buffer_sprintf(wb, FUNCTION_LOGSMANAGEMENT_HELP_LONG);
netdata_mutex_lock(&stdout_mut);
pluginsd_function_result_to_stdout(transaction, HTTP_RESP_OK, "text/plain", now_realtime_sec() + 3600, wb);
netdata_mutex_unlock(&stdout_mut);
buffer_free(wb);
goto cleanup;
}
else if(!strcmp(keyword, LOGS_MANAG_FUNC_PARAM_INFO)){
info = true;
}
else if(!strcmp(keyword, LOGS_MANAG_FUNC_PARAM_PROGRESS)){
progress = true;
}
else if(strncmp(keyword, LOGS_MANAG_FUNC_PARAM_DELTA ":", sizeof(LOGS_MANAG_FUNC_PARAM_DELTA ":") - 1) == 0) {
char *v = &keyword[sizeof(LOGS_MANAG_FUNC_PARAM_DELTA ":") - 1];
if(strcmp(v, "false") == 0 || strcmp(v, "no") == 0 || strcmp(v, "0") == 0)
delta = false;
else
delta = true;
}
// else if(strncmp(keyword, JOURNAL_PARAMETER_TAIL ":", sizeof(JOURNAL_PARAMETER_TAIL ":") - 1) == 0) {
// char *v = &keyword[sizeof(JOURNAL_PARAMETER_TAIL ":") - 1];
// if(strcmp(v, "false") == 0 || strcmp(v, "no") == 0 || strcmp(v, "0") == 0)
// tail = false;
// else
// tail = true;
// }
else if(!strncmp( keyword,
LOGS_MANAG_FUNC_PARAM_DATA_ONLY ":",
sizeof(LOGS_MANAG_FUNC_PARAM_DATA_ONLY ":") - 1)) {
char *v = &keyword[sizeof(LOGS_MANAG_FUNC_PARAM_DATA_ONLY ":") - 1];
if(!strcmp(v, "false") || !strcmp(v, "no") || !strcmp(v, "0"))
data_only = false;
else
data_only = true;
}
// else if(strncmp(keyword, JOURNAL_PARAMETER_SLICE ":", sizeof(JOURNAL_PARAMETER_SLICE ":") - 1) == 0) {
// char *v = &keyword[sizeof(JOURNAL_PARAMETER_SLICE ":") - 1];
// if(strcmp(v, "false") == 0 || strcmp(v, "no") == 0 || strcmp(v, "0") == 0)
// slice = false;
// else
// slice = true;
// }
else if(strncmp(keyword, LOGS_MANAG_FUNC_PARAM_ID ":", sizeof(LOGS_MANAG_FUNC_PARAM_ID ":") - 1) == 0) {
char *id = &keyword[sizeof(LOGS_MANAG_FUNC_PARAM_ID ":") - 1];
if(*id)
progress_id = id;
}
else if(strncmp(keyword, LOGS_MANAG_FUNC_PARAM_SOURCE ":", sizeof(LOGS_MANAG_FUNC_PARAM_SOURCE ":") - 1) == 0) {
source = !strcmp("all", &keyword[sizeof(LOGS_MANAG_FUNC_PARAM_SOURCE ":") - 1]) ?
NULL : &keyword[sizeof(LOGS_MANAG_FUNC_PARAM_SOURCE ":") - 1];
}
else if(strncmp(keyword, LOGS_MANAG_FUNC_PARAM_AFTER ":", sizeof(LOGS_MANAG_FUNC_PARAM_AFTER ":") - 1) == 0) {
after_s = str2l(&keyword[sizeof(LOGS_MANAG_FUNC_PARAM_AFTER ":") - 1]);
}
else if(strncmp(keyword, LOGS_MANAG_FUNC_PARAM_BEFORE ":", sizeof(LOGS_MANAG_FUNC_PARAM_BEFORE ":") - 1) == 0) {
before_s = str2l(&keyword[sizeof(LOGS_MANAG_FUNC_PARAM_BEFORE ":") - 1]);
}
else if(strncmp(keyword, LOGS_MANAG_FUNC_PARAM_IF_MODIFIED_SINCE ":", sizeof(LOGS_MANAG_FUNC_PARAM_IF_MODIFIED_SINCE ":") - 1) == 0) {
if_modified_since = str2ull(&keyword[sizeof(LOGS_MANAG_FUNC_PARAM_IF_MODIFIED_SINCE ":") - 1], NULL);
}
else if(strncmp(keyword, LOGS_MANAG_FUNC_PARAM_ANCHOR ":", sizeof(LOGS_MANAG_FUNC_PARAM_ANCHOR ":") - 1) == 0) {
anchor = str2ull(&keyword[sizeof(LOGS_MANAG_FUNC_PARAM_ANCHOR ":") - 1], NULL);
}
else if(strncmp(keyword, LOGS_MANAG_FUNC_PARAM_DIRECTION ":", sizeof(LOGS_MANAG_FUNC_PARAM_DIRECTION ":") - 1) == 0) {
direction = !strcasecmp(&keyword[sizeof(LOGS_MANAG_FUNC_PARAM_DIRECTION ":") - 1], "forward") ?
FACETS_ANCHOR_DIRECTION_FORWARD : FACETS_ANCHOR_DIRECTION_BACKWARD;
}
else if(strncmp(keyword, LOGS_MANAG_FUNC_PARAM_LAST ":", sizeof(LOGS_MANAG_FUNC_PARAM_LAST ":") - 1) == 0) {
last = str2ul(&keyword[sizeof(LOGS_MANAG_FUNC_PARAM_LAST ":") - 1]);
}
else if(strncmp(keyword, LOGS_MANAG_FUNC_PARAM_QUERY ":", sizeof(LOGS_MANAG_FUNC_PARAM_QUERY ":") - 1) == 0) {
query= &keyword[sizeof(LOGS_MANAG_FUNC_PARAM_QUERY ":") - 1];
}
else if(strncmp(keyword, LOGS_MANAG_FUNC_PARAM_HISTOGRAM ":", sizeof(LOGS_MANAG_FUNC_PARAM_HISTOGRAM ":") - 1) == 0) {
chart = &keyword[sizeof(LOGS_MANAG_FUNC_PARAM_HISTOGRAM ":") - 1];
}
else if(strncmp(keyword, LOGS_MANAG_FUNC_PARAM_FACETS ":", sizeof(LOGS_MANAG_FUNC_PARAM_FACETS ":") - 1) == 0) {
char *value = &keyword[sizeof(LOGS_MANAG_FUNC_PARAM_FACETS ":") - 1];
if(*value) {
buffer_json_member_add_array(wb, LOGS_MANAG_FUNC_PARAM_FACETS);
while(value) {
char *sep = strchr(value, ',');
if(sep)
*sep++ = '\0';
facets_register_facet_id(facets, value, FACET_KEY_OPTION_FACET|FACET_KEY_OPTION_FTS|FACET_KEY_OPTION_REORDER);
buffer_json_add_array_item_string(wb, value);
value = sep;
}
buffer_json_array_close(wb); // LOGS_MANAG_FUNC_PARAM_FACETS
}
}
else {
char *value = strchr(keyword, ':');
if(value) {
*value++ = '\0';
buffer_json_member_add_array(wb, keyword);
while(value) {
char *sep = strchr(value, ',');
if(sep)
*sep++ = '\0';
facets_register_facet_id_filter(facets, keyword, value, FACET_KEY_OPTION_FACET|FACET_KEY_OPTION_FTS|FACET_KEY_OPTION_REORDER);
buffer_json_add_array_item_string(wb, value);
// filters++;
value = sep;
}
buffer_json_array_close(wb); // keyword
}
}
}
// ------------------------------------------------------------------------
// put this request into the progress db
if(progress_id && *progress_id) {
fqs_item = dictionary_set_and_acquire_item(function_query_status_dict, progress_id, &tmp_fqs, sizeof(tmp_fqs));
fqs = dictionary_acquired_item_value(fqs_item);
}
else {
// no progress id given, proceed without registering our progress in the dictionary
fqs = &tmp_fqs;
fqs_item = NULL;
}
// ------------------------------------------------------------------------
// validate parameters
time_t now_s = now_realtime_sec();
time_t expires = now_s + 1;
if(!after_s && !before_s) {
before_s = now_s;
after_s = before_s - LOGS_MANAGEMENT_DEFAULT_QUERY_DURATION_IN_SEC;
}
else
rrdr_relative_window_to_absolute(&after_s, &before_s, now_s);
if(after_s > before_s) {
time_t tmp = after_s;
after_s = before_s;
before_s = tmp;
}
if(after_s == before_s)
after_s = before_s - LOGS_MANAGEMENT_DEFAULT_QUERY_DURATION_IN_SEC;
if(!last)
last = LOGS_MANAGEMENT_DEFAULT_ITEMS_PER_QUERY;
// ------------------------------------------------------------------------
// set query time-frame, anchors and direction
fqs->after_ut = after_s * USEC_PER_SEC;
fqs->before_ut = (before_s * USEC_PER_SEC) + USEC_PER_SEC - 1;
fqs->if_modified_since = if_modified_since;
fqs->data_only = data_only;
fqs->delta = (fqs->data_only) ? delta : false;
fqs->tail = (fqs->data_only && fqs->if_modified_since) ? tail : false;
fqs->source = string_strdupz(source);
// fqs->source_type = source_type;
fqs->entries = last;
fqs->last_modified = 0;
// fqs->filters = filters;
fqs->query = (query && *query) ? query : NULL;
fqs->histogram = (chart && *chart) ? chart : NULL;
fqs->direction = direction;
fqs->anchor.start_ut = anchor;
fqs->anchor.stop_ut = 0;
if(fqs->anchor.start_ut && fqs->tail) {
// a tail request
// we need the top X entries from BEFORE
// but, we need to calculate the facets and the
// histogram up to the anchor
fqs->direction = direction = FACETS_ANCHOR_DIRECTION_BACKWARD;
fqs->anchor.start_ut = 0;
fqs->anchor.stop_ut = anchor;
}
if(anchor && anchor < fqs->after_ut) {
// log_fqs(fqs, "received anchor is too small for query timeframe, ignoring anchor");
anchor = 0;
fqs->anchor.start_ut = 0;
fqs->anchor.stop_ut = 0;
fqs->direction = direction = FACETS_ANCHOR_DIRECTION_BACKWARD;
}
else if(anchor > fqs->before_ut) {
// log_fqs(fqs, "received anchor is too big for query timeframe, ignoring anchor");
anchor = 0;
fqs->anchor.start_ut = 0;
fqs->anchor.stop_ut = 0;
fqs->direction = direction = FACETS_ANCHOR_DIRECTION_BACKWARD;
}
facets_set_anchor(facets, fqs->anchor.start_ut, fqs->anchor.stop_ut, fqs->direction);
facets_set_additional_options(facets,
((fqs->data_only) ? FACETS_OPTION_DATA_ONLY : 0) |
((fqs->delta) ? FACETS_OPTION_SHOW_DELTAS : 0));
// ------------------------------------------------------------------------
// set the rest of the query parameters
facets_set_items(facets, fqs->entries);
facets_set_query(facets, fqs->query);
// #ifdef HAVE_SD_JOURNAL_RESTART_FIELDS
// fqs->slice = slice;
// if(slice)
// facets_enable_slice_mode(facets);
// #else
// fqs->slice = false;
// #endif
if(fqs->histogram)
facets_set_timeframe_and_histogram_by_id(facets, fqs->histogram, fqs->after_ut, fqs->before_ut);
else
facets_set_timeframe_and_histogram_by_name(facets, chart ? chart : "chartname", fqs->after_ut, fqs->before_ut);
// ------------------------------------------------------------------------
// complete the request object
buffer_json_member_add_boolean(wb, LOGS_MANAG_FUNC_PARAM_INFO, false);
buffer_json_member_add_boolean(wb, LOGS_MANAG_FUNC_PARAM_SLICE, fqs->slice);
buffer_json_member_add_boolean(wb, LOGS_MANAG_FUNC_PARAM_DATA_ONLY, fqs->data_only);
buffer_json_member_add_boolean(wb, LOGS_MANAG_FUNC_PARAM_PROGRESS, false);
buffer_json_member_add_boolean(wb, LOGS_MANAG_FUNC_PARAM_DELTA, fqs->delta);
buffer_json_member_add_boolean(wb, LOGS_MANAG_FUNC_PARAM_TAIL, fqs->tail);
buffer_json_member_add_string(wb, LOGS_MANAG_FUNC_PARAM_ID, progress_id);
buffer_json_member_add_string(wb, LOGS_MANAG_FUNC_PARAM_SOURCE, string2str(fqs->source));
// buffer_json_member_add_uint64(wb, "source_type", fqs->source_type);
buffer_json_member_add_uint64(wb, LOGS_MANAG_FUNC_PARAM_AFTER, fqs->after_ut / USEC_PER_SEC);
buffer_json_member_add_uint64(wb, LOGS_MANAG_FUNC_PARAM_BEFORE, fqs->before_ut / USEC_PER_SEC);
buffer_json_member_add_uint64(wb, LOGS_MANAG_FUNC_PARAM_IF_MODIFIED_SINCE, fqs->if_modified_since);
buffer_json_member_add_uint64(wb, LOGS_MANAG_FUNC_PARAM_ANCHOR, anchor);
buffer_json_member_add_string(wb, LOGS_MANAG_FUNC_PARAM_DIRECTION,
fqs->direction == FACETS_ANCHOR_DIRECTION_FORWARD ? "forward" : "backward");
buffer_json_member_add_uint64(wb, LOGS_MANAG_FUNC_PARAM_LAST, fqs->entries);
buffer_json_member_add_string(wb, LOGS_MANAG_FUNC_PARAM_QUERY, fqs->query);
buffer_json_member_add_string(wb, LOGS_MANAG_FUNC_PARAM_HISTOGRAM, fqs->histogram);
buffer_json_object_close(wb); // request
// buffer_json_journal_versions(wb);
// ------------------------------------------------------------------------
// run the request
if(info) {
facets_accepted_parameters_to_json_array(facets, wb, false);
buffer_json_member_add_array(wb, "required_params");
{
buffer_json_add_array_item_object(wb);
{
buffer_json_member_add_string(wb, "id", "source");
buffer_json_member_add_string(wb, "name", "source");
buffer_json_member_add_string(wb, "help", "Select the Logs Management source to query");
buffer_json_member_add_string(wb, "type", "select");
buffer_json_member_add_array(wb, "options");
ret = fetch_log_sources(wb);
buffer_json_array_close(wb); // options array
}
buffer_json_object_close(wb); // required params object
}
buffer_json_array_close(wb); // required_params array
facets_table_config(wb);
buffer_json_member_add_uint64(wb, "status", HTTP_RESP_OK);
buffer_json_member_add_string(wb, "type", "table");
buffer_json_member_add_string(wb, "help", FUNCTION_LOGSMANAGEMENT_HELP_SHORT);
buffer_json_finalize(wb);
goto output;
}
if(progress) {
// TODO: Add progress function
// function_logsmanagement_progress(wb, transaction, progress_id);
goto cleanup;
}
if(!req_quota)
query_params.quota = LOGS_MANAG_QUERY_QUOTA_DEFAULT;
else if(req_quota > LOGS_MANAG_QUERY_QUOTA_MAX)
query_params.quota = LOGS_MANAG_QUERY_QUOTA_MAX;
else query_params.quota = req_quota;
if(fqs->source)
query_params.chartname[0] = (char *) string2str(fqs->source);
query_params.order_by_asc = 0;
// NOTE: Always perform descending timestamp query, req_from_ts >= req_to_ts.
if(fqs->direction == FACETS_ANCHOR_DIRECTION_BACKWARD){
query_params.req_from_ts =
(fqs->data_only && fqs->anchor.start_ut) ? fqs->anchor.start_ut / USEC_PER_MS : before_s * MSEC_PER_SEC;
query_params.req_to_ts =
(fqs->data_only && fqs->anchor.stop_ut) ? fqs->anchor.stop_ut / USEC_PER_MS : after_s * MSEC_PER_SEC;
}
else{
query_params.req_from_ts =
(fqs->data_only && fqs->anchor.stop_ut) ? fqs->anchor.stop_ut / USEC_PER_MS : before_s * MSEC_PER_SEC;
query_params.req_to_ts =
(fqs->data_only && fqs->anchor.start_ut) ? fqs->anchor.start_ut / USEC_PER_MS : after_s * MSEC_PER_SEC;
}
query_params.stop_monotonic_ut = now_monotonic_usec() + (timeout - 1) * USEC_PER_SEC;
query_params.results_buff = buffer_create(query_params.quota, NULL);
facets_rows_begin(facets);
do{
if(query_params.act_to_ts)
query_params.req_from_ts = query_params.act_to_ts - 1000;
ret = execute_logs_manag_query(&query_params);
size_t res_off = 0;
logs_query_res_hdr_t *p_res_hdr;
while(query_params.results_buff->len - res_off > 0){
p_res_hdr = (logs_query_res_hdr_t *) &query_params.results_buff->buffer[res_off];
ssize_t remaining = p_res_hdr->text_size;
char *ls = &query_params.results_buff->buffer[res_off] + sizeof(*p_res_hdr) + p_res_hdr->text_size - 1;
*ls = '\0';
int timestamp_off = p_res_hdr->matches;
do{
do{
--remaining;
--ls;
} while(remaining > 0 && *ls != '\n');
*ls = '\0';
--remaining;
--ls;
usec_t timestamp = p_res_hdr->timestamp * USEC_PER_MS + --timestamp_off;
if(unlikely(!fqs->last_modified)) {
if(timestamp == if_modified_since){
ret = &logs_qry_res_err[LOGS_QRY_RES_ERR_CODE_UNMODIFIED];
goto output;
}
else
fqs->last_modified = timestamp;
}
facets_add_key_value(facets, "log_source", p_res_hdr->log_source[0] ? p_res_hdr->log_source : "-");
facets_add_key_value(facets, "log_type", p_res_hdr->log_type[0] ? p_res_hdr->log_type : "-");
facets_add_key_value(facets, "filename", p_res_hdr->filename[0] ? p_res_hdr->filename : "-");
facets_add_key_value(facets, "basename", p_res_hdr->basename[0] ? p_res_hdr->basename : "-");
facets_add_key_value(facets, "chartname", p_res_hdr->chartname[0] ? p_res_hdr->chartname : "-");
size_t ls_len = strlen(ls + 2);
facets_add_key_value_length(facets, "message", sizeof("message") - 1,
ls + 2, ls_len <= FACET_MAX_VALUE_LENGTH ? ls_len : FACET_MAX_VALUE_LENGTH);
facets_row_finished(facets, timestamp);
} while(remaining > 0);
res_off += sizeof(*p_res_hdr) + p_res_hdr->text_size;
}
buffer_flush(query_params.results_buff);
} while(query_params.act_to_ts > query_params.req_to_ts);
m_assert(query_params.req_from_ts == query_params.act_from_ts, "query_params.req_from_ts != query_params.act_from_ts");
m_assert(query_params.req_to_ts == query_params.act_to_ts , "query_params.req_to_ts != query_params.act_to_ts");
getrusage(RUSAGE_THREAD, &end);
time_t user_time = end.ru_utime.tv_sec * USEC_PER_SEC + end.ru_utime.tv_usec -
start.ru_utime.tv_sec * USEC_PER_SEC - start.ru_utime.tv_usec;
time_t sys_time = end.ru_stime.tv_sec * USEC_PER_SEC + end.ru_stime.tv_usec -
start.ru_stime.tv_sec * USEC_PER_SEC - start.ru_stime.tv_usec;
buffer_json_member_add_object(wb, "logs_management_meta");
buffer_json_member_add_string(wb, "api_version", LOGS_QRY_VERSION);
buffer_json_member_add_uint64(wb, "num_lines", query_params.num_lines);
buffer_json_member_add_uint64(wb, "user_time", user_time);
buffer_json_member_add_uint64(wb, "system_time", sys_time);
buffer_json_member_add_uint64(wb, "total_time", user_time + sys_time);
buffer_json_member_add_uint64(wb, "error_code", (uint64_t) ret->err_code);
buffer_json_member_add_string(wb, "error_string", ret->err_str);
buffer_json_object_close(wb); // logs_management_meta
buffer_json_member_add_uint64(wb, "status", ret->http_code);
buffer_json_member_add_boolean(wb, "partial", ret->http_code != HTTP_RESP_OK);
buffer_json_member_add_string(wb, "type", "table");
if(!fqs->data_only) {
buffer_json_member_add_time_t(wb, "update_every", 1);
buffer_json_member_add_string(wb, "help", FUNCTION_LOGSMANAGEMENT_HELP_SHORT);
}
if(!fqs->data_only || fqs->tail)
buffer_json_member_add_uint64(wb, "last_modified", fqs->last_modified);
facets_sort_and_reorder_keys(facets);
facets_report(facets, wb, used_hashes_registry);
buffer_json_member_add_time_t(wb, "expires", now_realtime_sec() + (fqs->data_only ? 3600 : 0));
buffer_json_finalize(wb);
// ------------------------------------------------------------------------
// cleanup query params
string_freez(fqs->source);
fqs->source = NULL;
// ------------------------------------------------------------------------
// handle error response
output:
netdata_mutex_lock(&stdout_mut);
if(ret->http_code != HTTP_RESP_OK)
pluginsd_function_json_error_to_stdout(transaction, ret->http_code, ret->err_str);
else
pluginsd_function_result_to_stdout(transaction, ret->http_code, "application/json", expires, wb);
netdata_mutex_unlock(&stdout_mut);
cleanup:
facets_destroy(facets);
buffer_free(query_params.results_buff);
buffer_free(wb);
if(fqs_item) {
dictionary_del(function_query_status_dict, dictionary_acquired_item_name(fqs_item));
dictionary_acquired_item_release(function_query_status_dict, fqs_item);
dictionary_garbage_collect(function_query_status_dict);
}
}
struct functions_evloop_globals *logsmanagement_func_facets_init(bool *p_logsmanagement_should_exit){
function_query_status_dict = dictionary_create_advanced(
DICT_OPTION_DONT_OVERWRITE_VALUE | DICT_OPTION_FIXED_SIZE,
NULL, sizeof(FUNCTION_QUERY_STATUS));
used_hashes_registry = dictionary_create(DICT_OPTION_DONT_OVERWRITE_VALUE);
netdata_mutex_lock(&stdout_mut);
fprintf(stdout, PLUGINSD_KEYWORD_FUNCTION " GLOBAL \"%s\" %d \"%s\"\n",
LOGS_MANAG_FUNC_NAME,
LOGS_MANAG_QUERY_TIMEOUT_DEFAULT,
FUNCTION_LOGSMANAGEMENT_HELP_SHORT);
netdata_mutex_unlock(&stdout_mut);
struct functions_evloop_globals *wg = functions_evloop_init(1, "LGSMNGM",
&stdout_mut,
p_logsmanagement_should_exit);
functions_evloop_add_function( wg, LOGS_MANAG_FUNC_NAME,
logsmanagement_function_facets,
LOGS_MANAG_QUERY_TIMEOUT_DEFAULT);
return wg;
}


@@ -0,0 +1,22 @@
// SPDX-License-Identifier: GPL-3.0-or-later
/** @file functions.h
* @brief Header of functions.c
*/
#ifndef FUNCTIONS_H_
#define FUNCTIONS_H_
#include "../database/rrdfunctions.h"
#define LOGS_MANAG_FUNC_NAME "logs-management"
#define FUNCTION_LOGSMANAGEMENT_HELP_SHORT "View, search and analyze logs monitored through the logs management engine."
int logsmanagement_function_execute_cb( BUFFER *dest_wb, int timeout,
const char *function, void *collector_data,
void (*callback)(BUFFER *wb, int code, void *callback_data),
void *callback_data);
struct functions_evloop_globals *logsmanagement_func_facets_init(bool *p_logsmanagement_should_exit);
#endif // FUNCTIONS_H_

logsmanagement/helper.h Normal file

@@ -0,0 +1,238 @@
// SPDX-License-Identifier: GPL-3.0-or-later
/** @file helper.h
* @brief Includes helper functions for the Logs Management project.
*/
#ifndef HELPER_H_
#define HELPER_H_
#include "libnetdata/libnetdata.h"
#include <assert.h>
#define LOGS_MANAGEMENT_PLUGIN_STR "logs-management.plugin"
#define LOGS_MANAG_STR_HELPER(x) #x
#define LOGS_MANAG_STR(x) LOGS_MANAG_STR_HELPER(x)
#ifndef m_assert
#if defined(LOGS_MANAGEMENT_STRESS_TEST)
#define m_assert(expr, msg) assert(((void)(msg), (expr)))
#else
#define m_assert(expr, msg) do{} while(0)
#endif // LOGS_MANAGEMENT_STRESS_TEST
#endif // m_assert
/* Test if a timestamp is within a valid range
* 1649175852000 equals Tuesday, 5 April 2022 16:24:12,
* 2532788652000 equals Tuesday, 5 April 2050 16:24:12
*/
#define TEST_MS_TIMESTAMP_VALID(x) (((x) > 1649175852000 && (x) < 2532788652000)? 1:0)
#define TIMESTAMP_MS_STR_SIZE sizeof("1649175852000")
#ifdef ENABLE_LOGSMANAGEMENT_TESTS
#define UNIT_STATIC
#else
#define UNIT_STATIC static
#endif // ENABLE_LOGSMANAGEMENT_TESTS
#ifndef COMPILE_TIME_ASSERT // https://stackoverflow.com/questions/3385515/static-assert-in-c
#define STATIC_ASSERT(COND,MSG) typedef char static_assertion_##MSG[(!!(COND))*2-1]
// token pasting madness:
#define COMPILE_TIME_ASSERT3(X,L) STATIC_ASSERT(X,static_assertion_at_line_##L)
#define COMPILE_TIME_ASSERT2(X,L) COMPILE_TIME_ASSERT3(X,L)
#define COMPILE_TIME_ASSERT(X) COMPILE_TIME_ASSERT2(X,__LINE__)
#endif // COMPILE_TIME_ASSERT
#if defined(NETDATA_INTERNAL_CHECKS) && defined(LOGS_MANAGEMENT_STRESS_TEST)
#define debug_log(args...) netdata_logger(NDLS_COLLECTORS, NDLP_DEBUG, __FILE__, __FUNCTION__, __LINE__, ##args)
#else
#define debug_log(fmt, args...) do {} while(0)
#endif
/**
* @brief Extract file_basename from full file path
* @param path String containing the full path.
* @return Newly allocated copy of the file basename (to be freed by the caller)
*/
static inline char *get_basename(const char *const path) {
if(!path) return NULL;
char *s = strrchr(path, '/');
if (!s)
return strdupz(path);
else
return strdupz(s + 1);
}
typedef enum {
STR2XX_SUCCESS = 0,
STR2XX_OVERFLOW,
STR2XX_UNDERFLOW,
STR2XX_INCONVERTIBLE
} str2xx_errno;
/* Convert string s to int out.
* https://stackoverflow.com/questions/7021725/how-to-convert-a-string-to-integer-in-c
*
* @param[out] out The converted int. Cannot be NULL.
* @param[in] s Input string to be converted. Cannot be NULL.
*
* The format is the same as strtol,
* except that the following are inconvertible:
* - empty string
* - leading whitespace
* - any trailing characters that are not part of the number
*
* @param[in] base Base to interpret string in. Same range as strtol (2 to 36).
* @return Indicates if the operation succeeded, or why it failed.
*/
static inline str2xx_errno str2int(int *out, char *s, int base) {
char *end;
if (unlikely(s[0] == '\0' || isspace(s[0]))){
// debug_log( "str2int error: STR2XX_INCONVERTIBLE 1");
// m_assert(0, "str2int error: STR2XX_INCONVERTIBLE");
return STR2XX_INCONVERTIBLE;
}
errno = 0;
long l = strtol(s, &end, base);
/* Both checks are needed because INT_MAX == LONG_MAX is possible. */
if (unlikely(l > INT_MAX || (errno == ERANGE && l == LONG_MAX))){
debug_log( "str2int error: STR2XX_OVERFLOW");
// m_assert(0, "str2int error: STR2XX_OVERFLOW");
return STR2XX_OVERFLOW;
}
if (unlikely(l < INT_MIN || (errno == ERANGE && l == LONG_MIN))){
debug_log( "str2int error: STR2XX_UNDERFLOW");
// m_assert(0, "str2int error: STR2XX_UNDERFLOW");
return STR2XX_UNDERFLOW;
}
if (unlikely(*end != '\0')){
debug_log( "str2int error: STR2XX_INCONVERTIBLE 2");
// m_assert(0, "str2int error: STR2XX_INCONVERTIBLE 2");
return STR2XX_INCONVERTIBLE;
}
*out = l;
return STR2XX_SUCCESS;
}
static inline str2xx_errno str2float(float *out, char *s) {
char *end;
if (unlikely(s[0] == '\0' || isspace(s[0]))){
// debug_log( "str2float error: STR2XX_INCONVERTIBLE 1\n");
// m_assert(0, "str2float error: STR2XX_INCONVERTIBLE");
return STR2XX_INCONVERTIBLE;
}
errno = 0;
float f = strtof(s, &end);
/* Check for out-of-range results, as per strtof() overflow/underflow behaviour. */
if (unlikely((errno == ERANGE && f == HUGE_VALF))){
debug_log( "str2float error: STR2XX_OVERFLOW\n");
// m_assert(0, "str2float error: STR2XX_OVERFLOW");
return STR2XX_OVERFLOW;
}
if (unlikely((errno == ERANGE && f == -HUGE_VALF))){
debug_log( "str2float error: STR2XX_UNDERFLOW\n");
// m_assert(0, "str2float error: STR2XX_UNDERFLOW");
return STR2XX_UNDERFLOW;
}
if (unlikely((*end != '\0'))){
debug_log( "str2float error: STR2XX_INCONVERTIBLE 2\n");
// m_assert(0, "str2float error: STR2XX_INCONVERTIBLE");
return STR2XX_INCONVERTIBLE;
}
*out = f;
return STR2XX_SUCCESS;
}
/**
* @brief Read last line of *filename, up to max_line_width characters.
* @note This function is not particularly efficient, so it should be used
* sparingly; it is a quick-and-dirty way of reading the last line of a file.
* @param[in] filename File to be read.
* @param[in] max_line_width Integer indicating the max line width to be read.
* If a line is longer than that, it will be truncated. If zero or negative, a
* default value will be used instead.
* @return Pointer to a newly allocated string holding the line that was read
* (to be freed by the caller), or NULL on error.
*/
static inline char *read_last_line(const char *filename, int max_line_width){
uv_fs_t req;
int64_t start_pos, end_pos;
uv_file file_handle = -1;
uv_buf_t uvBuf;
char *buff = NULL;
int rc, line_pos = -1, bytes_read;
max_line_width = max_line_width > 0 ? max_line_width : 1024; // 1024 == default value
rc = uv_fs_stat(NULL, &req, filename, NULL);
end_pos = req.statbuf.st_size;
uv_fs_req_cleanup(&req);
if (unlikely(rc)) {
collector_error("[%s]: uv_fs_stat() error: (%d) %s", filename, rc, uv_strerror(rc));
m_assert(0, "uv_fs_stat() failed during read_last_line()");
goto error;
}
if(end_pos == 0) goto error;
start_pos = end_pos - max_line_width;
if(start_pos < 0) start_pos = 0;
rc = uv_fs_open(NULL, &req, filename, O_RDONLY, 0, NULL);
uv_fs_req_cleanup(&req);
if (unlikely(rc < 0)) {
collector_error("[%s]: uv_fs_open() error: (%d) %s",filename, rc, uv_strerror(rc));
m_assert(0, "uv_fs_open() failed during read_last_line()");
goto error;
}
file_handle = rc;
buff = callocz(1, (size_t) (end_pos - start_pos + 1) * sizeof(char));
uvBuf = uv_buf_init(buff, (unsigned int) (end_pos - start_pos));
rc = uv_fs_read(NULL, &req, file_handle, &uvBuf, 1, start_pos, NULL);
uv_fs_req_cleanup(&req);
if (unlikely(rc < 0)){
collector_error("[%s]: uv_fs_read() error: (%d) %s", filename, rc, uv_strerror(rc));
m_assert(0, "uv_fs_read() failed during read_last_line()");
goto error;
}
bytes_read = rc;
buff[bytes_read] = '\0';
for(int i = bytes_read - 2; i >= 0; i--){ // -2 because -1 could be '\n'
if (buff[i] == '\n'){
line_pos = i;
break;
}
}
if(line_pos >= 0){
char *line = callocz(1, (size_t) (bytes_read - line_pos) * sizeof(char));
memcpy(line, &buff[line_pos + 1], (size_t) (bytes_read - line_pos));
freez(buff);
uv_fs_close(NULL, &req, file_handle, NULL);
return line;
}
if(start_pos == 0){
uv_fs_close(NULL, &req, file_handle, NULL);
return buff;
}
error:
if(buff) freez(buff);
if(file_handle >= 0) uv_fs_close(NULL, &req, file_handle, NULL);
return NULL;
}
static inline void memcpy_iscntrl_fix(char *dest, char *src, size_t num){
while(num--){
*dest++ = unlikely(!iscntrl(*src)) ? *src : ' ';
src++;
}
}
#endif // HELPER_H_
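A minimal caller sketch for the helpers above (hypothetical snippet, not part of the diff; it only assumes the declarations in helper.h and libnetdata, and the log file path is illustrative):

#include "helper.h"

static void helper_usage_example(void) {
    // str2int() rejects empty strings, leading whitespace and trailing garbage
    int port = 0;
    if (str2int(&port, "8080", 10) == STR2XX_SUCCESS)
        debug_log("parsed port: %d", port);

    // read_last_line() returns a newly allocated string (or NULL on error)
    char *last = read_last_line("/var/log/syslog", 0 /* 0 => use default width */);
    if (last) {
        debug_log("last line: %s", last);
        freez(last);
    }
}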

File diff suppressed because it is too large


@@ -0,0 +1,31 @@
// SPDX-License-Identifier: GPL-3.0-or-later
/** @file logsmanag_config.h
* @brief Header of logsmanag_config.c
*/
#include "file_info.h"
#include "flb_plugin.h"
char *get_user_config_dir(void);
char *get_stock_config_dir(void);
char *get_log_dir(void);
char *get_cache_dir(void);
void p_file_info_destroy_all(void);
#define LOGS_MANAG_CONFIG_LOAD_ERROR_OK 0
#define LOGS_MANAG_CONFIG_LOAD_ERROR_NO_STOCK_CONFIG -1
#define LOGS_MANAG_CONFIG_LOAD_ERROR_P_FLB_SRVC_NULL -2
int logs_manag_config_load( flb_srvc_config_t *p_flb_srvc_config,
Flb_socket_config_t **forward_in_config_p,
int g_update_every);
void config_file_load( uv_loop_t *main_loop,
Flb_socket_config_t *p_forward_in_config,
flb_srvc_config_t *p_flb_srvc_config,
netdata_mutex_t *stdout_mut);


@@ -0,0 +1,253 @@
// SPDX-License-Identifier: GPL-3.0-or-later
/** @file logsmanagement.c
* @brief This is the main file of the Netdata logs management project
*
* The aim of the project is to add the capability to collect, parse and
* query logs in the Netdata agent. For more information please refer
* to the project's [README](README.md) file.
*/
#include <uv.h>
#include "daemon/common.h"
#include "db_api.h"
#include "file_info.h"
#include "flb_plugin.h"
#include "functions.h"
#include "helper.h"
#include "libnetdata/required_dummies.h"
#include "logsmanag_config.h"
#include "rrd_api/rrd_api_stats.h"
#if defined(ENABLE_LOGSMANAGEMENT_TESTS)
#include "logsmanagement/unit_test/unit_test.h"
#endif
#if defined(LOGS_MANAGEMENT_STRESS_TEST) && LOGS_MANAGEMENT_STRESS_TEST == 1
#include "query_test.h"
#endif // defined(LOGS_MANAGEMENT_STRESS_TEST)
netdata_mutex_t stdout_mut = NETDATA_MUTEX_INITIALIZER;
bool logsmanagement_should_exit = false;
struct File_infos_arr *p_file_infos_arr = NULL;
static uv_loop_t *main_loop;
static uv_thread_t stats_charts_thread_id;
static struct {
uv_signal_t sig;
const int signum;
} signals[] = {
// Add here signals that will terminate the plugin
{.signum = SIGINT},
{.signum = SIGQUIT},
{.signum = SIGPIPE},
{.signum = SIGTERM}
};
static void signal_handler(uv_signal_t *handle, int signum __maybe_unused) {
UNUSED(handle);
debug_log("Signal received: %d\n", signum);
__atomic_store_n(&logsmanagement_should_exit, true, __ATOMIC_RELAXED);
}
static void on_walk_cleanup(uv_handle_t* handle, void* data){
UNUSED(data);
if (!uv_is_closing(handle))
uv_close(handle, NULL);
}
/**
* @brief The main function of the logs management plugin.
* @details Any static asserts are most likely going to be included here. After
* any initialisation routines, the default uv_loop_t is executed indefinitely.
*/
int main(int argc, char **argv) {
/* Static asserts */
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wunused-local-typedefs"
COMPILE_TIME_ASSERT(SAVE_BLOB_TO_DB_MIN <= SAVE_BLOB_TO_DB_MAX);
COMPILE_TIME_ASSERT(CIRCULAR_BUFF_DEFAULT_MAX_SIZE >= CIRCULAR_BUFF_MAX_SIZE_RANGE_MIN);
COMPILE_TIME_ASSERT(CIRCULAR_BUFF_DEFAULT_MAX_SIZE <= CIRCULAR_BUFF_MAX_SIZE_RANGE_MAX);
#pragma GCC diagnostic pop
clocks_init();
program_name = LOGS_MANAGEMENT_PLUGIN_STR;
nd_log_initialize_for_external_plugins(program_name);
// netdata_configured_host_prefix = getenv("NETDATA_HOST_PREFIX");
// if(verify_netdata_host_prefix() == -1) exit(1);
int g_update_every = 0;
for(int i = 1; i < argc ; i++) {
if(isdigit(*argv[i]) && !g_update_every && str2i(argv[i]) > 0 && str2i(argv[i]) < 86400) {
g_update_every = str2i(argv[i]);
debug_log("new update_every received: %d", g_update_every);
}
else if(!strcmp("--unittest", argv[i])) {
#if defined(ENABLE_LOGSMANAGEMENT_TESTS)
exit(logs_management_unittest());
#else
collector_error("%s was not built with unit test support.", program_name);
#endif
}
else if(!strcmp("version", argv[i]) ||
!strcmp("-version", argv[i]) ||
!strcmp("--version", argv[i]) ||
!strcmp("-v", argv[i]) ||
!strcmp("-V", argv[i])) {
printf(VERSION"\n");
exit(0);
}
else if(!strcmp("-h", argv[i]) ||
!strcmp("--help", argv[i])) {
fprintf(stderr,
"\n"
" netdata %s %s\n"
" Copyright (C) 2023 Netdata Inc.\n"
" Released under GNU General Public License v3 or later.\n"
" All rights reserved.\n"
"\n"
" This program is the logs management plugin for netdata.\n"
"\n"
" Available command line options:\n"
"\n"
" --unittest run unit tests and exit\n"
"\n"
" -v\n"
" -V\n"
" --version print version and exit\n"
"\n"
" -h\n"
" --help print this message and exit\n"
"\n"
" For more information:\n"
" https://github.com/netdata/netdata/tree/master/collectors/logs-management.plugin\n"
"\n",
program_name,
VERSION
);
exit(1);
}
else
collector_error("%s(): ignoring parameter '%s'", __FUNCTION__, argv[i]);
}
Flb_socket_config_t *p_forward_in_config = NULL;
main_loop = mallocz(sizeof(uv_loop_t));
fatal_assert(uv_loop_init(main_loop) == 0);
flb_srvc_config_t flb_srvc_config = {
.flush = FLB_FLUSH_DEFAULT,
.http_listen = FLB_HTTP_LISTEN_DEFAULT,
.http_port = FLB_HTTP_PORT_DEFAULT,
.http_server = FLB_HTTP_SERVER_DEFAULT,
.log_path = "NULL",
.log_level = FLB_LOG_LEVEL_DEFAULT,
.coro_stack_size = FLB_CORO_STACK_SIZE_DEFAULT
};
p_file_infos_arr = callocz(1, sizeof(struct File_infos_arr));
if(logs_manag_config_load(&flb_srvc_config, &p_forward_in_config, g_update_every))
exit(1);
if(flb_init(flb_srvc_config, get_stock_config_dir())){
collector_error("flb_init() failed - logs management will be disabled");
exit(1);
}
if(flb_add_fwd_input(p_forward_in_config))
collector_error("flb_add_fwd_input() failed - logs management forward input will be disabled");
/* Initialize logs management for each configuration section */
config_file_load(main_loop, p_forward_in_config, &flb_srvc_config, &stdout_mut);
if(p_file_infos_arr->count == 0){
collector_info("No valid configuration could be found for any log source - logs management will be disabled");
exit(1);
}
/* Run Fluent Bit engine
* NOTE: flb_run() ideally would be executed after db_init(), but in case of
* a db_init() failure, it is easier to call flb_stop_and_cleanup() rather
* than the other way round (i.e. cleaning up after db_init(), if flb_run()
* fails). */
if(flb_run()){
collector_error("flb_run() failed - logs management will be disabled");
exit(1);
}
if(db_init()){
collector_error("db_init() failed - logs management will be disabled");
exit(1);
}
fatal_assert(0 == uv_thread_create(&stats_charts_thread_id, stats_charts_init, &stdout_mut));
#if defined(__STDC_VERSION__)
debug_log( "__STDC_VERSION__: %ld", __STDC_VERSION__);
#else
debug_log( "__STDC_VERSION__ undefined");
#endif // defined(__STDC_VERSION__)
debug_log( "libuv version: %s", uv_version_string());
debug_log( "LZ4 version: %s", LZ4_versionString());
debug_log( "SQLITE version: " SQLITE_VERSION);
#if defined(LOGS_MANAGEMENT_STRESS_TEST) && LOGS_MANAGEMENT_STRESS_TEST == 1
debug_log( "Running Netdata with logs_management stress test enabled!");
static uv_thread_t run_stress_test_queries_thread_id;
uv_thread_create(&run_stress_test_queries_thread_id, run_stress_test_queries_thread, NULL);
#endif // LOGS_MANAGEMENT_STRESS_TEST
for(int i = 0; i < (int) (sizeof(signals) / sizeof(signals[0])); i++){
uv_signal_init(main_loop, &signals[i].sig);
uv_signal_start(&signals[i].sig, signal_handler, signals[i].signum);
}
struct functions_evloop_globals *wg = logsmanagement_func_facets_init(&logsmanagement_should_exit);
collector_info("%s setup completed successfully", program_name);
/* Run uvlib loop. */
while(!__atomic_load_n(&logsmanagement_should_exit, __ATOMIC_RELAXED))
uv_run(main_loop, UV_RUN_ONCE);
/* If there are valid log sources, there should always be valid handles */
collector_info("uv_run(main_loop, ...); no handles or requests - cleaning up...");
nd_log_limits_unlimited();
// TODO: Clean up stats charts memory
uv_thread_join(&stats_charts_thread_id);
uv_stop(main_loop);
flb_terminate();
flb_free_fwd_input_out_cb();
p_file_info_destroy_all();
uv_walk(main_loop, on_walk_cleanup, NULL);
while(0 != uv_run(main_loop, UV_RUN_ONCE));
if(uv_loop_close(main_loop))
m_assert(0, "uv_loop_close() result not 0");
freez(main_loop);
functions_evloop_cancel_threads(wg);
collector_info("logs management clean up done - exiting");
exit(0);
}

logsmanagement/parser.c Normal file
File diff suppressed because it is too large

logsmanagement/parser.h Normal file

@@ -0,0 +1,436 @@
// SPDX-License-Identifier: GPL-3.0-or-later
/** @file parser.h
* @brief Header of parser.c
*/
#ifndef PARSER_H_
#define PARSER_H_
#include <regex.h>
#include "daemon/common.h"
#include "libnetdata/libnetdata.h"
// Forward declaration
typedef struct log_parser_metrics Log_parser_metrics_t;
/* -------------------------------------------------------------------------- */
/* Configuration-related */
/* -------------------------------------------------------------------------- */
typedef enum{
CHART_COLLECTED_LOGS_TOTAL = 1 << 0,
CHART_COLLECTED_LOGS_RATE = 1 << 1,
/* FLB_WEB_LOG charts */
CHART_VHOST = 1 << 2,
CHART_PORT = 1 << 3,
CHART_IP_VERSION = 1 << 4,
CHART_REQ_CLIENT_CURRENT = 1 << 5,
CHART_REQ_CLIENT_ALL_TIME = 1 << 6,
CHART_REQ_METHODS = 1 << 7,
CHART_REQ_PROTO = 1 << 8,
CHART_BANDWIDTH = 1 << 9,
CHART_REQ_PROC_TIME = 1 << 10,
CHART_RESP_CODE_FAMILY = 1 << 11,
CHART_RESP_CODE = 1 << 12,
CHART_RESP_CODE_TYPE = 1 << 13,
CHART_SSL_PROTO = 1 << 14,
CHART_SSL_CIPHER = 1 << 15,
/* FLB_SYSTEMD or FLB_SYSLOG charts */
CHART_SYSLOG_PRIOR = 1 << 16,
CHART_SYSLOG_SEVER = 1 << 17,
CHART_SYSLOG_FACIL = 1 << 18,
/* FLB_KMSG charts */
CHART_KMSG_SUBSYSTEM = 1 << 19,
CHART_KMSG_DEVICE = 1 << 20,
/* FLB_DOCKER_EV charts */
CHART_DOCKER_EV_TYPE = 1 << 21,
CHART_DOCKER_EV_ACTION = 1 << 22,
/* FLB_MQTT charts*/
CHART_MQTT_TOPIC = 1 << 23
} chart_type_t;
typedef struct log_parser_config{
void *gen_config; /**< Pointer to (optional) generic configuration, as per use case. */
unsigned long int chart_config; /**< Configuration of which charts to enable according to chart_type_t **/
} Log_parser_config_t;
/* -------------------------------------------------------------------------- */
/* -------------------------------------------------------------------------- */
/* Web Log parsing and metrics */
/* -------------------------------------------------------------------------- */
#define VHOST_MAX_LEN 255 /**< Max vhost string length, including terminating \0 **/
#define PORT_MAX_LEN 6 /**< Max port string length, including terminating \0 **/
#define REQ_SCHEME_MAX_LEN 6 /**< Max request scheme length, including terminating \0 **/
#define REQ_CLIENT_MAX_LEN 46 /**< https://superuser.com/questions/381022/how-many-characters-can-an-ip-address-be#comment2219013_381029 **/
#define REQ_METHOD_MAX_LEN 18 /**< Max request method length, including terminating \0 **/
#define REQ_URL_MAX_LEN 128 /**< Max request URL length, including terminating \0 **/
#define REQ_PROTO_PREF_SIZE (sizeof("HTTP/") - 1)
#define REQ_PROTO_MAX_LEN 4 /**< Max request protocol numerical part length, including terminating \0 **/
#define REQ_SIZE_MAX_LEN 11 /**< Max size of bytes received, including terminating \0 **/
#define REQ_PROC_TIME_MAX_LEN 11 /**< Max size of request processing time, including terminating \0 **/
#define REQ_RESP_CODE_MAX_LEN 4 /**< Max size of response code, including terminating \0 **/
#define REQ_RESP_SIZE_MAX_LEN 11 /**< Max size of request response size, including terminating \0 **/
#define UPS_RESP_TIME_MAX_LEN 10 /**< Max size of upstream response time, including terminating \0 **/
#define SSL_PROTO_MAX_LEN 8 /**< Max SSL protocol length, including terminating \0 **/
#define SSL_CIPHER_SUITE_MAX_LEN 256 /**< TODO: Check max len for ssl cipher suite string is indeed 256 **/
#define RESP_CODE_ARR_SIZE 501 /**< Size of resp_code array, assuming 500 valid resp codes + 1 for "other" **/
#define WEB_LOG_INVALID_HOST_STR "invalid"
#define WEB_LOG_INVALID_PORT -1
#define WEB_LOG_INVALID_PORT_STR "inv"
#define WEB_LOG_INVALID_CLIENT_IP_STR WEB_LOG_INVALID_PORT_STR
/* Web log configuration */
#define ENABLE_PARSE_WEB_LOG_LINE_DEBUG 0
#define VHOST_BUFFS_SCALE_FACTOR 1.5
#define PORT_BUFFS_SCALE_FACTOR 8 // Unlike Vhosts, ports are stored as integers, so scale factor can be bigger
typedef enum{
VHOST_WITH_PORT, // nginx: $host:$server_port apache: %v:%p
VHOST, // nginx: $host ($http_host) apache: %v
PORT, // nginx: $server_port apache: %p
REQ_SCHEME, // nginx: $scheme apache: -
REQ_CLIENT, // nginx: $remote_addr apache: %a (%h)
REQ, // nginx: $request apache: %r
REQ_METHOD, // nginx: $request_method apache: %m
REQ_URL, // nginx: $request_uri apache: %U
REQ_PROTO, // nginx: $server_protocol apache: %H
REQ_SIZE, // nginx: $request_length apache: %I
REQ_PROC_TIME, // nginx: $request_time apache: %D
RESP_CODE, // nginx: $status apache: %s, %>s
RESP_SIZE, // nginx: $bytes_sent, $body_bytes_sent apache: %b, %O, %B // TODO: Should separate %b from %O ?
UPS_RESP_TIME, // nginx: $upstream_response_time apache: -
SSL_PROTO, // nginx: $ssl_protocol apache: -
SSL_CIPHER_SUITE, // nginx: $ssl_cipher apache: -
TIME, // nginx: $time_local apache: %t
CUSTOM
} web_log_line_field_t;
typedef struct web_log_parser_config{
web_log_line_field_t *fields;
int num_fields; /**< Number of strings in the fields array. **/
char delimiter; /**< Delimiter that separates the fields in the log format. **/
int verify_parsed_logs; /**< Boolean whether to try and verify parsed log fields or not **/
int skip_timestamp_parsing; /**< Boolean whether to skip parsing of timestamp fields **/
} Web_log_parser_config_t;
static const char *const req_method_str[] = {
"ACL",
"BASELINE-CONTROL",
"BIND",
"CHECKIN",
"CHECKOUT",
"CONNECT",
"COPY",
"DELETE",
"GET",
"HEAD",
"LABEL",
"LINK",
"LOCK",
"MERGE",
"MKACTIVITY",
"MKCALENDAR",
"MKCOL",
"MKREDIRECTREF",
"MKWORKSPACE",
"MOVE",
"OPTIONS",
"ORDERPATCH",
"PATCH",
"POST",
"PRI",
"PROPFIND",
"PROPPATCH",
"PUT",
"REBIND",
"REPORT",
"SEARCH",
"TRACE",
"UNBIND",
"UNCHECKOUT",
"UNLINK",
"UNLOCK",
"UPDATE",
"UPDATEREDIRECTREF",
"-"
};
#define REQ_METHOD_ARR_SIZE (int)(sizeof(req_method_str) / sizeof(req_method_str[0]))
typedef struct web_log_metrics{
/* Web log metrics */
struct log_parser_metrics_vhosts_array{
struct log_parser_metrics_vhost{
char name[VHOST_MAX_LEN]; /**< Name of the vhost **/
int count; /**< Occurrences of the vhost **/
} *vhosts;
int size; /**< Size of vhosts array **/
int size_max;
} vhost_arr;
struct log_parser_metrics_ports_array{
struct log_parser_metrics_port{
char name[PORT_MAX_LEN]; /**< Port number as a string **/
int port; /**< Port number **/
int count; /**< Occurrences of the port **/
} *ports;
int size; /**< Size of ports array **/
int size_max;
} port_arr;
struct log_parser_metrics_ip_ver{
int v4, v6, invalid;
} ip_ver;
/**< req_clients_current_arr is used by parser.c to save unique client IPs
* extracted per circular buffer item and also in p_file_info to save unique
* client IPs per collection (poll) iteration of plugin_logsmanagement.c.
* req_clients_alltime_arr is used in p_file_info to save unique client IPs
* of all time (and so ipv4_size and ipv6_size can only grow and are never reset to 0). **/
struct log_parser_metrics_req_clients_array{
char (*ipv4_req_clients)[REQ_CLIENT_MAX_LEN];
int ipv4_size;
int ipv4_size_max;
char (*ipv6_req_clients)[REQ_CLIENT_MAX_LEN];
int ipv6_size;
int ipv6_size_max;
} req_clients_current_arr, req_clients_alltime_arr;
int req_method[REQ_METHOD_ARR_SIZE];
struct log_parser_metrics_req_proto{
int http_1, http_1_1, http_2, other;
} req_proto;
struct log_parser_metrics_bandwidth{
long long req_size, resp_size;
} bandwidth;
struct log_parser_metrics_req_proc_time{
int min, max, sum, count;
} req_proc_time;
struct log_parser_metrics_resp_code_family{
int resp_1xx, resp_2xx, resp_3xx, resp_4xx, resp_5xx, other; // TODO: Can there be "other"?
} resp_code_family;
/**< Array counting occurrences of response codes. Each item represents the
* respective response code by adding 100 to its index, e.g. resp_code[102]
* counts how many 202 codes were detected. 501st item represents "other" */
unsigned int resp_code[RESP_CODE_ARR_SIZE];
struct log_parser_metrics_resp_code_type{ /* Note: 304 and 401 should be treated as resp_success */
int resp_success, resp_redirect, resp_bad, resp_error, other; // TODO: Can there be "other"?
} resp_code_type;
struct log_parser_metrics_ssl_proto{
int tlsv1, tlsv1_1, tlsv1_2, tlsv1_3, sslv2, sslv3, other;
} ssl_proto;
struct log_parser_metrics_ssl_cipher_array{
struct log_parser_metrics_ssl_cipher{
char name[SSL_CIPHER_SUITE_MAX_LEN]; /**< SSL cipher suite string **/
int count; /**< Occurrences of the SSL cipher **/
} *ssl_ciphers;
int size; /**< Size of SSL ciphers array **/
} ssl_cipher_arr;
int64_t timestamp;
} Web_log_metrics_t;
typedef struct log_line_parsed{
char vhost[VHOST_MAX_LEN];
int port;
char req_scheme[REQ_SCHEME_MAX_LEN];
char req_client[REQ_CLIENT_MAX_LEN];
char req_method[REQ_METHOD_MAX_LEN];
char req_URL[REQ_URL_MAX_LEN];
char req_proto[REQ_PROTO_MAX_LEN];
int req_size;
int req_proc_time;
int resp_code;
int resp_size;
int ups_resp_time;
char ssl_proto[SSL_PROTO_MAX_LEN];
char ssl_cipher[SSL_CIPHER_SUITE_MAX_LEN];
int64_t timestamp;
int parsing_errors;
} Log_line_parsed_t;
Web_log_parser_config_t *read_web_log_parser_config(const char *log_format, const char delimiter);
#ifdef ENABLE_LOGSMANAGEMENT_TESTS
/* Used as public only for unit testing, normally defined as static */
int count_fields(const char *line, const char delimiter);
#endif // ENABLE_LOGSMANAGEMENT_TESTS
void parse_web_log_line(const Web_log_parser_config_t *wblp_config,
char *line, const size_t line_len,
Log_line_parsed_t *log_line_parsed);
void extract_web_log_metrics(Log_parser_config_t *parser_config,
Log_line_parsed_t *line_parsed,
Web_log_metrics_t *metrics);
Web_log_parser_config_t *auto_detect_web_log_parser_config(char *line, const char delimiter);
/* -------------------------------------------------------------------------- */
/* -------------------------------------------------------------------------- */
/* Kernel logs (kmsg) metrics */
/* -------------------------------------------------------------------------- */
#define SYSLOG_SEVER_ARR_SIZE 9 /**< Number of severity levels plus 1 for 'unknown' **/
typedef struct metrics_dict_item{
bool dim_initialized;
int num;
int num_new;
} metrics_dict_item_t;
typedef struct kernel_metrics{
unsigned int sever[SYSLOG_SEVER_ARR_SIZE]; /**< Syslog severity, 0-7 plus 1 space for 'unknown' **/
DICTIONARY *subsystem;
DICTIONARY *device;
} Kernel_metrics_t;
/* -------------------------------------------------------------------------- */
/* -------------------------------------------------------------------------- */
/* Systemd and Syslog metrics */
/* -------------------------------------------------------------------------- */
#define SYSLOG_FACIL_ARR_SIZE 25 /**< Number of facility levels plus 1 for 'unknown' **/
#define SYSLOG_PRIOR_ARR_SIZE 193 /**< Number of priority values plus 1 for 'unknown' **/
typedef struct systemd_metrics{
unsigned int sever[SYSLOG_SEVER_ARR_SIZE]; /**< Syslog severity, 0-7 plus 1 space for 'unknown' **/
unsigned int facil[SYSLOG_FACIL_ARR_SIZE]; /**< Syslog facility, 0-23 plus 1 space for 'unknown' **/
unsigned int prior[SYSLOG_PRIOR_ARR_SIZE]; /**< Syslog priority value, 0-191 plus 1 space for 'unknown' **/
} Systemd_metrics_t;
/* -------------------------------------------------------------------------- */
/* -------------------------------------------------------------------------- */
/* Docker Events metrics */
/* -------------------------------------------------------------------------- */
static const char *const docker_ev_type_string[] = {
"container", "image", "plugin", "volume", "network", "daemon", "service", "node", "secret", "config", "unknown"
};
#define NUM_OF_DOCKER_EV_TYPES ((int) (sizeof docker_ev_type_string / sizeof docker_ev_type_string[0]))
#define NUM_OF_CONTAINER_ACTIONS 25 /**< == size of 'Containers actions' array, largest array in docker_ev_action_string **/
static const char *const docker_ev_action_string[NUM_OF_DOCKER_EV_TYPES][NUM_OF_CONTAINER_ACTIONS] = {
/* Order of arrays is important, it must match the order of docker_ev_type_string[] strings. */
/* Containers actions */
{"attach", "commit", "copy", "create", "destroy", "detach", "die", "exec_create", "exec_detach", "exec_die",
"exec_start", "export", "health_status", "kill", "oom", "pause", "rename", "resize", "restart", "start", "stop",
"top", "unpause", "update", NULL},
/* Images actions */
{"delete", "import", "load", "pull", "push", "save", "tag", "untag", NULL},
/* Plugins actions */
{"enable", "disable", "install", "remove", NULL},
/* Volumes actions */
{"create", "destroy", "mount", "unmount", NULL},
/* Networks actions */
{"create", "connect", "destroy", "disconnect", "remove", NULL},
/* Daemons actions */
{"reload", NULL},
/* Services actions */
{"create", "remove", "update", NULL},
/* Nodes actions */
{"create", "remove", "update", NULL},
/* Secrets actions */
{"create", "remove", "update", NULL},
/* Configs actions */
{"create", "remove", "update", NULL},
{"unknown", NULL}
};
typedef struct docker_ev_metrics{
unsigned int ev_type[NUM_OF_DOCKER_EV_TYPES];
unsigned int ev_action[NUM_OF_DOCKER_EV_TYPES][NUM_OF_CONTAINER_ACTIONS];
} Docker_ev_metrics_t;
/* -------------------------------------------------------------------------- */
/* -------------------------------------------------------------------------- */
/* MQTT metrics */
/* -------------------------------------------------------------------------- */
typedef struct mqtt_metrics{
DICTIONARY *topic;
} Mqtt_metrics_t;
/* -------------------------------------------------------------------------- */
/* -------------------------------------------------------------------------- */
/* Regex / Keyword search */
/* -------------------------------------------------------------------------- */
#define MAX_KEYWORD_LEN 100 /**< Max size of keyword used in keyword search, in bytes */
#define MAX_REGEX_SIZE (MAX_KEYWORD_LEN + 7) /**< Max size of regular expression (used in keyword search) in bytes **/
int search_keyword( char *src, size_t src_sz,
char *dest, size_t *dest_sz,
const char *keyword, regex_t *regex,
const int ignore_case);
/* -------------------------------------------------------------------------- */
/* -------------------------------------------------------------------------- */
/* Custom Charts configuration and metrics */
/* -------------------------------------------------------------------------- */
typedef struct log_parser_cus_config{
char *chartname; /**< Chart name where the regex metrics will appear in **/
char *regex_str; /**< String representation of the regex **/
char *regex_name; /**< If regex is named, this is where its name is stored **/
regex_t regex; /**< The compiled regex **/
} Log_parser_cus_config_t;
typedef struct log_parser_cus_metrics{
unsigned long long count;
} Log_parser_cus_metrics_t;
/* -------------------------------------------------------------------------- */
/* -------------------------------------------------------------------------- */
/* General / Other */
/* -------------------------------------------------------------------------- */
struct log_parser_metrics{
unsigned long long num_lines;
// struct timeval tv;
time_t last_update;
union {
Web_log_metrics_t *web_log;
Kernel_metrics_t *kernel;
Systemd_metrics_t *systemd;
Docker_ev_metrics_t *docker_ev;
Mqtt_metrics_t *mqtt;
};
Log_parser_cus_metrics_t **parser_cus; /**< Array storing custom chart metrics structs **/
};
#endif // PARSER_H_
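A hedged sketch of how the web log parsing API above is meant to be driven (hypothetical snippet, not part of the diff; the sample log line is illustrative and auto-detection is assumed to succeed for it):

#include "parser.h"

static void web_log_parse_example(void) {
    // A typical combined-format access log line (illustrative)
    char line[] = "203.0.113.7 - - [05/Apr/2022:16:24:12 +0000] \"GET /index.html HTTP/1.1\" 200 512";

    // Let the engine guess the field layout from the line itself
    Web_log_parser_config_t *wblp_config = auto_detect_web_log_parser_config(line, ' ');
    if (wblp_config) {
        Log_line_parsed_t parsed = {0};
        parse_web_log_line(wblp_config, line, strlen(line), &parsed);
        // parsed.req_method, parsed.req_URL, parsed.resp_code etc. now hold the extracted fields
    }
}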

logsmanagement/query.c Normal file

@@ -0,0 +1,221 @@
// SPDX-License-Identifier: GPL-3.0-or-later
/** @file query.c
*
* @brief This is the file containing the implementation of the
* logs management querying API.
*/
#define _GNU_SOURCE
#include "query.h"
#include <uv.h>
#include <sys/resource.h>
#include "circular_buffer.h"
#include "db_api.h"
#include "file_info.h"
#include "helper.h"
static const char esc_ch[] = "[]\\^$.|?*+(){}";
/**
* @brief Sanitise string to work with regular expressions
* @param[in] s Input string to be sanitised - may be truncated in place if longer than MAX_KEYWORD_LEN
* @return Newly allocated sanitised string (characters escaped according to the esc_ch[] array)
*/
UNIT_STATIC char *sanitise_string(char *const s){
size_t s_len = strlen(s);
/* Truncate keyword if longer than maximum allowed length */
if(unlikely(s_len > MAX_KEYWORD_LEN)){
s_len = MAX_KEYWORD_LEN;
s[s_len] = '\0';
}
char *s_san = mallocz(s_len * 2 + 1); /* worst case: every character escaped, plus terminating '\0' */
char *s_off = s;
char *s_san_off = s_san;
while(*s_off) {
for(char *esc_ch_off = (char *) esc_ch; *esc_ch_off; esc_ch_off++){
if(*s_off == *esc_ch_off){
*s_san_off++ = '\\';
break;
}
}
*s_san_off++ = *s_off++;
}
*s_san_off = '\0';
return s_san;
}
const logs_qry_res_err_t *fetch_log_sources(BUFFER *wb){
if(unlikely(!p_file_infos_arr || !p_file_infos_arr->count))
return &logs_qry_res_err[LOGS_QRY_RES_ERR_CODE_SERVER_ERR];
buffer_json_add_array_item_object(wb);
buffer_json_member_add_string(wb, "id", "all");
buffer_json_member_add_string(wb, "name", "all");
buffer_json_member_add_string(wb, "pill", "100"); // TODO
buffer_json_member_add_string(wb, "info", "All log sources");
buffer_json_member_add_string(wb, "basename", "");
buffer_json_member_add_string(wb, "filename", "");
buffer_json_member_add_string(wb, "log_type", "");
buffer_json_member_add_string(wb, "db_dir", "");
buffer_json_member_add_uint64(wb, "db_version", 0);
buffer_json_member_add_uint64(wb, "db_flush_freq", 0);
buffer_json_member_add_int64( wb, "db_disk_space_limit", 0);
buffer_json_object_close(wb); // options object
bool queryable_sources = false;
for (int i = 0; i < p_file_infos_arr->count; i++) {
if(p_file_infos_arr->data[i]->db_mode == LOGS_MANAG_DB_MODE_FULL)
queryable_sources = true;
}
if(!queryable_sources)
return &logs_qry_res_err[LOGS_QRY_RES_ERR_CODE_NOT_FOUND_ERR];
for (int i = 0; i < p_file_infos_arr->count; i++) {
buffer_json_add_array_item_object(wb);
buffer_json_member_add_string(wb, "id", p_file_infos_arr->data[i]->chartname);
buffer_json_member_add_string(wb, "name", p_file_infos_arr->data[i]->chartname);
buffer_json_member_add_string(wb, "pill", "100"); // TODO
char info[1024];
snprintfz(info, sizeof(info), "Chart '%s' from log source '%s'",
p_file_infos_arr->data[i]->chartname,
p_file_infos_arr->data[i]->file_basename);
buffer_json_member_add_string(wb, "info", info);
buffer_json_member_add_string(wb, "basename", p_file_infos_arr->data[i]->file_basename);
buffer_json_member_add_string(wb, "filename", p_file_infos_arr->data[i]->filename);
buffer_json_member_add_string(wb, "log_type", log_src_type_t_str[p_file_infos_arr->data[i]->log_type]);
buffer_json_member_add_string(wb, "db_dir", p_file_infos_arr->data[i]->db_dir);
buffer_json_member_add_uint64(wb, "db_version", db_user_version(p_file_infos_arr->data[i]->db, -1));
buffer_json_member_add_uint64(wb, "db_flush_freq", db_user_version(p_file_infos_arr->data[i]->db, -1));
buffer_json_member_add_int64( wb, "db_disk_space_limit", p_file_infos_arr->data[i]->blob_max_size * BLOB_MAX_FILES);
buffer_json_object_close(wb); // options object
}
return &logs_qry_res_err[LOGS_QRY_RES_ERR_CODE_OK];
}
const logs_qry_res_err_t *execute_logs_manag_query(logs_query_params_t *p_query_params) {
struct File_info *p_file_infos[LOGS_MANAG_MAX_COMPOUND_QUERY_SOURCES] = {NULL};
/* Check all required query parameters are present */
if(unlikely(!p_query_params->req_from_ts || !p_query_params->req_to_ts))
return &logs_qry_res_err[LOGS_QRY_RES_ERR_CODE_INV_TS_ERR];
/* Start with maximum possible actual timestamp range and reduce it
* accordingly when searching DB and circular buffer. */
p_query_params->act_from_ts = p_query_params->req_from_ts;
p_query_params->act_to_ts = p_query_params->req_to_ts;
if(p_file_infos_arr == NULL)
return &logs_qry_res_err[LOGS_QRY_RES_ERR_CODE_NOT_INIT_ERR];
/* Find p_file_infos for this query according to chartnames or filenames
* if the former is not valid. Only one of the two will be used,
* chartnames and filenames cannot be mixed.
* If neither list is provided, search all available log sources. */
if(p_query_params->chartname[0]){
int pfi_off = 0;
for(int cn_off = 0; p_query_params->chartname[cn_off]; cn_off++) {
for(int pfi_arr_off = 0; pfi_arr_off < p_file_infos_arr->count; pfi_arr_off++) {
if( !strcmp(p_file_infos_arr->data[pfi_arr_off]->chartname, p_query_params->chartname[cn_off]) &&
p_file_infos_arr->data[pfi_arr_off]->db_mode != LOGS_MANAG_DB_MODE_NONE) {
p_file_infos[pfi_off++] = p_file_infos_arr->data[pfi_arr_off];
break;
}
}
}
}
else if(p_query_params->filename[0]){
int pfi_off = 0;
for(int fn_off = 0; p_query_params->filename[fn_off]; fn_off++) {
for(int pfi_arr_off = 0; pfi_arr_off < p_file_infos_arr->count; pfi_arr_off++) {
if( !strcmp(p_file_infos_arr->data[pfi_arr_off]->filename, p_query_params->filename[fn_off]) &&
p_file_infos_arr->data[pfi_arr_off]->db_mode != LOGS_MANAG_DB_MODE_NONE) {
p_file_infos[pfi_off++] = p_file_infos_arr->data[pfi_arr_off];
break;
}
}
}
}
else{
int pfi_off = 0;
for(int pfi_arr_off = 0; pfi_arr_off < p_file_infos_arr->count; pfi_arr_off++) {
if(p_file_infos_arr->data[pfi_arr_off]->db_mode != LOGS_MANAG_DB_MODE_NONE)
p_file_infos[pfi_off++] = p_file_infos_arr->data[pfi_arr_off];
}
}
if(unlikely(!p_file_infos[0]))
return &logs_qry_res_err[LOGS_QRY_RES_ERR_CODE_NOT_FOUND_ERR];
if( p_query_params->sanitize_keyword && p_query_params->keyword &&
*p_query_params->keyword && strcmp(p_query_params->keyword, " ")){
p_query_params->keyword = sanitise_string(p_query_params->keyword); // the sanitised copy must be freez()'d after the query
}
if(p_query_params->stop_monotonic_ut == 0)
p_query_params->stop_monotonic_ut = now_monotonic_usec() + (LOGS_MANAG_QUERY_TIMEOUT_DEFAULT - 1) * USEC_PER_SEC;
struct rusage ru_start, ru_end;
getrusage(RUSAGE_THREAD, &ru_start);
/* Secure DB lock to ensure no data will be transferred from the buffers to
* the DB during the query execution and also no other execute_logs_manag_query
* will try to access the DB at the same time. The operations happen
* atomically and the DB searches are executed in series. */
for(int pfi_off = 0; p_file_infos[pfi_off]; pfi_off++)
uv_mutex_lock(p_file_infos[pfi_off]->db_mut);
/* If results are requested in ascending timestamp order, search DB(s) first
* and then the circular buffers. Otherwise, search the circular buffers
* first and the DB(s) second. In both cases, the quota must be respected. */
if(p_query_params->order_by_asc)
db_search(p_query_params, p_file_infos);
if( p_query_params->results_buff->len < p_query_params->quota &&
now_monotonic_usec() <= p_query_params->stop_monotonic_ut)
circ_buff_search(p_query_params, p_file_infos);
if(!p_query_params->order_by_asc &&
p_query_params->results_buff->len < p_query_params->quota &&
now_monotonic_usec() <= p_query_params->stop_monotonic_ut)
db_search(p_query_params, p_file_infos);
for(int pfi_off = 0; p_file_infos[pfi_off]; pfi_off++)
uv_mutex_unlock(p_file_infos[pfi_off]->db_mut);
getrusage(RUSAGE_THREAD, &ru_end);
__atomic_add_fetch(&p_file_infos[0]->cpu_time_per_mib.user,
p_query_params->results_buff->len ? ( ru_end.ru_utime.tv_sec * USEC_PER_SEC -
ru_start.ru_utime.tv_sec * USEC_PER_SEC +
ru_end.ru_utime.tv_usec -
ru_start.ru_utime.tv_usec ) * (1 MiB) / p_query_params->results_buff->len : 0
, __ATOMIC_RELAXED);
__atomic_add_fetch(&p_file_infos[0]->cpu_time_per_mib.sys,
p_query_params->results_buff->len ? ( ru_end.ru_stime.tv_sec * USEC_PER_SEC -
ru_start.ru_stime.tv_sec * USEC_PER_SEC +
ru_end.ru_stime.tv_usec -
ru_start.ru_stime.tv_usec ) * (1 MiB) / p_query_params->results_buff->len : 0
, __ATOMIC_RELAXED);
/* If keyword has been sanitised, it needs to be freed - otherwise it's just a pointer to a substring */
if(p_query_params->sanitize_keyword && p_query_params->keyword){
freez(p_query_params->keyword);
}
if(!p_query_params->results_buff->len)
return &logs_qry_res_err[LOGS_QRY_RES_ERR_CODE_NOT_FOUND_ERR];
return &logs_qry_res_err[LOGS_QRY_RES_ERR_CODE_OK];
}

logsmanagement/query.h Normal file

@@ -0,0 +1,146 @@
// SPDX-License-Identifier: GPL-3.0-or-later
/** @file query.h
* @brief Header of query.c
*/
#ifndef QUERY_H_
#define QUERY_H_
#include <inttypes.h>
#include <stdlib.h>
#include "libnetdata/libnetdata.h"
#include "defaults.h"
#define LOGS_QRY_VERSION "1"
#define LOGS_MANAG_FUNC_PARAM_AFTER "after"
#define LOGS_MANAG_FUNC_PARAM_BEFORE "before"
#define LOGS_QRY_KW_QUOTA "quota"
#define LOGS_QRY_KW_CHARTNAME "chartname"
#define LOGS_QRY_KW_FILENAME "filename"
#define LOGS_QRY_KW_KEYWORD "keyword"
#define LOGS_QRY_KW_IGNORE_CASE "ignore_case"
#define LOGS_QRY_KW_SANITIZE_KW "sanitize_keyword"
typedef struct {
const enum {LOGS_QRY_RES_ERR_CODE_OK = 0,
LOGS_QRY_RES_ERR_CODE_INV_TS_ERR,
LOGS_QRY_RES_ERR_CODE_NOT_FOUND_ERR,
LOGS_QRY_RES_ERR_CODE_NOT_INIT_ERR,
LOGS_QRY_RES_ERR_CODE_SERVER_ERR,
LOGS_QRY_RES_ERR_CODE_UNMODIFIED } err_code;
char const *const err_str;
const int http_code;
} logs_qry_res_err_t;
static const logs_qry_res_err_t logs_qry_res_err[] = {
{ LOGS_QRY_RES_ERR_CODE_OK, "success", HTTP_RESP_OK },
{ LOGS_QRY_RES_ERR_CODE_INV_TS_ERR, "invalid timestamp range", HTTP_RESP_BAD_REQUEST },
{ LOGS_QRY_RES_ERR_CODE_NOT_FOUND_ERR, "no results found", HTTP_RESP_OK },
{ LOGS_QRY_RES_ERR_CODE_NOT_INIT_ERR, "logs management engine not running", HTTP_RESP_SERVICE_UNAVAILABLE },
{ LOGS_QRY_RES_ERR_CODE_SERVER_ERR, "server error", HTTP_RESP_INTERNAL_SERVER_ERROR },
{ LOGS_QRY_RES_ERR_CODE_UNMODIFIED, "not modified", HTTP_RESP_NOT_MODIFIED }
};
const logs_qry_res_err_t *fetch_log_sources(BUFFER *wb);
/**
* @brief Parameters of the query.
* @param req_from_ts Requested start timestamp of query in epoch
* milliseconds.
*
* @param req_to_ts Requested end timestamp of query in epoch milliseconds.
* If it doesn't match the requested start timestamp, there may be more results
* to be retrieved (for descending timestamp order queries).
*
* @param act_from_ts Actual start timestamp of query in epoch milliseconds.
*
* @param act_to_ts Actual end timestamp of query in epoch milliseconds.
* If it doesn't match the requested end timestamp, there may be more results to
* be retrieved (for ascending timestamp order queries).
*
* @param order_by_asc Equal to 1 if req_from_ts <= req_to_ts, otherwise 0.
*
* @param quota Request quota for results. When exceeded, query will
* return, even if there are more pending results.
*
* @param stop_monotonic_ut Monotonic time in usec after which the query
* will be timed out.
*
* @param chartname Chart name of log source to be queried, as it appears
* on the netdata dashboard. If this is defined and not an empty string, the
* filename parameter is ignored.
*
* @param filename Full path of log source to be queried. Will only be used
* if the chartname is not used.
*
* @param keyword The keyword to be searched. IMPORTANT! Regular expressions
* are supported (if sanitize_keyword is not set) but have not been tested
* extensively, so use with caution!
*
* @param ignore_case If set to any integer other than 0, the query will be
* case-insensitive. If not set or if set to 0, the query will be case-sensitive
*
* @param sanitize_keyword If set to any integer other than 0, the keyword
* will be sanitized before used by the regex engine (which means the keyword
* cannot be a regular expression, as it will be taken as a literal input).
*
* @param results_buff Buffer of BUFFER type to store the results of the
* query in.
*
* @param results_buff->size Defines the maximum quota of results to be
* expected. If exceeded, the query will return the results obtained so far.
*
* @param results_buff->len The exact size of the results matched.
*
* @param results_buff->buffer String containing the results of the query.
*
* @param num_lines Number of log records that match the keyword.
*
* @warning results_buff->size argument must be <= MAX_LOG_MSG_SIZE.
*/
typedef struct logs_query_params {
msec_t req_from_ts;
msec_t req_to_ts;
msec_t act_from_ts;
msec_t act_to_ts;
int order_by_asc;
unsigned long quota;
usec_t stop_monotonic_ut;
char *chartname[LOGS_MANAG_MAX_COMPOUND_QUERY_SOURCES];
char *filename[LOGS_MANAG_MAX_COMPOUND_QUERY_SOURCES];
char *keyword;
int ignore_case;
int sanitize_keyword;
BUFFER *results_buff;
unsigned long num_lines;
} logs_query_params_t;
typedef struct logs_query_res_hdr {
msec_t timestamp;
size_t text_size;
int matches;
char log_source[20];
char log_type[20];
char basename[20];
char filename[50];
char chartname[20];
} logs_query_res_hdr_t;
/**
* @brief Primary query API.
* @param p_query_params See documentation of logs_query_params_t struct on how
* to use argument.
* @return enum of LOGS_QRY_RES_ERR_CODE with result of query
* @todo Cornercase if filename not found in DB? Return specific message?
*/
const logs_qry_res_err_t *execute_logs_manag_query(logs_query_params_t *p_query_params);
#ifdef ENABLE_LOGSMANAGEMENT_TESTS
/* Used as public only for unit testing, normally defined as static */
char *sanitise_string(char *s);
#endif // ENABLE_LOGSMANAGEMENT_TESTS
#endif // QUERY_H_
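A hedged sketch of how a caller is expected to populate logs_query_params_t and invoke the query API above (hypothetical snippet, not part of the diff; the source chart name, keyword and quota are illustrative):

#include "query.h"

static void logs_query_example(void) {
    logs_query_params_t qp = {0};

    // Descending-timestamp query over the last 60 seconds (req_from_ts >= req_to_ts)
    qp.req_from_ts  = now_realtime_msec();
    qp.req_to_ts    = qp.req_from_ts - 60 * MSEC_PER_SEC;
    qp.order_by_asc = 0;

    qp.quota            = LOGS_MANAG_QUERY_QUOTA_MAX;
    qp.chartname[0]     = (char *) "docker_events_local"; // illustrative source chart name
    qp.keyword          = (char *) "error";               // literal keyword, not a regex
    qp.ignore_case      = 1;
    qp.sanitize_keyword = 1;
    qp.results_buff     = buffer_create(qp.quota, NULL);

    const logs_qry_res_err_t *res = execute_logs_manag_query(&qp);
    if (res->err_code == LOGS_QRY_RES_ERR_CODE_OK)
        collector_info("query matched %lu log lines", qp.num_lines);

    buffer_free(qp.results_buff);
}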


@@ -0,0 +1,312 @@
/** @file rrd_api.h
*/
#ifndef RRD_API_H_
#define RRD_API_H_
#include "daemon/common.h"
#include "../circular_buffer.h"
#include "../helper.h"
struct Chart_meta;
struct Chart_str {
const char *type;
const char *id;
const char *title;
const char *units;
const char *family;
const char *context;
const char *chart_type;
long priority;
int update_every;
};
#include "rrd_api_generic.h"
#include "rrd_api_web_log.h"
#include "rrd_api_kernel.h"
#include "rrd_api_systemd.h"
#include "rrd_api_docker_ev.h"
#include "rrd_api_mqtt.h"
#define CHART_TITLE_TOTAL_COLLECTED_LOGS "Total collected log records"
#define CHART_TITLE_RATE_COLLECTED_LOGS "Rate of collected log records"
#define NETDATA_CHART_PRIO_LOGS_INCR 100 /**< PRIO increment step from one log source to another **/
typedef struct Chart_data_cus {
char *id;
struct chart_data_cus_dim {
char *name;
collected_number val;
unsigned long long *p_counter;
} *dims;
int dims_size;
struct Chart_data_cus *next;
} Chart_data_cus_t;
struct Chart_meta {
enum log_src_type_t type;
long base_prio;
union {
chart_data_generic_t *chart_data_generic;
chart_data_web_log_t *chart_data_web_log;
chart_data_kernel_t *chart_data_kernel;
chart_data_systemd_t *chart_data_systemd;
chart_data_docker_ev_t *chart_data_docker_ev;
chart_data_mqtt_t *chart_data_mqtt;
};
Chart_data_cus_t *chart_data_cus_arr;
void (*init)(struct File_info *p_file_info);
void (*update)(struct File_info *p_file_info);
};
static inline struct Chart_str lgs_mng_create_chart(const char *type,
const char *id,
const char *title,
const char *units,
const char *family,
const char *context,
const char *chart_type,
long priority,
int update_every){
struct Chart_str cs = {
.type = type,
.id = id,
.title = title,
.units = units,
.family = family ? family : "",
.context = context ? context : "",
.chart_type = chart_type ? chart_type : "",
.priority = priority,
.update_every = update_every
};
printf("CHART '%s.%s' '' '%s' '%s' '%s' '%s' '%s' %ld %d '' '" LOGS_MANAGEMENT_PLUGIN_STR "' ''\n",
cs.type,
cs.id,
cs.title,
cs.units,
cs.family,
cs.context,
cs.chart_type,
cs.priority,
cs.update_every
);
return cs;
}
static inline void lgs_mng_add_dim( const char *id,
const char *algorithm,
collected_number multiplier,
collected_number divisor){
printf("DIMENSION '%s' '' '%s' %lld %lld\n", id, algorithm, multiplier, divisor);
}
static inline void lgs_mng_add_dim_post_init( struct Chart_str *cs,
const char *dim_id,
const char *algorithm,
collected_number multiplier,
collected_number divisor){
printf("CHART '%s.%s' '' '%s' '%s' '%s' '%s' '%s' %ld %d '' '" LOGS_MANAGEMENT_PLUGIN_STR "' ''\n",
cs->type,
cs->id,
cs->title,
cs->units,
cs->family,
cs->context,
cs->chart_type,
cs->priority,
cs->update_every
);
lgs_mng_add_dim(dim_id, algorithm, multiplier, divisor);
}
static inline void lgs_mng_update_chart_begin(const char *type, const char *id){
printf("BEGIN '%s.%s'\n", type, id);
}
static inline void lgs_mng_update_chart_set(const char *id, collected_number val){
printf("SET '%s' = %lld\n", id, val);
}
static inline void lgs_mng_update_chart_end(time_t sec){
printf("END %" PRId64 " 0 1\n", sec);
}
#define lgs_mng_do_num_of_logs_charts_init(p_file_info, chart_prio){ \
\
/* Number of collected logs total - initialise */ \
if(p_file_info->parser_config->chart_config & CHART_COLLECTED_LOGS_TOTAL){ \
lgs_mng_create_chart( \
(char *) p_file_info->chartname /* type */ \
, "collected_logs_total" /* id */ \
, CHART_TITLE_TOTAL_COLLECTED_LOGS /* title */ \
, "log records" /* units */ \
, "collected_logs" /* family */ \
, NULL /* context */ \
, RRDSET_TYPE_AREA_NAME /* chart_type */ \
, ++chart_prio /* priority */ \
, p_file_info->update_every /* update_every */ \
); \
lgs_mng_add_dim("total records", RRD_ALGORITHM_ABSOLUTE_NAME, 1, 1); \
} \
\
/* Number of collected logs rate - initialise */ \
if(p_file_info->parser_config->chart_config & CHART_COLLECTED_LOGS_RATE){ \
lgs_mng_create_chart( \
(char *) p_file_info->chartname /* type */ \
, "collected_logs_rate" /* id */ \
, CHART_TITLE_RATE_COLLECTED_LOGS /* title */ \
, "log records" /* units */ \
, "collected_logs" /* family */ \
, NULL /* context */ \
, RRDSET_TYPE_LINE_NAME /* chart_type */ \
, ++chart_prio /* priority */ \
, p_file_info->update_every /* update_every */ \
); \
lgs_mng_add_dim("records", RRD_ALGORITHM_INCREMENTAL_NAME, 1, 1); \
} \
\
} \
#define lgs_mng_do_num_of_logs_charts_update(p_file_info, lag_in_sec, chart_data){ \
\
/* Number of collected logs total - update previous values */ \
if(p_file_info->parser_config->chart_config & CHART_COLLECTED_LOGS_TOTAL){ \
for(time_t sec = p_file_info->parser_metrics->last_update - lag_in_sec; \
sec < p_file_info->parser_metrics->last_update; \
sec++){ \
lgs_mng_update_chart_begin(p_file_info->chartname, "collected_logs_total"); \
lgs_mng_update_chart_set("total records", chart_data->num_lines); \
lgs_mng_update_chart_end(sec); \
} \
} \
\
/* Number of collected logs rate - update previous values */ \
if(p_file_info->parser_config->chart_config & CHART_COLLECTED_LOGS_RATE){ \
for(time_t sec = p_file_info->parser_metrics->last_update - lag_in_sec; \
sec < p_file_info->parser_metrics->last_update; \
sec++){ \
lgs_mng_update_chart_begin(p_file_info->chartname, "collected_logs_rate"); \
lgs_mng_update_chart_set("records", chart_data->num_lines); \
lgs_mng_update_chart_end(sec); \
} \
} \
\
chart_data->num_lines = p_file_info->parser_metrics->num_lines; \
\
/* Number of collected logs total - update */ \
if(p_file_info->parser_config->chart_config & CHART_COLLECTED_LOGS_TOTAL){ \
lgs_mng_update_chart_begin( (char *) p_file_info->chartname, "collected_logs_total"); \
lgs_mng_update_chart_set("total records", chart_data->num_lines); \
lgs_mng_update_chart_end(p_file_info->parser_metrics->last_update); \
} \
\
/* Number of collected logs rate - update */ \
if(p_file_info->parser_config->chart_config & CHART_COLLECTED_LOGS_RATE){ \
lgs_mng_update_chart_begin( (char *) p_file_info->chartname, "collected_logs_rate"); \
lgs_mng_update_chart_set("records", chart_data->num_lines); \
lgs_mng_update_chart_end(p_file_info->parser_metrics->last_update); \
} \
}
#define lgs_mng_do_custom_charts_init(p_file_info) { \
\
for(int cus_off = 0; p_file_info->parser_cus_config[cus_off]; cus_off++){ \
\
Chart_data_cus_t *cus; \
Chart_data_cus_t **p_cus = &p_file_info->chart_meta->chart_data_cus_arr; \
\
for(cus = p_file_info->chart_meta->chart_data_cus_arr; \
cus; \
cus = cus->next){ \
\
if(!strcmp(cus->id, p_file_info->parser_cus_config[cus_off]->chartname)) \
break; \
\
p_cus = &(cus->next); \
} \
\
if(!cus){ \
cus = callocz(1, sizeof(Chart_data_cus_t)); \
*p_cus = cus; \
\
cus->id = p_file_info->parser_cus_config[cus_off]->chartname; \
\
lgs_mng_create_chart( \
(char *) p_file_info->chartname /* type */ \
, cus->id /* id */ \
, cus->id /* title */ \
, "matches" /* units */ \
, "custom_charts" /* family */ \
, NULL /* context */ \
, RRDSET_TYPE_AREA_NAME /* chart_type */ \
, p_file_info->chart_meta->base_prio + 1000 + cus_off /* priority */ \
, p_file_info->update_every /* update_every */ \
); \
} \
\
cus->dims = reallocz(cus->dims, ++cus->dims_size * sizeof(struct chart_data_cus_dim)); \
cus->dims[cus->dims_size - 1].name = \
p_file_info->parser_cus_config[cus_off]->regex_name; \
cus->dims[cus->dims_size - 1].val = 0; \
cus->dims[cus->dims_size - 1].p_counter = \
&p_file_info->parser_metrics->parser_cus[cus_off]->count; \
\
lgs_mng_add_dim(cus->dims[cus->dims_size - 1].name, \
RRD_ALGORITHM_INCREMENTAL_NAME, 1, 1); \
\
} \
}
#define lgs_mng_do_custom_charts_update(p_file_info, lag_in_sec) { \
\
for(time_t sec = p_file_info->parser_metrics->last_update - lag_in_sec; \
sec < p_file_info->parser_metrics->last_update; \
sec++){ \
\
for(Chart_data_cus_t *cus = p_file_info->chart_meta->chart_data_cus_arr; \
cus; \
cus = cus->next){ \
\
lgs_mng_update_chart_begin(p_file_info->chartname, cus->id); \
\
for(int d_idx = 0; d_idx < cus->dims_size; d_idx++) \
lgs_mng_update_chart_set(cus->dims[d_idx].name, cus->dims[d_idx].val); \
\
lgs_mng_update_chart_end(sec); \
} \
\
} \
\
for(Chart_data_cus_t *cus = p_file_info->chart_meta->chart_data_cus_arr; \
cus; \
cus = cus->next){ \
\
lgs_mng_update_chart_begin(p_file_info->chartname, cus->id); \
\
for(int d_idx = 0; d_idx < cus->dims_size; d_idx++){ \
\
cus->dims[d_idx].val += *(cus->dims[d_idx].p_counter); \
*(cus->dims[d_idx].p_counter) = 0; \
\
lgs_mng_update_chart_set(cus->dims[d_idx].name, cus->dims[d_idx].val); \
} \
\
lgs_mng_update_chart_end(p_file_info->parser_metrics->last_update); \
} \
}
#endif // RRD_API_H_
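To illustrate the Netdata external-plugin protocol emitted by the inline helpers above, a hypothetical chart definition plus one collection cycle could look as follows (sketch only; the chart name, priority and sample value are illustrative):

static void chart_protocol_example(void) {
    // Define a chart with a single absolute dimension (printed once, at startup)
    lgs_mng_create_chart("my_log_source", "collected_logs_total",
                         CHART_TITLE_TOTAL_COLLECTED_LOGS, "log records",
                         "collected_logs", NULL, RRDSET_TYPE_AREA_NAME,
                         132200 /* priority */, 1 /* update_every */);
    lgs_mng_add_dim("total records", RRD_ALGORITHM_ABSOLUTE_NAME, 1, 1);

    // On every collection cycle, emit BEGIN/SET/END for the chart
    lgs_mng_update_chart_begin("my_log_source", "collected_logs_total");
    lgs_mng_update_chart_set("total records", 1234);
    lgs_mng_update_chart_end(now_realtime_sec());
}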


@@ -0,0 +1,137 @@
// SPDX-License-Identifier: GPL-3.0-or-later
#include "rrd_api_docker_ev.h"
void docker_ev_chart_init(struct File_info *p_file_info){
p_file_info->chart_meta->chart_data_docker_ev = callocz(1, sizeof (struct Chart_data_docker_ev));
p_file_info->chart_meta->chart_data_docker_ev->last_update = now_realtime_sec(); // initial value shouldn't be 0
long chart_prio = p_file_info->chart_meta->base_prio;
lgs_mng_do_num_of_logs_charts_init(p_file_info, chart_prio);
/* Docker events type - initialise */
if(p_file_info->parser_config->chart_config & CHART_DOCKER_EV_TYPE){
lgs_mng_create_chart(
(char *) p_file_info->chartname // type
, "events_type" // id
, "Events type" // title
, "events types" // units
, "event_type" // family
, NULL // context
, RRDSET_TYPE_AREA_NAME // chart_type
, ++chart_prio // priority
, p_file_info->update_every // update_every
);
for(int idx = 0; idx < NUM_OF_DOCKER_EV_TYPES; idx++)
lgs_mng_add_dim(docker_ev_type_string[idx], RRD_ALGORITHM_INCREMENTAL_NAME, 1, 1);
}
/* Docker events actions - initialise */
if(p_file_info->parser_config->chart_config & CHART_DOCKER_EV_ACTION){
lgs_mng_create_chart(
(char *) p_file_info->chartname // type
, "events_action" // id
, "Events action" // title
, "events actions" // units
, "event_action" // family
, NULL // context
, RRDSET_TYPE_AREA_NAME // chart_type
, ++chart_prio // priority
, p_file_info->update_every // update_every
);
for(int ev_off = 0; ev_off < NUM_OF_DOCKER_EV_TYPES; ev_off++){
int act_off = -1;
while(docker_ev_action_string[ev_off][++act_off] != NULL){
char dim[50];
snprintfz(dim, 50, "%s %s",
docker_ev_type_string[ev_off],
docker_ev_action_string[ev_off][act_off]);
lgs_mng_add_dim(dim, RRD_ALGORITHM_INCREMENTAL_NAME, 1, 1);
}
}
}
lgs_mng_do_custom_charts_init(p_file_info);
}
void docker_ev_chart_update(struct File_info *p_file_info){
chart_data_docker_ev_t *chart_data = p_file_info->chart_meta->chart_data_docker_ev;
if(chart_data->last_update != p_file_info->parser_metrics->last_update){
time_t lag_in_sec = p_file_info->parser_metrics->last_update - chart_data->last_update - 1;
lgs_mng_do_num_of_logs_charts_update(p_file_info, lag_in_sec, chart_data);
/* Docker events type - update */
if(p_file_info->parser_config->chart_config & CHART_DOCKER_EV_TYPE){
for(time_t sec = p_file_info->parser_metrics->last_update - lag_in_sec;
sec < p_file_info->parser_metrics->last_update;
sec++){
lgs_mng_update_chart_begin(p_file_info->chartname, "events_type");
for(int idx = 0; idx < NUM_OF_DOCKER_EV_TYPES; idx++)
lgs_mng_update_chart_set(docker_ev_type_string[idx], chart_data->num_dock_ev_type[idx]);
lgs_mng_update_chart_end(sec);
}
lgs_mng_update_chart_begin(p_file_info->chartname, "events_type");
for(int idx = 0; idx < NUM_OF_DOCKER_EV_TYPES; idx++){
chart_data->num_dock_ev_type[idx] = p_file_info->parser_metrics->docker_ev->ev_type[idx];
lgs_mng_update_chart_set(docker_ev_type_string[idx], chart_data->num_dock_ev_type[idx]);
}
lgs_mng_update_chart_end(p_file_info->parser_metrics->last_update);
}
/* Docker events action - update */
if(p_file_info->parser_config->chart_config & CHART_DOCKER_EV_ACTION){
char dim[50];
for(time_t sec = p_file_info->parser_metrics->last_update - lag_in_sec;
sec < p_file_info->parser_metrics->last_update;
sec++){
lgs_mng_update_chart_begin(p_file_info->chartname, "events_action");
for(int ev_off = 0; ev_off < NUM_OF_DOCKER_EV_TYPES; ev_off++){
int act_off = -1;
while(docker_ev_action_string[ev_off][++act_off] != NULL){
if(chart_data->num_dock_ev_action[ev_off][act_off]){
snprintfz(dim, 50, "%s %s",
docker_ev_type_string[ev_off],
docker_ev_action_string[ev_off][act_off]);
lgs_mng_update_chart_set(dim, chart_data->num_dock_ev_action[ev_off][act_off]);
}
}
}
lgs_mng_update_chart_end(sec);
}
lgs_mng_update_chart_begin(p_file_info->chartname, "events_action");
for(int ev_off = 0; ev_off < NUM_OF_DOCKER_EV_TYPES; ev_off++){
int act_off = -1;
while(docker_ev_action_string[ev_off][++act_off] != NULL){
chart_data->num_dock_ev_action[ev_off][act_off] =
p_file_info->parser_metrics->docker_ev->ev_action[ev_off][act_off];
if(chart_data->num_dock_ev_action[ev_off][act_off]){
snprintfz(dim, 50, "%s %s",
docker_ev_type_string[ev_off],
docker_ev_action_string[ev_off][act_off]);
lgs_mng_update_chart_set(dim, chart_data->num_dock_ev_action[ev_off][act_off]);
}
}
}
lgs_mng_update_chart_end(p_file_info->parser_metrics->last_update);
}
lgs_mng_do_custom_charts_update(p_file_info, lag_in_sec);
chart_data->last_update = p_file_info->parser_metrics->last_update;
}
}
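
One detail worth calling out in the `events_action` chart: its dimension ids are the composite string `"<event type> <action>"`, rebuilt with `snprintfz()` in both the init and the update path so the value set at update time always targets the dimension declared at init time. A tiny standalone sketch of that naming, using plain `snprintf()` and made-up `demo_*` tables:

```c
/* Sketch of the "<event type> <action>" dimension naming; demo_* data is
 * hypothetical, snprintf() stands in for the snprintfz() helper. */
#include <stdio.h>

static const char *demo_ev_type[] = { "container", "image" };
static const char *demo_ev_action[][3] = {
    { "create", "die", NULL },    /* actions for "container" */
    { "pull",   NULL,  NULL },    /* actions for "image"     */
};

int main(void) {
    char dim[50];
    for (int ev = 0; ev < 2; ev++) {
        for (int act = 0; demo_ev_action[ev][act] != NULL; act++) {
            /* the same string must be produced at init (dimension creation)
             * and at update (value emission)                                */
            snprintf(dim, sizeof(dim), "%s %s", demo_ev_type[ev], demo_ev_action[ev][act]);
            printf("dimension id: %s\n", dim);
        }
    }
    return 0;
}
```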

@@ -0,0 +1,39 @@
// SPDX-License-Identifier: GPL-3.0-or-later
/** @file rrd_api_docker_ev.h
 * @brief Includes the structure and function definitions
 * for the docker event log charts.
*/
#ifndef RRD_API_DOCKER_EV_H_
#define RRD_API_DOCKER_EV_H_
#include "daemon/common.h"
struct File_info;
typedef struct Chart_data_docker_ev chart_data_docker_ev_t;
#include "../file_info.h"
#include "../circular_buffer.h"
#include "rrd_api.h"
struct Chart_data_docker_ev {
time_t last_update;
/* Number of collected log records */
collected_number num_lines;
/* Docker events metrics - event type */
collected_number num_dock_ev_type[NUM_OF_DOCKER_EV_TYPES];
/* Docker events metrics - action type */
collected_number num_dock_ev_action[NUM_OF_DOCKER_EV_TYPES][NUM_OF_CONTAINER_ACTIONS];
};
void docker_ev_chart_init(struct File_info *p_file_info);
void docker_ev_chart_update(struct File_info *p_file_info);
#endif // RRD_API_DOCKER_EV_H_

@@ -0,0 +1,28 @@
// SPDX-License-Identifier: GPL-3.0-or-later
#include "rrd_api_generic.h"
void generic_chart_init(struct File_info *p_file_info){
p_file_info->chart_meta->chart_data_generic = callocz(1, sizeof (struct Chart_data_generic));
p_file_info->chart_meta->chart_data_generic->last_update = now_realtime_sec(); // initial value shouldn't be 0
long chart_prio = p_file_info->chart_meta->base_prio;
lgs_mng_do_num_of_logs_charts_init(p_file_info, chart_prio);
lgs_mng_do_custom_charts_init(p_file_info);
}
void generic_chart_update(struct File_info *p_file_info){
chart_data_generic_t *chart_data = p_file_info->chart_meta->chart_data_generic;
if(chart_data->last_update != p_file_info->parser_metrics->last_update){
time_t lag_in_sec = p_file_info->parser_metrics->last_update - chart_data->last_update - 1;
lgs_mng_do_num_of_logs_charts_update(p_file_info, lag_in_sec, chart_data);
lgs_mng_do_custom_charts_update(p_file_info, lag_in_sec);
chart_data->last_update = p_file_info->parser_metrics->last_update;
}
}
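
The generic collector is the smallest example of the gap-filling logic every `*_chart_update()` relies on: `lag_in_sec` counts the whole seconds that passed since the chart was last updated, the shared macros replay the previous totals for each of those seconds, and the new totals are emitted at the parser's `last_update`. A worked example with hypothetical timestamps:

```c
/* Worked example of the lag_in_sec arithmetic with made-up values:
 * the chart was last updated at t=100, the parser is now at t=103. */
#include <stdio.h>
#include <time.h>

int main(void) {
    time_t chart_last_update  = 100;
    time_t parser_last_update = 103;
    long long old_total = 500, new_total = 650;

    time_t lag_in_sec = parser_last_update - chart_last_update - 1;        /* = 2 */

    /* seconds 101 and 102 are backfilled with the old totals... */
    for (time_t sec = parser_last_update - lag_in_sec; sec < parser_last_update; sec++)
        printf("t=%lld  records=%lld (backfilled)\n", (long long) sec, old_total);

    /* ...and second 103 gets the freshly accumulated totals */
    printf("t=%lld  records=%lld (current)\n", (long long) parser_last_update, new_total);
    return 0;
}
```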

@@ -0,0 +1,34 @@
// SPDX-License-Identifier: GPL-3.0-or-later
/** @file rrd_api_generic.h
 * @brief Includes the structure and function definitions for
* generic log charts.
*/
#ifndef RRD_API_GENERIC_H_
#define RRD_API_GENERIC_H_
#include "daemon/common.h"
struct File_info;
typedef struct Chart_data_generic chart_data_generic_t;
#include "../file_info.h"
#include "../circular_buffer.h"
#include "rrd_api.h"
struct Chart_data_generic {
time_t last_update;
/* Number of collected log records */
collected_number num_lines;
};
void generic_chart_init(struct File_info *p_file_info);
void generic_chart_update(struct File_info *p_file_info);
#endif // RRD_API_GENERIC_H_

@@ -0,0 +1,168 @@
// SPDX-License-Identifier: GPL-3.0-or-later
#include "rrd_api_kernel.h"
void kernel_chart_init(struct File_info *p_file_info){
p_file_info->chart_meta->chart_data_kernel = callocz(1, sizeof (struct Chart_data_kernel));
chart_data_kernel_t *chart_data = p_file_info->chart_meta->chart_data_kernel;
chart_data->last_update = now_realtime_sec(); // initial value shouldn't be 0
long chart_prio = p_file_info->chart_meta->base_prio;
lgs_mng_do_num_of_logs_charts_init(p_file_info, chart_prio);
/* Syslog severity level (== Systemd priority) - initialise */
if(p_file_info->parser_config->chart_config & CHART_SYSLOG_SEVER){
lgs_mng_create_chart(
(char *) p_file_info->chartname // type
, "severity_levels" // id
, "Severity Levels" // title
, "severity levels" // units
, "severity" // family
, NULL // context
, RRDSET_TYPE_AREA_NAME // chart_type
, ++chart_prio // priority
, p_file_info->update_every // update_every
);
for(int i = 0; i < SYSLOG_SEVER_ARR_SIZE; i++)
lgs_mng_add_dim(dim_sever_str[i], RRD_ALGORITHM_INCREMENTAL_NAME, 1, 1);
}
/* Subsystem - initialise */
if(p_file_info->parser_config->chart_config & CHART_KMSG_SUBSYSTEM){
chart_data->cs_subsys = lgs_mng_create_chart(
(char *) p_file_info->chartname // type
, "subsystems" // id
, "Subsystems" // title
, "subsystems" // units
, "subsystem" // family
, NULL // context
, RRDSET_TYPE_AREA_NAME // chart_type
, ++chart_prio // priority
, p_file_info->update_every // update_every
);
}
/* Device - initialise */
if(p_file_info->parser_config->chart_config & CHART_KMSG_DEVICE){
chart_data->cs_device = lgs_mng_create_chart(
(char *) p_file_info->chartname // type
, "devices" // id
, "Devices" // title
, "devices" // units
, "device" // family
, NULL // context
, RRDSET_TYPE_AREA_NAME // chart_type
, ++chart_prio // priority
, p_file_info->update_every // update_every
);
}
lgs_mng_do_custom_charts_init(p_file_info);
}
void kernel_chart_update(struct File_info *p_file_info){
chart_data_kernel_t *chart_data = p_file_info->chart_meta->chart_data_kernel;
if(chart_data->last_update != p_file_info->parser_metrics->last_update){
time_t lag_in_sec = p_file_info->parser_metrics->last_update - chart_data->last_update - 1;
lgs_mng_do_num_of_logs_charts_update(p_file_info, lag_in_sec, chart_data);
/* Syslog severity level (== Systemd priority) - update */
if(p_file_info->parser_config->chart_config & CHART_SYSLOG_SEVER){
for(time_t sec = p_file_info->parser_metrics->last_update - lag_in_sec;
sec < p_file_info->parser_metrics->last_update;
sec++){
lgs_mng_update_chart_begin(p_file_info->chartname, "severity_levels");
for(int idx = 0; idx < SYSLOG_SEVER_ARR_SIZE; idx++)
lgs_mng_update_chart_set(dim_sever_str[idx], chart_data->num_sever[idx]);
lgs_mng_update_chart_end(sec);
}
lgs_mng_update_chart_begin(p_file_info->chartname, "severity_levels");
for(int idx = 0; idx < SYSLOG_SEVER_ARR_SIZE; idx++){
chart_data->num_sever[idx] = p_file_info->parser_metrics->kernel->sever[idx];
lgs_mng_update_chart_set(dim_sever_str[idx], chart_data->num_sever[idx]);
}
lgs_mng_update_chart_end(p_file_info->parser_metrics->last_update);
}
/* Subsystem - update */
if(p_file_info->parser_config->chart_config & CHART_KMSG_SUBSYSTEM){
metrics_dict_item_t *it;
for(time_t sec = p_file_info->parser_metrics->last_update - lag_in_sec;
sec < p_file_info->parser_metrics->last_update;
sec++){
lgs_mng_update_chart_begin(p_file_info->chartname, "subsystems");
dfe_start_read(p_file_info->parser_metrics->kernel->subsystem, it){
if(it->dim_initialized)
lgs_mng_update_chart_set(it_dfe.name, (collected_number) it->num);
}
dfe_done(it);
lgs_mng_update_chart_end(sec);
}
dfe_start_write(p_file_info->parser_metrics->kernel->subsystem, it){
if(!it->dim_initialized){
it->dim_initialized = true;
lgs_mng_add_dim_post_init( &chart_data->cs_subsys, it_dfe.name,
RRD_ALGORITHM_INCREMENTAL_NAME, 1, 1);
}
}
dfe_done(it);
lgs_mng_update_chart_begin(p_file_info->chartname, "subsystems");
dfe_start_write(p_file_info->parser_metrics->kernel->subsystem, it){
it->num = it->num_new;
lgs_mng_update_chart_set(it_dfe.name, (collected_number) it->num);
}
dfe_done(it);
lgs_mng_update_chart_end(p_file_info->parser_metrics->last_update);
}
/* Device - update */
if(p_file_info->parser_config->chart_config & CHART_KMSG_DEVICE){
metrics_dict_item_t *it;
for(time_t sec = p_file_info->parser_metrics->last_update - lag_in_sec;
sec < p_file_info->parser_metrics->last_update;
sec++){
lgs_mng_update_chart_begin(p_file_info->chartname, "devices");
dfe_start_read(p_file_info->parser_metrics->kernel->device, it){
if(it->dim_initialized)
lgs_mng_update_chart_set(it_dfe.name, (collected_number) it->num);
}
dfe_done(it);
lgs_mng_update_chart_end(sec);
}
dfe_start_write(p_file_info->parser_metrics->kernel->device, it){
if(!it->dim_initialized){
it->dim_initialized = true;
lgs_mng_add_dim_post_init( &chart_data->cs_device, it_dfe.name,
RRD_ALGORITHM_INCREMENTAL_NAME, 1, 1);
}
}
dfe_done(it);
lgs_mng_update_chart_begin(p_file_info->chartname, "devices");
dfe_start_write(p_file_info->parser_metrics->kernel->device, it){
it->num = it->num_new;
lgs_mng_update_chart_set(it_dfe.name, (collected_number) it->num);
}
dfe_done(it);
lgs_mng_update_chart_end(p_file_info->parser_metrics->last_update);
}
lgs_mng_do_custom_charts_update(p_file_info, lag_in_sec);
chart_data->last_update = p_file_info->parser_metrics->last_update;
}
}
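
The subsystem and device charts cannot know their dimensions at init time, so the update path registers them lazily: a read pass emits values only for dictionary entries whose dimension already exists, a write pass flags new entries and adds their dimensions to the chart created earlier (`lgs_mng_add_dim_post_init()` against the stored `Chart_str`), and a final pass copies the freshly parsed counters and emits them. The sketch below mimics that flow with a plain array standing in for Netdata's `DICTIONARY`/`dfe_*` iteration:

```c
/* Sketch of lazy dimension registration for dictionary-backed metrics
 * (kmsg subsystems/devices, MQTT topics). A fixed array replaces the
 * DICTIONARY; the printf() lines replace the lgs_mng_* calls. */
#include <stdbool.h>
#include <stdio.h>

struct demo_item { const char *name; bool dim_initialized; long long num, num_new; };

int main(void) {
    struct demo_item items[] = {
        { "usb",  true,  12, 15 },   /* dimension already exists            */
        { "acpi", false,  0,  7 },   /* first seen during this cycle        */
    };
    int n = (int) (sizeof(items) / sizeof(items[0]));

    /* register dimensions for entries that appeared after chart creation */
    for (int i = 0; i < n; i++) {
        if (!items[i].dim_initialized) {
            items[i].dim_initialized = true;
            printf("add dimension '%s' to the existing chart\n", items[i].name);
        }
    }

    /* copy the freshly parsed counters and emit one value per dimension */
    for (int i = 0; i < n; i++) {
        items[i].num = items[i].num_new;
        printf("emit %s = %lld\n", items[i].name, items[i].num);
    }
    return 0;
}
```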

@@ -0,0 +1,46 @@
// SPDX-License-Identifier: GPL-3.0-or-later
/** @file rrd_api_kernel.h
 * @brief Includes the structure and function definitions
* for the kernel log charts.
*/
#ifndef RRD_API_KERNEL_H_
#define RRD_API_KERNEL_H_
#include "daemon/common.h"
struct File_info;
typedef struct Chart_data_kernel chart_data_kernel_t;
#include "../file_info.h"
#include "../circular_buffer.h"
#include "rrd_api.h"
#include "rrd_api_systemd.h" // required for dim_sever_str[]
struct Chart_data_kernel {
time_t last_update;
/* Number of collected log records */
collected_number num_lines;
/* Kernel metrics - Syslog Severity value */
collected_number num_sever[SYSLOG_SEVER_ARR_SIZE];
/* Kernel metrics - Subsystem */
struct Chart_str cs_subsys;
// Special case: Subsystem dimension and number are part of Kernel_metrics_t
/* Kernel metrics - Device */
struct Chart_str cs_device;
// Special case: Device dimension and number are part of Kernel_metrics_t
};
void kernel_chart_init(struct File_info *p_file_info);
void kernel_chart_update(struct File_info *p_file_info);
#endif // RRD_API_KERNEL_H_

@@ -0,0 +1,79 @@
// SPDX-License-Identifier: GPL-3.0-or-later
#include "rrd_api_mqtt.h"
void mqtt_chart_init(struct File_info *p_file_info){
p_file_info->chart_meta->chart_data_mqtt = callocz(1, sizeof (struct Chart_data_mqtt));
chart_data_mqtt_t *chart_data = p_file_info->chart_meta->chart_data_mqtt;
chart_data->last_update = now_realtime_sec(); // initial value shouldn't be 0
long chart_prio = p_file_info->chart_meta->base_prio;
lgs_mng_do_num_of_logs_charts_init(p_file_info, chart_prio);
/* Topic - initialise */
if(p_file_info->parser_config->chart_config & CHART_MQTT_TOPIC){
chart_data->cs_topic = lgs_mng_create_chart(
(char *) p_file_info->chartname // type
, "topics" // id
, "Topics" // title
, "topics" // units
, "topic" // family
, NULL // context
, RRDSET_TYPE_AREA_NAME // chart_type
, ++chart_prio // priority
, p_file_info->update_every // update_every
);
}
lgs_mng_do_custom_charts_init(p_file_info);
}
void mqtt_chart_update(struct File_info *p_file_info){
chart_data_mqtt_t *chart_data = p_file_info->chart_meta->chart_data_mqtt;
if(chart_data->last_update != p_file_info->parser_metrics->last_update){
time_t lag_in_sec = p_file_info->parser_metrics->last_update - chart_data->last_update - 1;
lgs_mng_do_num_of_logs_charts_update(p_file_info, lag_in_sec, chart_data);
/* Topic - update */
if(p_file_info->parser_config->chart_config & CHART_MQTT_TOPIC){
metrics_dict_item_t *it;
for(time_t sec = p_file_info->parser_metrics->last_update - lag_in_sec;
sec < p_file_info->parser_metrics->last_update;
sec++){
lgs_mng_update_chart_begin(p_file_info->chartname, "topics");
dfe_start_read(p_file_info->parser_metrics->mqtt->topic, it){
if(it->dim_initialized)
lgs_mng_update_chart_set(it_dfe.name, (collected_number) it->num);
}
dfe_done(it);
lgs_mng_update_chart_end(sec);
}
dfe_start_write(p_file_info->parser_metrics->mqtt->topic, it){
if(!it->dim_initialized){
it->dim_initialized = true;
lgs_mng_add_dim_post_init( &chart_data->cs_topic, it_dfe.name,
RRD_ALGORITHM_INCREMENTAL_NAME, 1, 1);
}
}
dfe_done(it);
lgs_mng_update_chart_begin(p_file_info->chartname, "topics");
dfe_start_write(p_file_info->parser_metrics->mqtt->topic, it){
it->num = it->num_new;
lgs_mng_update_chart_set(it_dfe.name, (collected_number) it->num);
}
dfe_done(it);
lgs_mng_update_chart_end(p_file_info->parser_metrics->last_update);
}
lgs_mng_do_custom_charts_update(p_file_info, lag_in_sec);
chart_data->last_update = p_file_info->parser_metrics->last_update;
}
}

@@ -0,0 +1,37 @@
// SPDX-License-Identifier: GPL-3.0-or-later
/** @file rrd_api_mqtt.h
 * @brief Includes the structure and function definitions
* for the mqtt log charts.
*/
#ifndef RRD_API_MQTT_H_
#define RRD_API_MQTT_H_
#include "daemon/common.h"
struct File_info;
typedef struct Chart_data_mqtt chart_data_mqtt_t;
#include "../file_info.h"
#include "../circular_buffer.h"
#include "rrd_api.h"
struct Chart_data_mqtt {
time_t last_update;
/* Number of collected log records */
collected_number num_lines;
/* MQTT metrics - Topic */
struct Chart_str cs_topic;
// Special case: Topic dimension and number are part of Mqtt_metrics_t
};
void mqtt_chart_init(struct File_info *p_file_info);
void mqtt_chart_update(struct File_info *p_file_info);
#endif // RRD_API_MQTT_H_

@@ -0,0 +1,298 @@
// SPDX-License-Identifier: GPL-3.0-or-later
#include "rrd_api_stats.h"
static const char *const rrd_type = "netdata";
static char **dim_db_timings_write, **dim_db_timings_rotate;
extern bool logsmanagement_should_exit;
static void stats_charts_update(void){
/* Circular buffer total memory stats - update */
lgs_mng_update_chart_begin(rrd_type, "circular_buffers_mem_total_cached");
for(int i = 0; i < p_file_infos_arr->count; i++){
struct File_info *p_file_info = p_file_infos_arr->data[i];
if(!p_file_info->parser_config)
continue;
lgs_mng_update_chart_set(p_file_info->chartname,
__atomic_load_n(&p_file_info->circ_buff->total_cached_mem, __ATOMIC_RELAXED));
}
lgs_mng_update_chart_end(0);
/* Circular buffer number of items - update */
lgs_mng_update_chart_begin(rrd_type, "circular_buffers_num_of_items");
for(int i = 0; i < p_file_infos_arr->count; i++){
struct File_info *p_file_info = p_file_infos_arr->data[i];
if(!p_file_info->parser_config)
continue;
lgs_mng_update_chart_set(p_file_info->chartname, p_file_info->circ_buff->num_of_items);
}
lgs_mng_update_chart_end(0);
/* Circular buffer uncompressed buffered items memory stats - update */
lgs_mng_update_chart_begin(rrd_type, "circular_buffers_mem_uncompressed_used");
for(int i = 0; i < p_file_infos_arr->count; i++){
struct File_info *p_file_info = p_file_infos_arr->data[i];
if(!p_file_info->parser_config)
continue;
lgs_mng_update_chart_set(p_file_info->chartname,
__atomic_load_n(&p_file_info->circ_buff->text_size_total, __ATOMIC_RELAXED));
}
lgs_mng_update_chart_end(0);
/* Circular buffer compressed buffered items memory stats - update */
lgs_mng_update_chart_begin(rrd_type, "circular_buffers_mem_compressed_used");
for(int i = 0; i < p_file_infos_arr->count; i++){
struct File_info *p_file_info = p_file_infos_arr->data[i];
if(!p_file_info->parser_config)
continue;
lgs_mng_update_chart_set(p_file_info->chartname,
__atomic_load_n(&p_file_info->circ_buff->text_compressed_size_total, __ATOMIC_RELAXED));
}
lgs_mng_update_chart_end(0);
/* Compression stats - update */
lgs_mng_update_chart_begin(rrd_type, "average_compression_ratio");
for(int i = 0; i < p_file_infos_arr->count; i++){
struct File_info *p_file_info = p_file_infos_arr->data[i];
if(!p_file_info->parser_config)
continue;
lgs_mng_update_chart_set(p_file_info->chartname,
__atomic_load_n(&p_file_info->circ_buff->compression_ratio, __ATOMIC_RELAXED));
}
lgs_mng_update_chart_end(0);
/* DB disk usage stats - update */
lgs_mng_update_chart_begin(rrd_type, "database_disk_usage");
for(int i = 0; i < p_file_infos_arr->count; i++){
struct File_info *p_file_info = p_file_infos_arr->data[i];
if(!p_file_info->parser_config)
continue;
lgs_mng_update_chart_set(p_file_info->chartname,
__atomic_load_n(&p_file_info->blob_total_size, __ATOMIC_RELAXED));
}
lgs_mng_update_chart_end(0);
/* DB timings - update */
lgs_mng_update_chart_begin(rrd_type, "database_timings");
for(int i = 0; i < p_file_infos_arr->count; i++){
struct File_info *p_file_info = p_file_infos_arr->data[i];
if(!p_file_info->parser_config)
continue;
lgs_mng_update_chart_set(dim_db_timings_write[i],
__atomic_exchange_n(&p_file_info->db_write_duration, 0, __ATOMIC_RELAXED));
lgs_mng_update_chart_set(dim_db_timings_rotate[i],
__atomic_exchange_n(&p_file_info->db_rotate_duration, 0, __ATOMIC_RELAXED));
}
lgs_mng_update_chart_end(0);
/* Query CPU time per MiB (user) - update */
lgs_mng_update_chart_begin(rrd_type, "query_cpu_time_per_MiB_user");
for(int i = 0; i < p_file_infos_arr->count; i++){
struct File_info *p_file_info = p_file_infos_arr->data[i];
if(!p_file_info->parser_config)
continue;
lgs_mng_update_chart_set(p_file_info->chartname,
__atomic_load_n(&p_file_info->cpu_time_per_mib.user, __ATOMIC_RELAXED));
}
lgs_mng_update_chart_end(0);
/* Query CPU time per MiB (system) - update */
lgs_mng_update_chart_begin(rrd_type, "query_cpu_time_per_MiB_sys");
for(int i = 0; i < p_file_infos_arr->count; i++){
struct File_info *p_file_info = p_file_infos_arr->data[i];
if(!p_file_info->parser_config)
continue;
lgs_mng_update_chart_set(p_file_info->chartname,
__atomic_load_n(&p_file_info->cpu_time_per_mib.sys, __ATOMIC_RELAXED));
}
lgs_mng_update_chart_end(0);
}
void stats_charts_init(void *arg){
netdata_mutex_t *p_stdout_mut = (netdata_mutex_t *) arg;
netdata_mutex_lock(p_stdout_mut);
int chart_prio = NETDATA_CHART_PRIO_LOGS_STATS_BASE;
/* Circular buffer total memory stats - initialise */
lgs_mng_create_chart(
rrd_type // type
, "circular_buffers_mem_total_cached" // id
, "Circular buffers total cached memory" // title
, "bytes" // units
, "logsmanagement" // family
, NULL // context
, RRDSET_TYPE_STACKED_NAME // chart_type
, ++chart_prio // priority
, g_logs_manag_config.update_every // update_every
);
for(int i = 0; i < p_file_infos_arr->count; i++)
lgs_mng_add_dim(p_file_infos_arr->data[i]->chartname, RRD_ALGORITHM_ABSOLUTE_NAME, 1, 1);
/* Circular buffer number of items - initialise */
lgs_mng_create_chart(
rrd_type // type
, "circular_buffers_num_of_items" // id
, "Circular buffers number of items" // title
, "items" // units
, "logsmanagement" // family
, NULL // context
, RRDSET_TYPE_LINE_NAME // chart_type
, ++chart_prio // priority
, g_logs_manag_config.update_every // update_every
);
for(int i = 0; i < p_file_infos_arr->count; i++)
lgs_mng_add_dim(p_file_infos_arr->data[i]->chartname, RRD_ALGORITHM_ABSOLUTE_NAME, 1, 1);
/* Circular buffer uncompressed buffered items memory stats - initialise */
lgs_mng_create_chart(
rrd_type // type
, "circular_buffers_mem_uncompressed_used" // id
, "Circular buffers used memory for uncompressed logs" // title
, "bytes" // units
, "logsmanagement" // family
, NULL // context
, RRDSET_TYPE_STACKED_NAME // chart_type
, ++chart_prio // priority
, g_logs_manag_config.update_every // update_every
);
for(int i = 0; i < p_file_infos_arr->count; i++)
lgs_mng_add_dim(p_file_infos_arr->data[i]->chartname, RRD_ALGORITHM_ABSOLUTE_NAME, 1, 1);
/* Circular buffer compressed buffered items memory stats - initialise */
lgs_mng_create_chart(
rrd_type // type
, "circular_buffers_mem_compressed_used" // id
, "Circular buffers used memory for compressed logs" // title
, "bytes" // units
, "logsmanagement" // family
, NULL // context
, RRDSET_TYPE_STACKED_NAME // chart_type
, ++chart_prio // priority
, g_logs_manag_config.update_every // update_every
);
for(int i = 0; i < p_file_infos_arr->count; i++)
lgs_mng_add_dim(p_file_infos_arr->data[i]->chartname, RRD_ALGORITHM_ABSOLUTE_NAME, 1, 1);
/* Compression stats - initialise */
lgs_mng_create_chart(
rrd_type // type
, "average_compression_ratio" // id
, "Average compression ratio" // title
, "uncompressed / compressed ratio" // units
, "logsmanagement" // family
, NULL // context
, RRDSET_TYPE_LINE_NAME // chart_type
, ++chart_prio // priority
, g_logs_manag_config.update_every // update_every
);
for(int i = 0; i < p_file_infos_arr->count; i++)
lgs_mng_add_dim(p_file_infos_arr->data[i]->chartname, RRD_ALGORITHM_ABSOLUTE_NAME, 1, 1);
/* DB disk usage stats - initialise */
lgs_mng_create_chart(
rrd_type // type
, "database_disk_usage" // id
, "Database disk usage" // title
, "bytes" // units
, "logsmanagement" // family
, NULL // context
, RRDSET_TYPE_STACKED_NAME // chart_type
, ++chart_prio // priority
, g_logs_manag_config.update_every // update_every
);
for(int i = 0; i < p_file_infos_arr->count; i++)
lgs_mng_add_dim(p_file_infos_arr->data[i]->chartname, RRD_ALGORITHM_ABSOLUTE_NAME, 1, 1);
/* DB timings - initialise */
lgs_mng_create_chart(
rrd_type // type
, "database_timings" // id
, "Database timings" // title
, "ns" // units
, "logsmanagement" // family
, NULL // context
, RRDSET_TYPE_STACKED_NAME // chart_type
, ++chart_prio // priority
, g_logs_manag_config.update_every // update_every
);
for(int i = 0; i < p_file_infos_arr->count; i++){
struct File_info *p_file_info = p_file_infos_arr->data[i];
dim_db_timings_write = reallocz(dim_db_timings_write, (i + 1) * sizeof(char *));
dim_db_timings_rotate = reallocz(dim_db_timings_rotate, (i + 1) * sizeof(char *));
dim_db_timings_write[i] = mallocz(snprintf(NULL, 0, "%s_write", p_file_info->chartname) + 1);
sprintf(dim_db_timings_write[i], "%s_write", p_file_info->chartname);
lgs_mng_add_dim(dim_db_timings_write[i], RRD_ALGORITHM_ABSOLUTE_NAME, 1, 1);
dim_db_timings_rotate[i] = mallocz(snprintf(NULL, 0, "%s_rotate", p_file_info->chartname) + 1);
sprintf(dim_db_timings_rotate[i], "%s_rotate", p_file_info->chartname);
lgs_mng_add_dim(dim_db_timings_rotate[i], RRD_ALGORITHM_ABSOLUTE_NAME, 1, 1);
}
/* Query CPU time per MiB (user) - initialise */
lgs_mng_create_chart(
rrd_type // type
, "query_cpu_time_per_MiB_user" // id
, "CPU user time per MiB of query results" // title
, "usec/MiB" // units
, "logsmanagement" // family
, NULL // context
, RRDSET_TYPE_STACKED_NAME // chart_type
, ++chart_prio // priority
, g_logs_manag_config.update_every // update_every
);
for(int i = 0; i < p_file_infos_arr->count; i++)
lgs_mng_add_dim(p_file_infos_arr->data[i]->chartname, RRD_ALGORITHM_INCREMENTAL_NAME, 1, 1);
/* Query CPU time per MiB (system) - initialise */
lgs_mng_create_chart(
rrd_type // type
, "query_cpu_time_per_MiB_sys" // id
, "CPU system time per MiB of query results" // title
, "usec/MiB" // units
, "logsmanagement" // family
, NULL // context
, RRDSET_TYPE_STACKED_NAME // chart_type
, ++chart_prio // priority
, g_logs_manag_config.update_every // update_every
);
for(int i = 0; i < p_file_infos_arr->count; i++)
lgs_mng_add_dim(p_file_infos_arr->data[i]->chartname, RRD_ALGORITHM_INCREMENTAL_NAME, 1, 1);
netdata_mutex_unlock(p_stdout_mut);
heartbeat_t hb;
heartbeat_init(&hb);
usec_t step_ut = g_logs_manag_config.update_every * USEC_PER_SEC;
while (0 == __atomic_load_n(&logsmanagement_should_exit, __ATOMIC_RELAXED)) {
heartbeat_next(&hb, step_ut);
netdata_mutex_lock(p_stdout_mut);
stats_charts_update();
fflush(stdout);
netdata_mutex_unlock(p_stdout_mut);
}
collector_info("[stats charts]: thread exiting...");
}
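
The `database_timings` dimensions are the only ones whose ids are heap-allocated, using the `snprintf(NULL, 0, ...) + 1` idiom to size the buffer before formatting into it. The same idiom in isolation, with plain `malloc()`/`free()` instead of the `mallocz()` wrapper and a made-up chart name:

```c
/* The snprintf(NULL, 0, ...) + 1 sizing idiom used for the per-source
 * "<chartname>_write" / "<chartname>_rotate" dimension ids. */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    const char *chartname = "docker_events";              /* hypothetical source */

    int len = snprintf(NULL, 0, "%s_write", chartname);   /* length without NUL  */
    char *dim_write = malloc((size_t) len + 1);
    if (!dim_write)
        return 1;
    snprintf(dim_write, (size_t) len + 1, "%s_write", chartname);

    printf("dimension id: %s\n", dim_write);
    free(dim_write);
    return 0;
}
```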

@@ -0,0 +1,19 @@
// SPDX-License-Identifier: GPL-3.0-or-later
/** @file rrd_api_stats.h
 * @brief Includes the structure and function definitions
* for logs management stats charts.
*/
#ifndef RRD_API_STATS_H_
#define RRD_API_STATS_H_
#include "daemon/common.h"
struct File_info;
#include "../file_info.h"
void stats_charts_init(void *arg);
#endif // RRD_API_STATS_H_

@@ -0,0 +1,206 @@
// SPDX-License-Identifier: GPL-3.0-or-later
#include "rrd_api_systemd.h"
const char *dim_sever_str[SYSLOG_SEVER_ARR_SIZE] = {
"0:Emergency",
"1:Alert",
"2:Critical",
"3:Error",
"4:Warning",
"5:Notice",
"6:Informational",
"7:Debug",
"uknown"
};
static const char *dim_facil_str[SYSLOG_FACIL_ARR_SIZE] = {
"0:kernel",
"1:user-level",
"2:mail",
"3:system",
"4:sec/auth",
"5:syslog",
"6:lpd/printer",
"7:news/nntp",
"8:uucp",
"9:time",
"10:sec/auth",
"11:ftp",
"12:ntp",
"13:logaudit",
"14:logalert",
"15:clock",
"16:local0",
"17:local1",
"18:local2",
"19:local3",
"20:local4",
"21:local5",
"22:local6",
"23:local7",
"uknown"
};
void systemd_chart_init(struct File_info *p_file_info){
p_file_info->chart_meta->chart_data_systemd = callocz(1, sizeof (struct Chart_data_systemd));
chart_data_systemd_t *chart_data = p_file_info->chart_meta->chart_data_systemd;
chart_data->last_update = now_realtime_sec(); // initial value shouldn't be 0
long chart_prio = p_file_info->chart_meta->base_prio;
lgs_mng_do_num_of_logs_charts_init(p_file_info, chart_prio);
/* Syslog priority value - initialise */
if(p_file_info->parser_config->chart_config & CHART_SYSLOG_PRIOR){
lgs_mng_create_chart(
(char *) p_file_info->chartname // type
, "priority_values" // id
, "Priority Values" // title
, "priority values" // units
, "priority" // family
, NULL // context
, RRDSET_TYPE_AREA_NAME // chart_type
, ++chart_prio // priority
, p_file_info->update_every // update_every
);
for(int i = 0; i < SYSLOG_PRIOR_ARR_SIZE - 1; i++){
char dim_id[4];
snprintfz(dim_id, 4, "%d", i);
chart_data->dim_prior[i] = strdupz(dim_id);
lgs_mng_add_dim(chart_data->dim_prior[i], RRD_ALGORITHM_INCREMENTAL_NAME, 1, 1);
}
chart_data->dim_prior[SYSLOG_PRIOR_ARR_SIZE - 1] = "unknown";
lgs_mng_add_dim(chart_data->dim_prior[SYSLOG_PRIOR_ARR_SIZE - 1],
RRD_ALGORITHM_INCREMENTAL_NAME, 1, 1);
}
/* Syslog severity level (== Systemd priority) - initialise */
if(p_file_info->parser_config->chart_config & CHART_SYSLOG_SEVER){
lgs_mng_create_chart(
(char *) p_file_info->chartname // type
, "severity_levels" // id
, "Severity Levels" // title
, "severity levels" // units
, "priority" // family
, NULL // context
, RRDSET_TYPE_AREA_NAME // chart_type
, ++chart_prio // priority
, p_file_info->update_every // update_every
);
for(int i = 0; i < SYSLOG_SEVER_ARR_SIZE; i++)
lgs_mng_add_dim(dim_sever_str[i], RRD_ALGORITHM_INCREMENTAL_NAME, 1, 1);
}
/* Syslog facility level - initialise */
if(p_file_info->parser_config->chart_config & CHART_SYSLOG_FACIL){
lgs_mng_create_chart(
(char *) p_file_info->chartname // type
, "facility_levels" // id
, "Facility Levels" // title
, "facility levels" // units
, "priority" // family
, NULL // context
, RRDSET_TYPE_AREA_NAME // chart_type
, ++chart_prio // priority
, p_file_info->update_every // update_every
);
for(int i = 0; i < SYSLOG_FACIL_ARR_SIZE; i++)
lgs_mng_add_dim(dim_facil_str[i], RRD_ALGORITHM_INCREMENTAL_NAME, 1, 1);
}
lgs_mng_do_custom_charts_init(p_file_info);
}
void systemd_chart_update(struct File_info *p_file_info){
chart_data_systemd_t *chart_data = p_file_info->chart_meta->chart_data_systemd;
if(chart_data->last_update != p_file_info->parser_metrics->last_update){
time_t lag_in_sec = p_file_info->parser_metrics->last_update - chart_data->last_update - 1;
lgs_mng_do_num_of_logs_charts_update(p_file_info, lag_in_sec, chart_data);
/* Syslog priority value - update */
if(p_file_info->parser_config->chart_config & CHART_SYSLOG_PRIOR){
for(time_t sec = p_file_info->parser_metrics->last_update - lag_in_sec;
sec < p_file_info->parser_metrics->last_update;
sec++){
lgs_mng_update_chart_begin(p_file_info->chartname, "priority_values");
for(int idx = 0; idx < SYSLOG_PRIOR_ARR_SIZE; idx++){
if(chart_data->num_prior[idx])
lgs_mng_update_chart_set(chart_data->dim_prior[idx], chart_data->num_prior[idx]);
}
lgs_mng_update_chart_end(sec);
}
lgs_mng_update_chart_begin(p_file_info->chartname, "priority_values");
for(int idx = 0; idx < SYSLOG_PRIOR_ARR_SIZE; idx++){
if(p_file_info->parser_metrics->systemd->prior[idx]){
chart_data->num_prior[idx] = p_file_info->parser_metrics->systemd->prior[idx];
lgs_mng_update_chart_set(chart_data->dim_prior[idx], chart_data->num_prior[idx]);
}
}
lgs_mng_update_chart_end(p_file_info->parser_metrics->last_update);
}
/* Syslog severity level (== Systemd priority) - update chart */
if(p_file_info->parser_config->chart_config & CHART_SYSLOG_SEVER){
for(time_t sec = p_file_info->parser_metrics->last_update - lag_in_sec;
sec < p_file_info->parser_metrics->last_update;
sec++){
lgs_mng_update_chart_begin(p_file_info->chartname, "severity_levels");
for(int idx = 0; idx < SYSLOG_SEVER_ARR_SIZE; idx++){
if(chart_data->num_sever[idx])
lgs_mng_update_chart_set(dim_sever_str[idx], chart_data->num_sever[idx]);
}
lgs_mng_update_chart_end(sec);
}
lgs_mng_update_chart_begin(p_file_info->chartname, "severity_levels");
for(int idx = 0; idx < SYSLOG_SEVER_ARR_SIZE; idx++){
if(p_file_info->parser_metrics->systemd->sever[idx]){
chart_data->num_sever[idx] = p_file_info->parser_metrics->systemd->sever[idx];
lgs_mng_update_chart_set(dim_sever_str[idx], chart_data->num_sever[idx]);
}
}
lgs_mng_update_chart_end(p_file_info->parser_metrics->last_update);
}
/* Syslog facility value - update chart */
if(p_file_info->parser_config->chart_config & CHART_SYSLOG_FACIL){
for(time_t sec = p_file_info->parser_metrics->last_update - lag_in_sec;
sec < p_file_info->parser_metrics->last_update;
sec++){
lgs_mng_update_chart_begin(p_file_info->chartname, "facility_levels");
for(int idx = 0; idx < SYSLOG_FACIL_ARR_SIZE; idx++){
if(chart_data->num_facil[idx])
lgs_mng_update_chart_set(dim_facil_str[idx], chart_data->num_facil[idx]);
}
lgs_mng_update_chart_end(sec);
}
lgs_mng_update_chart_begin(p_file_info->chartname, "facility_levels");
for(int idx = 0; idx < SYSLOG_FACIL_ARR_SIZE; idx++){
if(p_file_info->parser_metrics->systemd->facil[idx]){
chart_data->num_facil[idx] = p_file_info->parser_metrics->systemd->facil[idx];
lgs_mng_update_chart_set(dim_facil_str[idx], chart_data->num_facil[idx]);
}
}
lgs_mng_update_chart_end(p_file_info->parser_metrics->last_update);
}
lgs_mng_do_custom_charts_update(p_file_info, lag_in_sec);
chart_data->last_update = p_file_info->parser_metrics->last_update;
}
}

@@ -0,0 +1,45 @@
// SPDX-License-Identifier: GPL-3.0-or-later
/** @file rrd_api_systemd.h
 * @brief Includes the structure and function definitions
 * for the systemd log charts.
*/
#ifndef RRD_API_SYSTEMD_H_
#define RRD_API_SYSTEMD_H_
#include "daemon/common.h"
struct File_info;
typedef struct Chart_data_systemd chart_data_systemd_t;
#include "../file_info.h"
#include "../circular_buffer.h"
#include "rrd_api.h"
extern const char *dim_sever_str[SYSLOG_SEVER_ARR_SIZE];
struct Chart_data_systemd {
time_t last_update;
/* Number of collected log records */
collected_number num_lines;
/* Systemd metrics - Syslog Priority value */
char *dim_prior[193];
collected_number num_prior[193];
/* Systemd metrics - Syslog Severity value */
collected_number num_sever[9];
/* Systemd metrics - Syslog Facility value */
collected_number num_facil[25];
};
void systemd_chart_init(struct File_info *p_file_info);
void systemd_chart_update(struct File_info *p_file_info);
#endif // RRD_API_SYSTEMD_H_
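
The literal array sizes in `Chart_data_systemd` follow directly from the syslog PRI encoding: a priority value is `facility * 8 + severity`, so 24 facilities times 8 severities cover 0..191 and one extra "unknown" slot gives 193 entries, while severity and facility on their own need 8 + 1 and 24 + 1 entries respectively. A quick decomposition example:

```c
/* Why the arrays above hold 193, 9 and 25 entries: a syslog PRI value is
 * facility * 8 + severity (0..191), and each metric keeps one extra slot
 * for values that failed to parse ("unknown"). */
#include <stdio.h>

int main(void) {
    int pri = 165;                 /* e.g. a "<165>" syslog header            */
    int facility = pri / 8;        /* 165 / 8 = 20  -> "20:local4"            */
    int severity = pri % 8;        /* 165 % 8 = 5   -> "5:Notice"             */
    printf("priority %d -> facility %d, severity %d\n", pri, facility, severity);
    return 0;
}
```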

@@ -0,0 +1,716 @@
// SPDX-License-Identifier: GPL-3.0-or-later
#include "rrd_api_web_log.h"
void web_log_chart_init(struct File_info *p_file_info){
p_file_info->chart_meta->chart_data_web_log = callocz(1, sizeof (struct Chart_data_web_log));
chart_data_web_log_t *chart_data = p_file_info->chart_meta->chart_data_web_log;
chart_data->last_update = now_realtime_sec(); // initial value shouldn't be 0
long chart_prio = p_file_info->chart_meta->base_prio;
lgs_mng_do_num_of_logs_charts_init(p_file_info, chart_prio);
/* Vhost - initialise */
if(p_file_info->parser_config->chart_config & CHART_VHOST){
chart_data->cs_vhosts = lgs_mng_create_chart(
(char *) p_file_info->chartname // type
, "vhost" // id
, "Requests by Vhost" // title
, "requests" // units
, "vhost" // family
, NULL // context
, RRDSET_TYPE_AREA_NAME // chart_type
, ++chart_prio // priority
, p_file_info->update_every // update_every
);
}
/* Port - initialise */
if(p_file_info->parser_config->chart_config & CHART_PORT){
chart_data->cs_ports = lgs_mng_create_chart(
(char *) p_file_info->chartname // type
, "port" // id
, "Requests by Port" // title
, "requests" // units
, "port" // family
, NULL // context
, RRDSET_TYPE_AREA_NAME // chart_type
, ++chart_prio // priority
, p_file_info->update_every // update_every
);
}
/* IP Version - initialise */
if(p_file_info->parser_config->chart_config & CHART_IP_VERSION){
lgs_mng_create_chart(
(char *) p_file_info->chartname // type
, "ip_version" // id
, "Requests by IP version" // title
, "requests" // units
, "ip_version" // family
, NULL // context
, RRDSET_TYPE_AREA_NAME // chart_type
, ++chart_prio // priority
, p_file_info->update_every // update_every
);
lgs_mng_add_dim("ipv4", RRD_ALGORITHM_INCREMENTAL_NAME, 1, 1);
lgs_mng_add_dim("ipv6", RRD_ALGORITHM_INCREMENTAL_NAME, 1, 1);
lgs_mng_add_dim("invalid", RRD_ALGORITHM_INCREMENTAL_NAME, 1, 1);
}
/* Request client current poll - initialise */
if(p_file_info->parser_config->chart_config & CHART_REQ_CLIENT_CURRENT){
lgs_mng_create_chart(
(char *) p_file_info->chartname // type
, "clients" // id
, "Current Poll Unique Client IPs" // title
, "unique ips" // units
, "clients" // family
, NULL // context
, RRDSET_TYPE_AREA_NAME // chart_type
, ++chart_prio // priority
, p_file_info->update_every // update_every
);
lgs_mng_add_dim("ipv4", RRD_ALGORITHM_INCREMENTAL_NAME, 1, 1);
lgs_mng_add_dim("ipv6", RRD_ALGORITHM_INCREMENTAL_NAME, 1, 1);
}
/* Request client all-time - initialise */
if(p_file_info->parser_config->chart_config & CHART_REQ_CLIENT_ALL_TIME){
lgs_mng_create_chart(
(char *) p_file_info->chartname // type
, "clients_all" // id
, "All Time Unique Client IPs" // title
, "unique ips" // units
, "clients" // family
, NULL // context
, RRDSET_TYPE_AREA_NAME // chart_type
, ++chart_prio // priority
, p_file_info->update_every // update_every
);
lgs_mng_add_dim("ipv4", RRD_ALGORITHM_ABSOLUTE_NAME, 1, 1);
lgs_mng_add_dim("ipv6", RRD_ALGORITHM_ABSOLUTE_NAME, 1, 1);
}
/* Request methods - initialise */
if(p_file_info->parser_config->chart_config & CHART_REQ_METHODS){
lgs_mng_create_chart(
(char *) p_file_info->chartname // type
, "http_methods" // id
, "Requests Per HTTP Method" // title
, "requests" // units
, "http_methods" // family
, NULL // context
, RRDSET_TYPE_AREA_NAME // chart_type
, ++chart_prio // priority
, p_file_info->update_every // update_every
);
for(int j = 0; j < REQ_METHOD_ARR_SIZE; j++)
lgs_mng_add_dim(req_method_str[j], RRD_ALGORITHM_INCREMENTAL_NAME, 1, 1);
}
/* Request protocol - initialise */
if(p_file_info->parser_config->chart_config & CHART_REQ_PROTO){
lgs_mng_create_chart(
(char *) p_file_info->chartname // type
, "http_versions" // id
, "Requests Per HTTP Version" // title
, "requests" // units
, "http_versions" // family
, NULL // context
, RRDSET_TYPE_AREA_NAME // chart_type
, ++chart_prio // priority
, p_file_info->update_every // update_every
);
lgs_mng_add_dim("1.0", RRD_ALGORITHM_INCREMENTAL_NAME, 1, 1);
lgs_mng_add_dim("1.1", RRD_ALGORITHM_INCREMENTAL_NAME, 1, 1);
lgs_mng_add_dim("2.0", RRD_ALGORITHM_INCREMENTAL_NAME, 1, 1);
lgs_mng_add_dim("other", RRD_ALGORITHM_INCREMENTAL_NAME, 1, 1);
}
/* Request bandwidth - initialise */
if(p_file_info->parser_config->chart_config & CHART_BANDWIDTH){
lgs_mng_create_chart(
(char *) p_file_info->chartname // type
, "bandwidth" // id
, "Bandwidth" // title
, "kilobits" // units
, "bandwidth" // family
, NULL // context
, RRDSET_TYPE_AREA_NAME // chart_type
, ++chart_prio // priority
, p_file_info->update_every // update_every
);
lgs_mng_add_dim("received", RRD_ALGORITHM_INCREMENTAL_NAME, 8, 1000);
lgs_mng_add_dim("sent", RRD_ALGORITHM_INCREMENTAL_NAME, -8, 1000);
}
/* Request processing time - initialise */
if(p_file_info->parser_config->chart_config & CHART_REQ_PROC_TIME){
lgs_mng_create_chart(
(char *) p_file_info->chartname // type
, "timings" // id
, "Request Processing Time" // title
, "milliseconds" // units
, "timings" // family
, NULL // context
, RRDSET_TYPE_LINE_NAME // chart_type
, ++chart_prio // priority
, p_file_info->update_every // update_every
);
lgs_mng_add_dim("min", RRD_ALGORITHM_ABSOLUTE_NAME, 1, 1000);
lgs_mng_add_dim("max", RRD_ALGORITHM_ABSOLUTE_NAME, 1, 1000);
lgs_mng_add_dim("avg", RRD_ALGORITHM_ABSOLUTE_NAME, 1, 1000);
}
/* Response code family - initialise */
if(p_file_info->parser_config->chart_config & CHART_RESP_CODE_FAMILY){
lgs_mng_create_chart(
(char *) p_file_info->chartname // type
, "responses" // id
, "Response Codes" // title
, "requests" // units
, "responses" // family
, NULL // context
, RRDSET_TYPE_AREA_NAME // chart_type
, ++chart_prio // priority
, p_file_info->update_every // update_every
);
lgs_mng_add_dim("1xx", RRD_ALGORITHM_INCREMENTAL_NAME, 1, 1);
lgs_mng_add_dim("2xx", RRD_ALGORITHM_INCREMENTAL_NAME, 1, 1);
lgs_mng_add_dim("3xx", RRD_ALGORITHM_INCREMENTAL_NAME, 1, 1);
lgs_mng_add_dim("4xx", RRD_ALGORITHM_INCREMENTAL_NAME, 1, 1);
lgs_mng_add_dim("5xx", RRD_ALGORITHM_INCREMENTAL_NAME, 1, 1);
lgs_mng_add_dim("other", RRD_ALGORITHM_INCREMENTAL_NAME, 1, 1);
}
/* Response code - initialise */
if(p_file_info->parser_config->chart_config & CHART_RESP_CODE){
lgs_mng_create_chart(
(char *) p_file_info->chartname // type
, "detailed_responses" // id
, "Detailed Response Codes" // title
, "requests" // units
, "responses" // family
, NULL // context
, RRDSET_TYPE_AREA_NAME // chart_type
, ++chart_prio // priority
, p_file_info->update_every // update_every
);
for(int idx = 0; idx < RESP_CODE_ARR_SIZE - 1; idx++){
char dim_name[4];
snprintfz(dim_name, 4, "%d", idx + 100);
lgs_mng_add_dim(dim_name, RRD_ALGORITHM_INCREMENTAL_NAME, 1, 1);
}
}
/* Response code type - initialise */
if(p_file_info->parser_config->chart_config & CHART_RESP_CODE_TYPE){
lgs_mng_create_chart(
(char *) p_file_info->chartname // type
, "response_types" // id
, "Response Statuses" // title
, "requests" // units
, "responses" // family
, NULL // context
, RRDSET_TYPE_AREA_NAME // chart_type
, ++chart_prio // priority
, p_file_info->update_every // update_every
);
lgs_mng_add_dim("success", RRD_ALGORITHM_INCREMENTAL_NAME, 1, 1);
lgs_mng_add_dim("redirect", RRD_ALGORITHM_INCREMENTAL_NAME, 1, 1);
lgs_mng_add_dim("bad", RRD_ALGORITHM_INCREMENTAL_NAME, 1, 1);
lgs_mng_add_dim("error", RRD_ALGORITHM_INCREMENTAL_NAME, 1, 1);
lgs_mng_add_dim("other", RRD_ALGORITHM_INCREMENTAL_NAME, 1, 1);
}
/* SSL protocol - initialise */
if(p_file_info->parser_config->chart_config & CHART_SSL_PROTO){
lgs_mng_create_chart(
(char *) p_file_info->chartname // type
, "ssl_protocol" // id
, "Requests Per SSL Protocol" // title
, "requests" // units
, "ssl_protocol" // family
, NULL // context
, RRDSET_TYPE_AREA_NAME // chart_type
, ++chart_prio // priority
, p_file_info->update_every // update_every
);
lgs_mng_add_dim("TLSV1", RRD_ALGORITHM_INCREMENTAL_NAME, 1, 1);
lgs_mng_add_dim("TLSV1.1", RRD_ALGORITHM_INCREMENTAL_NAME, 1, 1);
lgs_mng_add_dim("TLSV1.2", RRD_ALGORITHM_INCREMENTAL_NAME, 1, 1);
lgs_mng_add_dim("TLSV1.3", RRD_ALGORITHM_INCREMENTAL_NAME, 1, 1);
lgs_mng_add_dim("SSLV2", RRD_ALGORITHM_INCREMENTAL_NAME, 1, 1);
lgs_mng_add_dim("SSLV3", RRD_ALGORITHM_INCREMENTAL_NAME, 1, 1);
lgs_mng_add_dim("other", RRD_ALGORITHM_INCREMENTAL_NAME, 1, 1);
}
/* SSL cipher suite - initialise */
if(p_file_info->parser_config->chart_config & CHART_SSL_CIPHER){
chart_data->cs_ssl_ciphers = lgs_mng_create_chart(
(char *) p_file_info->chartname // type
, "ssl_cipher_suite" // id
, "Requests by SSL cipher suite" // title
, "requests" // units
, "ssl_cipher_suite" // family
, NULL // context
, RRDSET_TYPE_AREA_NAME // chart_type
, ++chart_prio // priority
, p_file_info->update_every // update_every
);
}
lgs_mng_do_custom_charts_init(p_file_info);
}
void web_log_chart_update(struct File_info *p_file_info){
chart_data_web_log_t *chart_data = p_file_info->chart_meta->chart_data_web_log;
Web_log_metrics_t *wlm = p_file_info->parser_metrics->web_log;
if(chart_data->last_update != p_file_info->parser_metrics->last_update){
time_t lag_in_sec = p_file_info->parser_metrics->last_update - chart_data->last_update - 1;
lgs_mng_do_num_of_logs_charts_update(p_file_info, lag_in_sec, chart_data);
/* Vhost - update */
if(p_file_info->parser_config->chart_config & CHART_VHOST){
for(time_t sec = p_file_info->parser_metrics->last_update - lag_in_sec;
sec < p_file_info->parser_metrics->last_update;
sec++){
lgs_mng_update_chart_begin(p_file_info->chartname, "vhost");
for(int idx = 0; idx < chart_data->vhost_size; idx++)
lgs_mng_update_chart_set(wlm->vhost_arr.vhosts[idx].name, chart_data->num_vhosts[idx]);
lgs_mng_update_chart_end(sec);
}
if(wlm->vhost_arr.size > chart_data->vhost_size){
if(wlm->vhost_arr.size >= chart_data->vhost_size_max){
chart_data->vhost_size_max = wlm->vhost_arr.size * VHOST_BUFFS_SCALE_FACTOR + 1;
chart_data->num_vhosts = reallocz( chart_data->num_vhosts,
chart_data->vhost_size_max * sizeof(collected_number));
}
for(int idx = chart_data->vhost_size; idx < wlm->vhost_arr.size; idx++){
chart_data->num_vhosts[idx] = 0;
lgs_mng_add_dim_post_init( &chart_data->cs_vhosts,
wlm->vhost_arr.vhosts[idx].name,
RRD_ALGORITHM_INCREMENTAL_NAME, 1, 1);
}
chart_data->vhost_size = wlm->vhost_arr.size;
}
lgs_mng_update_chart_begin(p_file_info->chartname, "vhost");
for(int idx = 0; idx < chart_data->vhost_size; idx++){
chart_data->num_vhosts[idx] += wlm->vhost_arr.vhosts[idx].count;
wlm->vhost_arr.vhosts[idx].count = 0;
lgs_mng_update_chart_set(wlm->vhost_arr.vhosts[idx].name, chart_data->num_vhosts[idx]);
}
lgs_mng_update_chart_end(p_file_info->parser_metrics->last_update);
}
/* Port - update */
if(p_file_info->parser_config->chart_config & CHART_PORT){
for(time_t sec = p_file_info->parser_metrics->last_update - lag_in_sec;
sec < p_file_info->parser_metrics->last_update;
sec++){
lgs_mng_update_chart_begin(p_file_info->chartname, "port");
for(int idx = 0; idx < chart_data->port_size; idx++)
lgs_mng_update_chart_set(wlm->port_arr.ports[idx].name, chart_data->num_ports[idx]);
lgs_mng_update_chart_end(sec);
}
if(wlm->port_arr.size > chart_data->port_size){
if(wlm->port_arr.size >= chart_data->port_size_max){
chart_data->port_size_max = wlm->port_arr.size * PORT_BUFFS_SCALE_FACTOR + 1;
chart_data->num_ports = reallocz( chart_data->num_ports,
chart_data->port_size_max * sizeof(collected_number));
}
for(int idx = chart_data->port_size; idx < wlm->port_arr.size; idx++){
chart_data->num_ports[idx] = 0;
lgs_mng_add_dim_post_init( &chart_data->cs_ports,
wlm->port_arr.ports[idx].name,
RRD_ALGORITHM_INCREMENTAL_NAME, 1, 1);
}
chart_data->port_size = wlm->port_arr.size;
}
lgs_mng_update_chart_begin(p_file_info->chartname, "port");
for(int idx = 0; idx < chart_data->port_size; idx++){
chart_data->num_ports[idx] += wlm->port_arr.ports[idx].count;
wlm->port_arr.ports[idx].count = 0;
lgs_mng_update_chart_set(wlm->port_arr.ports[idx].name, chart_data->num_ports[idx]);
}
lgs_mng_update_chart_end(p_file_info->parser_metrics->last_update);
}
/* IP Version - update */
if(p_file_info->parser_config->chart_config & CHART_IP_VERSION){
for(time_t sec = p_file_info->parser_metrics->last_update - lag_in_sec;
sec < p_file_info->parser_metrics->last_update;
sec++){
lgs_mng_update_chart_begin(p_file_info->chartname, "ip_version");
lgs_mng_update_chart_set("ipv4", chart_data->num_ip_ver_4);
lgs_mng_update_chart_set("ipv6", chart_data->num_ip_ver_6);
lgs_mng_update_chart_set("invalid", chart_data->num_ip_ver_invalid);
lgs_mng_update_chart_end(sec);
}
chart_data->num_ip_ver_4 += wlm->ip_ver.v4;
chart_data->num_ip_ver_6 += wlm->ip_ver.v6;
chart_data->num_ip_ver_invalid += wlm->ip_ver.invalid;
memset(&wlm->ip_ver, 0, sizeof(wlm->ip_ver));
lgs_mng_update_chart_begin(p_file_info->chartname, "ip_version");
lgs_mng_update_chart_set("ipv4", chart_data->num_ip_ver_4);
lgs_mng_update_chart_set("ipv6", chart_data->num_ip_ver_6);
lgs_mng_update_chart_set("invalid", chart_data->num_ip_ver_invalid);
lgs_mng_update_chart_end(p_file_info->parser_metrics->last_update);
}
/* Request client current poll - update */
if(p_file_info->parser_config->chart_config & CHART_REQ_CLIENT_CURRENT){
for(time_t sec = p_file_info->parser_metrics->last_update - lag_in_sec;
sec < p_file_info->parser_metrics->last_update;
sec++){
lgs_mng_update_chart_begin(p_file_info->chartname, "clients");
lgs_mng_update_chart_set("ipv4", chart_data->num_req_client_current_ipv4);
lgs_mng_update_chart_set("ipv6", chart_data->num_req_client_current_ipv6);
lgs_mng_update_chart_end(sec);
}
chart_data->num_req_client_current_ipv4 += wlm->req_clients_current_arr.ipv4_size;
wlm->req_clients_current_arr.ipv4_size = 0;
chart_data->num_req_client_current_ipv6 += wlm->req_clients_current_arr.ipv6_size;
wlm->req_clients_current_arr.ipv6_size = 0;
lgs_mng_update_chart_begin(p_file_info->chartname, "clients");
lgs_mng_update_chart_set("ipv4", chart_data->num_req_client_current_ipv4);
lgs_mng_update_chart_set("ipv6", chart_data->num_req_client_current_ipv6);
lgs_mng_update_chart_end(p_file_info->parser_metrics->last_update);
}
/* Request client all-time - update */
if(p_file_info->parser_config->chart_config & CHART_REQ_CLIENT_ALL_TIME){
for(time_t sec = p_file_info->parser_metrics->last_update - lag_in_sec;
sec < p_file_info->parser_metrics->last_update;
sec++){
lgs_mng_update_chart_begin(p_file_info->chartname, "clients_all");
lgs_mng_update_chart_set("ipv4", chart_data->num_req_client_all_time_ipv4);
lgs_mng_update_chart_set("ipv6", chart_data->num_req_client_all_time_ipv6);
lgs_mng_update_chart_end(sec);
}
chart_data->num_req_client_all_time_ipv4 = wlm->req_clients_alltime_arr.ipv4_size;
chart_data->num_req_client_all_time_ipv6 = wlm->req_clients_alltime_arr.ipv6_size;
lgs_mng_update_chart_begin(p_file_info->chartname, "clients_all");
lgs_mng_update_chart_set("ipv4", chart_data->num_req_client_all_time_ipv4);
lgs_mng_update_chart_set("ipv6", chart_data->num_req_client_all_time_ipv6);
lgs_mng_update_chart_end(p_file_info->parser_metrics->last_update);
}
/* Request methods - update */
if(p_file_info->parser_config->chart_config & CHART_REQ_METHODS){
for(time_t sec = p_file_info->parser_metrics->last_update - lag_in_sec;
sec < p_file_info->parser_metrics->last_update;
sec++){
lgs_mng_update_chart_begin(p_file_info->chartname, "http_methods");
for(int idx = 0; idx < REQ_METHOD_ARR_SIZE; idx++){
if(chart_data->num_req_method[idx])
lgs_mng_update_chart_set(req_method_str[idx], chart_data->num_req_method[idx]);
}
lgs_mng_update_chart_end(sec);
}
lgs_mng_update_chart_begin(p_file_info->chartname, "http_methods");
for(int idx = 0; idx < REQ_METHOD_ARR_SIZE; idx++){
chart_data->num_req_method[idx] += wlm->req_method[idx];
wlm->req_method[idx] = 0;
if(chart_data->num_req_method[idx])
lgs_mng_update_chart_set(req_method_str[idx], chart_data->num_req_method[idx]);
}
lgs_mng_update_chart_end(p_file_info->parser_metrics->last_update);
}
/* Request protocol - update */
if(p_file_info->parser_config->chart_config & CHART_REQ_PROTO){
for(time_t sec = p_file_info->parser_metrics->last_update - lag_in_sec;
sec < p_file_info->parser_metrics->last_update;
sec++){
lgs_mng_update_chart_begin(p_file_info->chartname, "http_versions");
lgs_mng_update_chart_set("1.0", chart_data->num_req_proto_http_1);
lgs_mng_update_chart_set("1.1", chart_data->num_req_proto_http_1_1);
lgs_mng_update_chart_set("2.0", chart_data->num_req_proto_http_2);
lgs_mng_update_chart_set("other", chart_data->num_req_proto_other);
lgs_mng_update_chart_end(sec);
}
chart_data->num_req_proto_http_1 += wlm->req_proto.http_1;
chart_data->num_req_proto_http_1_1 += wlm->req_proto.http_1_1;
chart_data->num_req_proto_http_2 += wlm->req_proto.http_2;
chart_data->num_req_proto_other += wlm->req_proto.other;
memset(&wlm->req_proto, 0, sizeof(wlm->req_proto));
lgs_mng_update_chart_begin(p_file_info->chartname, "http_versions");
lgs_mng_update_chart_set("1.0", chart_data->num_req_proto_http_1);
lgs_mng_update_chart_set("1.1", chart_data->num_req_proto_http_1_1);
lgs_mng_update_chart_set("2.0", chart_data->num_req_proto_http_2);
lgs_mng_update_chart_set("other", chart_data->num_req_proto_other);
lgs_mng_update_chart_end(p_file_info->parser_metrics->last_update);
}
/* Request bandwidth - update */
if(p_file_info->parser_config->chart_config & CHART_BANDWIDTH){
for(time_t sec = p_file_info->parser_metrics->last_update - lag_in_sec;
sec < p_file_info->parser_metrics->last_update;
sec++){
lgs_mng_update_chart_begin(p_file_info->chartname, "bandwidth");
lgs_mng_update_chart_set("received", chart_data->num_bandwidth_req_size);
lgs_mng_update_chart_set("sent", chart_data->num_bandwidth_resp_size);
lgs_mng_update_chart_end(sec);
}
chart_data->num_bandwidth_req_size += wlm->bandwidth.req_size;
chart_data->num_bandwidth_resp_size += wlm->bandwidth.resp_size;
memset(&wlm->bandwidth, 0, sizeof(wlm->bandwidth));
lgs_mng_update_chart_begin(p_file_info->chartname, "bandwidth");
lgs_mng_update_chart_set("received", chart_data->num_bandwidth_req_size);
lgs_mng_update_chart_set("sent", chart_data->num_bandwidth_resp_size);
lgs_mng_update_chart_end(p_file_info->parser_metrics->last_update);
}
/* Request proc time - update */
if(p_file_info->parser_config->chart_config & CHART_REQ_PROC_TIME){
for(time_t sec = p_file_info->parser_metrics->last_update - lag_in_sec;
sec < p_file_info->parser_metrics->last_update;
sec++){
lgs_mng_update_chart_begin(p_file_info->chartname, "timings");
lgs_mng_update_chart_set("min", chart_data->num_req_proc_time_min);
lgs_mng_update_chart_set("max", chart_data->num_req_proc_time_max);
lgs_mng_update_chart_set("avg", chart_data->num_req_proc_time_avg);
lgs_mng_update_chart_end(sec);
}
chart_data->num_req_proc_time_min = wlm->req_proc_time.min;
chart_data->num_req_proc_time_max = wlm->req_proc_time.max;
chart_data->num_req_proc_time_avg = wlm->req_proc_time.count ?
wlm->req_proc_time.sum / wlm->req_proc_time.count : 0;
memset(&wlm->req_proc_time, 0, sizeof(wlm->req_proc_time));
lgs_mng_update_chart_begin(p_file_info->chartname, "timings");
lgs_mng_update_chart_set("min", chart_data->num_req_proc_time_min);
lgs_mng_update_chart_set("max", chart_data->num_req_proc_time_max);
lgs_mng_update_chart_set("avg", chart_data->num_req_proc_time_avg);
lgs_mng_update_chart_end(p_file_info->parser_metrics->last_update);
}
/* Response code family - update */
if(p_file_info->parser_config->chart_config & CHART_RESP_CODE_FAMILY){
for(time_t sec = p_file_info->parser_metrics->last_update - lag_in_sec;
sec < p_file_info->parser_metrics->last_update;
sec++){
lgs_mng_update_chart_begin(p_file_info->chartname, "responses");
lgs_mng_update_chart_set("1xx", chart_data->num_resp_code_family_1xx);
lgs_mng_update_chart_set("2xx", chart_data->num_resp_code_family_2xx);
lgs_mng_update_chart_set("3xx", chart_data->num_resp_code_family_3xx);
lgs_mng_update_chart_set("4xx", chart_data->num_resp_code_family_4xx);
lgs_mng_update_chart_set("5xx", chart_data->num_resp_code_family_5xx);
lgs_mng_update_chart_set("other", chart_data->num_resp_code_family_other);
lgs_mng_update_chart_end(sec);
}
chart_data->num_resp_code_family_1xx += wlm->resp_code_family.resp_1xx;
chart_data->num_resp_code_family_2xx += wlm->resp_code_family.resp_2xx;
chart_data->num_resp_code_family_3xx += wlm->resp_code_family.resp_3xx;
chart_data->num_resp_code_family_4xx += wlm->resp_code_family.resp_4xx;
chart_data->num_resp_code_family_5xx += wlm->resp_code_family.resp_5xx;
chart_data->num_resp_code_family_other += wlm->resp_code_family.other;
memset(&wlm->resp_code_family, 0, sizeof(wlm->resp_code_family));
lgs_mng_update_chart_begin(p_file_info->chartname, "responses");
lgs_mng_update_chart_set("1xx", chart_data->num_resp_code_family_1xx);
lgs_mng_update_chart_set("2xx", chart_data->num_resp_code_family_2xx);
lgs_mng_update_chart_set("3xx", chart_data->num_resp_code_family_3xx);
lgs_mng_update_chart_set("4xx", chart_data->num_resp_code_family_4xx);
lgs_mng_update_chart_set("5xx", chart_data->num_resp_code_family_5xx);
lgs_mng_update_chart_set("other", chart_data->num_resp_code_family_other);
lgs_mng_update_chart_end(p_file_info->parser_metrics->last_update);
}
/* Response code - update */
if(p_file_info->parser_config->chart_config & CHART_RESP_CODE){
char dim_name[4];
for(time_t sec = p_file_info->parser_metrics->last_update - lag_in_sec;
sec < p_file_info->parser_metrics->last_update;
sec++){
lgs_mng_update_chart_begin(p_file_info->chartname, "detailed_responses");
for(int idx = 0; idx < RESP_CODE_ARR_SIZE - 1; idx++){
if(chart_data->num_resp_code[idx]){
snprintfz(dim_name, 4, "%d", idx + 100);
lgs_mng_update_chart_set(dim_name, chart_data->num_resp_code[idx]);
}
}
if(chart_data->num_resp_code[RESP_CODE_ARR_SIZE - 1])
lgs_mng_update_chart_set("other", chart_data->num_resp_code[RESP_CODE_ARR_SIZE - 1]);
lgs_mng_update_chart_end(sec);
}
lgs_mng_update_chart_begin(p_file_info->chartname, "detailed_responses");
for(int idx = 0; idx < RESP_CODE_ARR_SIZE - 1; idx++){
chart_data->num_resp_code[idx] += wlm->resp_code[idx];
wlm->resp_code[idx] = 0;
if(chart_data->num_resp_code[idx]){
snprintfz(dim_name, 4, "%d", idx + 100);
lgs_mng_update_chart_set(dim_name, chart_data->num_resp_code[idx]);
}
}
chart_data->num_resp_code[RESP_CODE_ARR_SIZE - 1] += wlm->resp_code[RESP_CODE_ARR_SIZE - 1];
wlm->resp_code[RESP_CODE_ARR_SIZE - 1] = 0;
if(chart_data->num_resp_code[RESP_CODE_ARR_SIZE - 1])
lgs_mng_update_chart_set("other", chart_data->num_resp_code[RESP_CODE_ARR_SIZE - 1]);
lgs_mng_update_chart_end(p_file_info->parser_metrics->last_update);
}
/* Response code type - update */
if(p_file_info->parser_config->chart_config & CHART_RESP_CODE_TYPE){
for(time_t sec = p_file_info->parser_metrics->last_update - lag_in_sec;
sec < p_file_info->parser_metrics->last_update;
sec++){
lgs_mng_update_chart_begin(p_file_info->chartname, "response_types");
lgs_mng_update_chart_set("success", chart_data->num_resp_code_type_success);
lgs_mng_update_chart_set("redirect", chart_data->num_resp_code_type_redirect);
lgs_mng_update_chart_set("bad", chart_data->num_resp_code_type_bad);
lgs_mng_update_chart_set("error", chart_data->num_resp_code_type_error);
lgs_mng_update_chart_set("other", chart_data->num_resp_code_type_other);
lgs_mng_update_chart_end(sec);
}
chart_data->num_resp_code_type_success += wlm->resp_code_type.resp_success;
chart_data->num_resp_code_type_redirect += wlm->resp_code_type.resp_redirect;
chart_data->num_resp_code_type_bad += wlm->resp_code_type.resp_bad;
chart_data->num_resp_code_type_error += wlm->resp_code_type.resp_error;
chart_data->num_resp_code_type_other += wlm->resp_code_type.other;
memset(&wlm->resp_code_type, 0, sizeof(wlm->resp_code_type));
lgs_mng_update_chart_begin(p_file_info->chartname, "response_types");
lgs_mng_update_chart_set("success", chart_data->num_resp_code_type_success);
lgs_mng_update_chart_set("redirect", chart_data->num_resp_code_type_redirect);
lgs_mng_update_chart_set("bad", chart_data->num_resp_code_type_bad);
lgs_mng_update_chart_set("error", chart_data->num_resp_code_type_error);
lgs_mng_update_chart_set("other", chart_data->num_resp_code_type_other);
lgs_mng_update_chart_end(p_file_info->parser_metrics->last_update);
}
/* SSL protocol - update */
if(p_file_info->parser_config->chart_config & CHART_SSL_PROTO){
for(time_t sec = p_file_info->parser_metrics->last_update - lag_in_sec;
sec < p_file_info->parser_metrics->last_update;
sec++){
lgs_mng_update_chart_begin(p_file_info->chartname, "ssl_protocol");
lgs_mng_update_chart_set("TLSV1", chart_data->num_ssl_proto_tlsv1);
lgs_mng_update_chart_set("TLSV1.1", chart_data->num_ssl_proto_tlsv1_1);
lgs_mng_update_chart_set("TLSV1.2", chart_data->num_ssl_proto_tlsv1_2);
lgs_mng_update_chart_set("TLSV1.3", chart_data->num_ssl_proto_tlsv1_3);
lgs_mng_update_chart_set("SSLV2", chart_data->num_ssl_proto_sslv2);
lgs_mng_update_chart_set("SSLV3", chart_data->num_ssl_proto_sslv3);
lgs_mng_update_chart_set("other", chart_data->num_ssl_proto_other);
lgs_mng_update_chart_end(sec);
}
chart_data->num_ssl_proto_tlsv1 += wlm->ssl_proto.tlsv1;
chart_data->num_ssl_proto_tlsv1_1 += wlm->ssl_proto.tlsv1_1;
chart_data->num_ssl_proto_tlsv1_2 += wlm->ssl_proto.tlsv1_2;
chart_data->num_ssl_proto_tlsv1_3 += wlm->ssl_proto.tlsv1_3;
chart_data->num_ssl_proto_sslv2 += wlm->ssl_proto.sslv2;
chart_data->num_ssl_proto_sslv3 += wlm->ssl_proto.sslv3;
chart_data->num_ssl_proto_other += wlm->ssl_proto.other;
memset(&wlm->ssl_proto, 0, sizeof(wlm->ssl_proto));
lgs_mng_update_chart_begin(p_file_info->chartname, "ssl_protocol");
lgs_mng_update_chart_set("TLSV1", chart_data->num_ssl_proto_tlsv1);
lgs_mng_update_chart_set("TLSV1.1", chart_data->num_ssl_proto_tlsv1_1);
lgs_mng_update_chart_set("TLSV1.2", chart_data->num_ssl_proto_tlsv1_2);
lgs_mng_update_chart_set("TLSV1.3", chart_data->num_ssl_proto_tlsv1_3);
lgs_mng_update_chart_set("SSLV2", chart_data->num_ssl_proto_sslv2);
lgs_mng_update_chart_set("SSLV3", chart_data->num_ssl_proto_sslv3);
lgs_mng_update_chart_set("other", chart_data->num_ssl_proto_other);
lgs_mng_update_chart_end(p_file_info->parser_metrics->last_update);
}
/* SSL cipher suite - update */
if(p_file_info->parser_config->chart_config & CHART_SSL_CIPHER){
for(time_t sec = p_file_info->parser_metrics->last_update - lag_in_sec;
sec < p_file_info->parser_metrics->last_update;
sec++){
lgs_mng_update_chart_begin(p_file_info->chartname, "ssl_cipher_suite");
for(int idx = 0; idx < chart_data->ssl_cipher_size; idx++){
lgs_mng_update_chart_set( wlm->ssl_cipher_arr.ssl_ciphers[idx].name,
chart_data->num_ssl_ciphers[idx]);
}
lgs_mng_update_chart_end(sec);
}
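/* If the parser has reported SSL cipher suites not seen before, grow the
 * counters array and register the new chart dimensions on the fly. */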
if(wlm->ssl_cipher_arr.size > chart_data->ssl_cipher_size){
chart_data->num_ssl_ciphers = reallocz( chart_data->num_ssl_ciphers,
wlm->ssl_cipher_arr.size * sizeof(collected_number));
for(int idx = chart_data->ssl_cipher_size; idx < wlm->ssl_cipher_arr.size; idx++){
chart_data->num_ssl_ciphers[idx] = 0;
lgs_mng_add_dim_post_init( &chart_data->cs_ssl_ciphers,
wlm->ssl_cipher_arr.ssl_ciphers[idx].name,
RRD_ALGORITHM_INCREMENTAL_NAME, 1, 1);
}
chart_data->ssl_cipher_size = wlm->ssl_cipher_arr.size;
}
lgs_mng_update_chart_begin(p_file_info->chartname, "ssl_cipher_suite");
for(int idx = 0; idx < chart_data->ssl_cipher_size; idx++){
chart_data->num_ssl_ciphers[idx] += wlm->ssl_cipher_arr.ssl_ciphers[idx].count;
wlm->ssl_cipher_arr.ssl_ciphers[idx].count = 0;
lgs_mng_update_chart_set( wlm->ssl_cipher_arr.ssl_ciphers[idx].name,
chart_data->num_ssl_ciphers[idx]);
}
lgs_mng_update_chart_end(p_file_info->parser_metrics->last_update);
}
lgs_mng_do_custom_charts_update(p_file_info, lag_in_sec);
chart_data->last_update = p_file_info->parser_metrics->last_update;
}
}

View File

@@ -0,0 +1,88 @@
// SPDX-License-Identifier: GPL-3.0-or-later
/** @file rrd_api_web_log.h
* @brief Includes the structure and function definitions for
* the web log charts.
*/
#ifndef RRD_API_WEB_LOG_H_
#define RRD_API_WEB_LOG_H_
#include "daemon/common.h"
struct File_info;
typedef struct Chart_data_web_log chart_data_web_log_t;
#include "../file_info.h"
#include "../circular_buffer.h"
#include "rrd_api.h"
struct Chart_data_web_log {
time_t last_update;
/* Number of collected log records */
collected_number num_lines;
/* Vhosts */
struct Chart_str cs_vhosts;
collected_number *num_vhosts;
int vhost_size, vhost_size_max; /**< Actual size and maximum allocated size of dim_vhosts, num_vhosts arrays **/
/* Ports */
struct Chart_str cs_ports;
collected_number *num_ports;
int port_size, port_size_max; /**< Actual size and maximum allocated size of dim_ports, num_ports and ports arrays **/
/* IP Version */
collected_number num_ip_ver_4, num_ip_ver_6, num_ip_ver_invalid;
/* Request client current poll */
collected_number num_req_client_current_ipv4, num_req_client_current_ipv6;
/* Request client all-time */
collected_number num_req_client_all_time_ipv4, num_req_client_all_time_ipv6;
/* Request methods */
collected_number num_req_method[REQ_METHOD_ARR_SIZE];
/* Request protocol */
collected_number num_req_proto_http_1, num_req_proto_http_1_1,
num_req_proto_http_2, num_req_proto_other;
/* Request bandwidth */
collected_number num_bandwidth_req_size, num_bandwidth_resp_size;
/* Request processing time */
collected_number num_req_proc_time_min, num_req_proc_time_max, num_req_proc_time_avg;
/* Response code family */
collected_number num_resp_code_family_1xx, num_resp_code_family_2xx,
num_resp_code_family_3xx, num_resp_code_family_4xx,
num_resp_code_family_5xx, num_resp_code_family_other;
/* Response code */
collected_number num_resp_code[RESP_CODE_ARR_SIZE];
/* Response code type */
collected_number num_resp_code_type_success, num_resp_code_type_redirect,
num_resp_code_type_bad, num_resp_code_type_error, num_resp_code_type_other;
/* SSL protocol */
collected_number num_ssl_proto_tlsv1, num_ssl_proto_tlsv1_1,
num_ssl_proto_tlsv1_2, num_ssl_proto_tlsv1_3,
num_ssl_proto_sslv2, num_ssl_proto_sslv3, num_ssl_proto_other;
/* SSL cipher suite */
struct Chart_str cs_ssl_ciphers;
collected_number *num_ssl_ciphers;
int ssl_cipher_size;
};
void web_log_chart_init(struct File_info *p_file_info);
void web_log_chart_update(struct File_info *p_file_info);
#endif // RRD_API_WEB_LOG_H_

View File

@@ -0,0 +1,31 @@
[global]
update every = 1
update timeout = 10
use log timestamp = auto
circular buffer max size MiB = 64
circular buffer drop logs if full = no
compression acceleration = 1
collected logs total chart enable = no
collected logs rate chart enable = yes
[db]
db mode = none
# db dir = change to use non-default path
circular buffer flush to db = 6
disk space limit MiB = 500
[forward input]
enabled = no
unix path =
unix perm = 0644
listen = 0.0.0.0
port = 24224
[fluent bit]
flush = 0.1
http listen = 0.0.0.0
http port = 2020
http server = false
# log file = change to use non-default path
log level = info
coro stack size = 24576

View File

@@ -0,0 +1,434 @@
# ------------------------------------------------------------------------------
# Netdata Logs Management default configuration
# See full explanation on https://github.com/netdata/netdata/blob/master/logsmanagement/README.md
#
# To add a new log source, a new section must be added in this
# file with at least the following settings:
#
# [LOG SOURCE NAME]
# enabled = yes
# log type = flb_tail
#
# For a list of all available log types, see:
# https://github.com/netdata/netdata/blob/master/logsmanagement/README.md#types-of-available-collectors
#
# ------------------------------------------------------------------------------
[kmsg Logs]
## Example: Log collector that will collect new kernel ring buffer logs
## Required settings
enabled = yes
log type = flb_kmsg
## Optional settings, common to all log sources.
## Uncomment to override global equivalents in netdata.conf.
# update every = 1
# update timeout = 10
use log timestamp = no
# circular buffer max size MiB = 64
# circular buffer drop logs if full = no
# compression acceleration = 1
# db mode = none
# circular buffer flush to db = 6
# disk space limit MiB = 500
## Charts to enable
# collected logs total chart enable = no
# collected logs rate chart enable = yes
severity chart = yes
subsystem chart = yes
device chart = yes
## Example of capturing specific kmsg events:
# custom 1 chart = USB connect/disconnect
# custom 1 regex name = connect
# custom 1 regex = .*\bNew USB device found\b.*
# custom 2 chart = USB connect/disconnect
# custom 2 regex name = disconnect
# custom 2 regex = .*\bUSB disconnect\b.*
[Systemd Logs]
## Example: Log collector that will query journald to collect system logs
## Required settings
enabled = yes
log type = flb_systemd
## Optional settings, common to all log sources.
## Uncomment to override global equivalents in netdata.conf.
# update every = 1
# update timeout = 10
# use log timestamp = auto
# circular buffer max size MiB = 64
# circular buffer drop logs if full = no
# compression acceleration = 1
# db mode = none
# circular buffer flush to db = 6
# disk space limit MiB = 500
## Use default path to Systemd Journal
log path = auto
## Charts to enable
# collected logs total chart enable = no
# collected logs rate chart enable = yes
priority value chart = yes
severity chart = yes
facility chart = yes
[Docker Events Logs]
## Example: Log collector that will monitor the Docker daemon socket and
## collect Docker event logs in a default format similar to executing
## the `sudo docker events` command.
## Required settings
enabled = yes
log type = flb_docker_events
## Optional settings, common to all log sources.
## Uncomment to override global equivalents in netdata.conf.
# update every = 1
# update timeout = 10
# use log timestamp = auto
# circular buffer max size MiB = 64
# circular buffer drop logs if full = no
# compression acceleration = 1
# db mode = none
# circular buffer flush to db = 6
# disk space limit MiB = 500
## Use default Docker socket UNIX path: /var/run/docker.sock
log path = auto
## Charts to enable
# collected logs total chart enable = no
# collected logs rate chart enable = yes
event type chart = yes
event action chart = yes
## Example of how to capture create / attach / die events for a named container:
# custom 1 chart = serverA events
# custom 1 regex name = container create
# custom 1 regex = .*\bcontainer create\b.*\bname=serverA\b.*
# custom 2 chart = serverA events
# custom 2 regex name = container attach
# custom 2 regex = .*\bcontainer attach\b.*\bname=serverA\b.*
# custom 3 chart = serverA events
# custom 3 regex name = container die
# custom 3 regex = .*\bcontainer die\b.*\bname=serverA\b.*
## Stream to https://cloud.openobserve.ai/
# output 1 name = http
# output 1 URI = YOUR_API_URI
# output 1 Host = api.openobserve.ai
# output 1 Port = 443
# output 1 tls = On
# output 1 Format = json
# output 1 Json_date_key = _timestamp
# output 1 Json_date_format = iso8601
# output 1 HTTP_User = test@netdata.cloud
# output 1 HTTP_Passwd = YOUR_OPENOBSERVE_PASSWORD
# output 1 compress = gzip
## Real-time export to /tmp/docker_event_logs.csv
# output 2 name = file
# output 2 Path = /tmp
# output 2 File = docker_event_logs.csv
[Apache access.log]
## Example: Log collector that will tail Apache's access.log file and
## parse each new record to extract common web server metrics.
## Required settings
enabled = yes
log type = flb_web_log
## Optional settings, common to all log sources.
## Uncomment to override global equivalents in netdata.conf.
# update every = 1
# update timeout = 10
# use log timestamp = auto
# circular buffer max size MiB = 64
# circular buffer drop logs if full = no
# compression acceleration = 1
# db mode = none
# circular buffer flush to db = 6
# disk space limit MiB = 500
## This section supports auto-detection of log file path if section name
## is left unchanged, otherwise it can be set manually, e.g.:
## log path = /var/log/apache2/access.log
## See README for more information on 'log path = auto' option
log path = auto
## Use inotify instead of file stat watcher. Set to 'no' to reduce CPU usage.
use inotify = yes
## Auto-detect web log format, otherwise it can be set manually, e.g.:
## log format = %h %l %u %t "%r" %>s %b "%{Referer}i" "%{User-agent}i"
## see https://httpd.apache.org/docs/2.4/logs.html#accesslog
log format = auto
## Detect errors such as illegal port numbers or response codes.
verify parsed logs = yes
## Charts to enable
# collected logs total chart enable = no
# collected logs rate chart enable = yes
vhosts chart = yes
ports chart = yes
IP versions chart = yes
unique client IPs - current poll chart = yes
unique client IPs - all-time chart = no
http request methods chart = yes
http protocol versions chart = yes
bandwidth chart = yes
timings chart = yes
response code families chart = yes
response codes chart = yes
response code types chart = yes
SSL protocols chart = yes
SSL cipher suites chart = yes
[Nginx access.log]
## Example: Log collector that will tail Nginx's access.log file and
## parse each new record to extract common web server metrics.
## Required settings
enabled = yes
log type = flb_web_log
## Optional settings, common to all log sources.
## Uncomment to override global equivalents in netdata.conf.
# update every = 1
# update timeout = 10
# use log timestamp = auto
# circular buffer max size MiB = 64
# circular buffer drop logs if full = no
# compression acceleration = 1
# db mode = none
# circular buffer flush to db = 6
# disk space limit MiB = 500
## This section supports auto-detection of log file path if section name
## is left unchanged, otherwise it can be set manually, e.g.:
## log path = /var/log/nginx/access.log
## See README for more information on 'log path = auto' option
log path = auto
## Use inotify instead of file stat watcher. Set to 'no' to reduce CPU usage.
use inotify = yes
## see https://docs.nginx.com/nginx/admin-guide/monitoring/logging/#setting-up-the-access-log
log format = $remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent $request_length $request_time "$http_referer" "$http_user_agent"
## Detect errors such as illegal port numbers or response codes.
verify parsed logs = yes
## Charts to enable
# collected logs total chart enable = no
# collected logs rate chart enable = yes
vhosts chart = yes
ports chart = yes
IP versions chart = yes
unique client IPs - current poll chart = yes
unique client IPs - all-time chart = no
http request methods chart = yes
http protocol versions chart = yes
bandwidth chart = yes
timings chart = yes
response code families chart = yes
response codes chart = yes
response code types chart = yes
SSL protocols chart = yes
SSL cipher suites chart = yes
[Netdata daemon.log]
## Example: Log collector that will tail Netdata's daemon.log and
## generate log level charts based on custom regular expressions.
## Required settings
enabled = yes
log type = flb_tail
## Optional settings, common to all log sources.
## Uncomment to override global equivalents in netdata.conf.
# update every = 1
# update timeout = 10
# use log timestamp = auto
# circular buffer max size MiB = 64
# circular buffer drop logs if full = no
# compression acceleration = 1
# db mode = none
# circular buffer flush to db = 6
# disk space limit MiB = 500
## This section supports auto-detection of log file path if section name
## is left unchanged, otherwise it can be set manually, e.g.:
## log path = /tmp/netdata/var/log/netdata/daemon.log
## See README for more information on 'log path = auto' option
log path = auto
## Use inotify instead of file stat watcher. Set to 'no' to reduce CPU usage.
use inotify = yes
## Charts to enable
# collected logs total chart enable = no
# collected logs rate chart enable = yes
## Examples of extracting custom metrics from Netdata's daemon.log:
## log level chart
custom 1 chart = log level
custom 1 regex name = emergency
custom 1 regex = level=emergency
custom 1 ignore case = no
custom 2 chart = log level
custom 2 regex name = alert
custom 2 regex = level=alert
custom 2 ignore case = no
custom 3 chart = log level
custom 3 regex name = critical
custom 3 regex = level=critical
custom 3 ignore case = no
custom 4 chart = log level
custom 4 regex name = error
custom 4 regex = level=error
custom 4 ignore case = no
custom 5 chart = log level
custom 5 regex name = warning
custom 5 regex = level=warning
custom 5 ignore case = no
custom 6 chart = log level
custom 6 regex name = notice
custom 6 regex = level=notice
custom 6 ignore case = no
custom 7 chart = log level
custom 7 regex name = info
custom 7 regex = level=info
custom 7 ignore case = no
custom 8 chart = log level
custom 8 regex name = debug
custom 8 regex = level=debug
custom 8 ignore case = no
[Netdata fluentbit.log]
## Example: Log collector that will tail the logs of Netdata's
## embedded Fluent Bit.
## Required settings
enabled = no
log type = flb_tail
## Optional settings, common to all log sources.
## Uncomment to override global equivalents in netdata.conf.
# update every = 1
# update timeout = 10
# use log timestamp = auto
# circular buffer max size MiB = 64
# circular buffer drop logs if full = no
# compression acceleration = 1
# db mode = none
# circular buffer flush to db = 6
# disk space limit MiB = 500
## This section supports auto-detection of log file path if section name
## is left unchanged, otherwise it can be set manually, e.g.:
## log path = /tmp/netdata/var/log/netdata/fluentbit.log
## See README for more information on 'log path = auto' option
log path = auto
## Use inotify instead of file stat watcher. Set to 'no' to reduce CPU usage.
use inotify = yes
## Charts to enable
# collected logs total chart enable = no
# collected logs rate chart enable = yes
## Examples of extracting custom metrics from fluentbit.log:
## log level chart
custom 1 chart = log level
custom 1 regex name = error
custom 1 regex = \[error\]
custom 1 ignore case = no
custom 2 chart = log level
custom 2 regex name = warning
custom 2 regex = \[warning\]
custom 2 ignore case = no
custom 3 chart = log level
custom 3 regex name = info
custom 3 regex = \[ info\]
custom 3 ignore case = no
custom 4 chart = log level
custom 4 regex name = debug
custom 4 regex = \[debug\]
custom 4 ignore case = no
custom 5 chart = log level
custom 5 regex name = trace
custom 5 regex = \[trace\]
custom 5 ignore case = no
[auth.log tail]
## Example: Log collector that will tail the auth.log file and count
## occurrences of certain `sudo` commands, using POSIX regular expressions.
## Required settings
enabled = no
log type = flb_tail
## Optional settings, common to all log sources.
## Uncomment to override global equivalents in netdata.conf.
# update every = 1
# update timeout = 10
# use log timestamp = auto
# circular buffer max size MiB = 64
# circular buffer drop logs if full = no
# compression acceleration = 1
# db mode = none
# circular buffer flush to db = 6
# disk space limit MiB = 500
## This section supports auto-detection of log file path if section name
## is left unchanged, otherwise it can be set manually, e.g.:
## log path = /var/log/auth.log
## See README for more information on 'log path = auto' option
log path = auto
## Use inotify instead of file stat watcher. Set to 'no' to reduce CPU usage.
use inotify = yes
## Charts to enable
# collected logs total chart enable = no
# collected logs rate chart enable = yes
## Examples of extracting custom metrics from auth.log:
# custom 1 chart = failed su
# # custom 1 regex name =
# custom 1 regex = .*\bsu\b.*\bFAILED SU\b.*
# custom 1 ignore case = no
# custom 2 chart = sudo commands
# custom 2 regex name = sudo su
# custom 2 regex = .*\bsudo\b.*\bCOMMAND=/usr/bin/su\b.*
# custom 2 ignore case = yes
# custom 3 chart = sudo commands
# custom 3 regex name = sudo docker run
# custom 3 regex = .*\bsudo\b.*\bCOMMAND=/usr/bin/docker run\b.*
# custom 3 ignore case = yes

View File

@@ -0,0 +1,90 @@
[Forward systemd]
## Example: Log collector that will collect streamed Systemd logs
## only for parsing, according to the global "forward input" configuration
## found in logsmanagement.d.conf.
## Required settings
enabled = no
log type = flb_systemd
## Optional settings, common to all log sources.
## Uncomment to override global equivalents in netdata.conf.
# update every = 1
# update timeout = 10
# use log timestamp = auto
# circular buffer max size MiB = 64
# circular buffer drop logs if full = no
# compression acceleration = 1
# db mode = none
# circular buffer flush to db = 6
# disk space limit MiB = 500
## Streaming input settings.
log source = forward
stream guid = 6ce266f5-2704-444d-a301-2423b9d30735
## Charts to enable
# collected logs total chart enable = no
# collected logs rate chart enable = yes
priority value chart = yes
severity chart = yes
facility chart = yes
[Forward Docker Events]
## Example: Log collector that will collect streamed Docker Events logs
## only for parsing, according to the global "forward input" configuration
## found in logsmanagement.d.conf.
## Required settings
enabled = no
log type = flb_docker_events
## Optional settings, common to all log sources.
## Uncomment to override global equivalents in netdata.conf.
# update every = 1
# update timeout = 10
# use log timestamp = auto
# circular buffer max size MiB = 64
# circular buffer drop logs if full = no
# compression acceleration = 1
# db mode = none
# circular buffer flush to db = 6
# disk space limit MiB = 500
## Streaming input settings.
log source = forward
stream guid = 6ce266f5-2704-444d-a301-2423b9d30736
## Charts to enable
# collected logs total chart enable = no
# collected logs rate chart enable = yes
event type chart = yes
[Forward collection]
## Example: Log collector that will collect streamed logs of any type
## according to the global "forward input" configuration found in
## logsmanagement.d.conf and will also save them in the logs database.
## Required settings
enabled = no
log type = flb_tail
## Optional settings, common to all log sources.
## Uncomment to override global equivalents in netdata.conf.
# update every = 1
# update timeout = 10
# use log timestamp = auto
# circular buffer max size MiB = 64
# circular buffer drop logs if full = no
# compression acceleration = 1
db mode = full
# circular buffer flush to db = 6
# disk space limit MiB = 500
## Streaming input settings.
log source = forward
stream guid = 6ce266f5-2704-444d-a301-2423b9d30737
## Charts to enable
# collected logs total chart enable = no
# collected logs rate chart enable = yes

View File

@@ -0,0 +1,28 @@
[MQTT messages]
## Example: Log collector that will create a server to listen for MQTT logs over a TCP connection.
## Required settings
enabled = no
log type = flb_mqtt
## Optional settings, common to all log sources.
## Uncomment to override global equivalents in netdata.conf.
# update every = 1
# update timeout = 10
# use log timestamp = auto
# circular buffer max size MiB = 64
# circular buffer drop logs if full = no
# compression acceleration = 1
# db mode = none
# circular buffer flush to db = 6
# disk space limit MiB = 500
## Set up configuration specific to flb_mqtt
## see also https://docs.fluentbit.io/manual/pipeline/inputs/mqtt
# listen = 0.0.0.0
# port = 1883
## Charts to enable
# collected logs total chart enable = no
# collected logs rate chart enable = yes
topic chart = yes

View File

@@ -0,0 +1,35 @@
[Serial logs]
## Example: Log collector that will collect logs from a serial interface.
## Required settings
enabled = no
log type = flb_serial
## Optional settings, common to all log sources.
## Uncomment to override global equivalents in netdata.conf.
# update every = 1
# update timeout = 10
# use log timestamp = auto
# circular buffer max size MiB = 64
# circular buffer drop logs if full = no
# compression acceleration = 1
# db mode = none
# circular buffer flush to db = 6
# disk space limit MiB = 500
## Set up configuration specific to flb_serial
log path = /dev/pts/4
bitrate = 115200
min bytes = 1
# separator = X
# format = json
## Charts to enable
# collected logs total chart enable = no
# collected logs rate chart enable = yes
## Example of extracting custom metrics from serial interface messages:
# custom 1 chart = UART0
# # custom 1 regex name = test
# custom 1 regex = .*\bUART0\b.*
# # custom 1 ignore case = no

View File

@@ -0,0 +1,142 @@
[syslog tail]
## Example: Log collector that will tail the syslog file and count
## occurrences of certain keywords, using POSIX regular expressions.
## Required settings
enabled = no
log type = flb_tail
## Optional settings, common to all log sources.
## Uncomment to override global equivalents in netdata.conf.
# update every = 1
# update timeout = 10
# use log timestamp = auto
# circular buffer max size MiB = 64
# circular buffer drop logs if full = no
# compression acceleration = 1
# db mode = none
# circular buffer flush to db = 6
# disk space limit MiB = 500
## This section supports auto-detection of log file path if section name
## is left unchanged, otherwise it can be set manually, e.g.:
## log path = /var/log/syslog
## log path = /var/log/messages
## See README for more information on 'log path = auto' option
log path = auto
## Use inotify instead of file stat watcher. Set to 'no' to reduce CPU usage.
use inotify = yes
## Charts to enable
# collected logs total chart enable = no
# collected logs rate chart enable = yes
## Examples of extracting custom metrics from syslog:
# custom 1 chart = identifier
# custom 1 regex name = kernel
# custom 1 regex = .*\bkernel\b.*
# custom 1 ignore case = no
# custom 2 chart = identifier
# custom 2 regex name = systemd
# custom 2 regex = .*\bsystemd\b.*
# custom 2 ignore case = no
# custom 3 chart = identifier
# custom 3 regex name = CRON
# custom 3 regex = .*\bCRON\b.*
# custom 3 ignore case = no
# custom 4 chart = identifier
# custom 4 regex name = netdata
# custom 4 regex = .*\bnetdata\b.*
# custom 4 ignore case = no
[syslog Unix socket]
## Example: Log collector that will listen for RFC-3164 syslog on a UNIX
## socket that will be created at /tmp/netdata-syslog.sock.
## Required settings
enabled = no
log type = flb_syslog
## Optional settings, common to all log sources.
## Uncomment to override global equivalents in netdata.conf.
# update every = 1
# update timeout = 10
# use log timestamp = auto
# circular buffer max size MiB = 64
# circular buffer drop logs if full = no
# compression acceleration = 1
# db mode = none
# circular buffer flush to db = 6
# disk space limit MiB = 500
## Netdata will create this socket if mode == unix_tcp or mode == unix_udp,
## please ensure the right permissions exist for this path
log path = /tmp/netdata-syslog.sock
## Ruby Regular Expression to define expected syslog format
## Please make sure <PRIVAL>, <SYSLOG_TIMESTAMP>, <HOSTNAME>, <SYSLOG_IDENTIFIER>, <PID> and <MESSAGE> are defined
## see also https://docs.fluentbit.io/manual/pipeline/parsers/regular-expression
log format = /^\<(?<PRIVAL>[0-9]+)\>(?<SYSLOG_TIMESTAMP>[^ ]* {1,2}[^ ]* [^ ]* )(?<HOSTNAME>[^ ]*) (?<SYSLOG_IDENTIFIER>[a-zA-Z0-9_\/\.\-]*)(?:\[(?<PID>[0-9]+)\])?(?:[^\:]*\:)? *(?<MESSAGE>.*)$/
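## For reference, a (hypothetical) record of the following shape should match
## the expression above:
## <13>Jun 30 21:05:09 myhost myapp[1234]: connection established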
## Set up configuration specific to flb_syslog
## see also https://docs.fluentbit.io/manual/pipeline/inputs/syslog#configuration-parameters
## Modes supported are: unix_tcp, unix_udp, tcp, udp
mode = unix_udp
# listen = 0.0.0.0
# port = 5140
unix_perm = 0666
## Charts to enable
# collected logs total chart enable = no
# collected logs rate chart enable = yes
priority value chart = yes
severity chart = yes
facility chart = yes
[syslog TCP socket]
## Example: Log collector that will listen for RFC-3164 syslog,
## incoming via TCP on localhost IP and port 5140.
## Required settings
enabled = no
log type = flb_syslog
## Optional settings, common to all log sources.
## Uncomment to override global equivalents in netdata.conf.
# update every = 1
# update timeout = 10
# use log timestamp = auto
# circular buffer max size MiB = 64
# circular buffer drop logs if full = no
# compression acceleration = 1
# db mode = none
# circular buffer flush to db = 6
# disk space limit MiB = 500
## Netdata will create this socket if mode == unix_tcp or mode == unix_udp,
## please ensure the right permissions exist for this path
# log path = /tmp/netdata-syslog.sock
## Ruby Regular Expression to define expected syslog format
## Please make sure <PRIVAL>, <SYSLOG_TIMESTAMP>, <HOSTNAME>, <SYSLOG_IDENTIFIER>, <PID> and <MESSAGE> are defined
## see also https://docs.fluentbit.io/manual/pipeline/parsers/regular-expression
log format = /^\<(?<PRIVAL>[0-9]+)\>(?<SYSLOG_TIMESTAMP>[^ ]* {1,2}[^ ]* [^ ]* )(?<HOSTNAME>[^ ]*) (?<SYSLOG_IDENTIFIER>[a-zA-Z0-9_\/\.\-]*)(?:\[(?<PID>[0-9]+)\])?(?:[^\:]*\:)? *(?<MESSAGE>.*)$/
## Set up configuration specific to flb_syslog
## see also https://docs.fluentbit.io/manual/pipeline/inputs/syslog#configuration-parameters
## Modes supported are: unix_tcp, unix_udp, tcp, udp
mode = tcp
listen = 0.0.0.0
port = 5140
# unix_perm = 0666
## Charts to enable
# collected logs total chart enable = no
# collected logs rate chart enable = yes
priority value chart = yes
severity chart = yes
facility chart = yes

View File

@@ -0,0 +1,5 @@
/tmp/netdata_log_management_stress_test_data/0.log /tmp/netdata_log_management_stress_test_data/1.log {
rotate 1
nocompress
create
}

View File

@@ -0,0 +1,186 @@
<!DOCTYPE html>
<html>
<head>
<style>
#wrapper {
width: 600px;
border: 1px solid black;
overflow: hidden;
}
#form {
width: 400px;
padding: 5px;
float:left;
}
#results {
padding: 5px;
overflow: hidden;
}
table {
table-layout: fixed;
width: 80%;
border-collapse: collapse;
border: 3px solid rgb(0, 0, 0);
}
thead th:nth-child(1) {
width: 20%;
}
thead th:nth-child(2) {
width: 80%;
}
th,
td {
padding: 5px;
}
</style>
</head>
<body>
<h2>Logs management queries</h2>
<hr>
<form id="form" name="query_form" action="javascript:submit_query()">
<h3>Query parameters</h3>
<hr>
<div id="sources"></div>
<h4>Set query parameters:</h4>
<label for="req_from">Requested from (epoch in ms):</label>
<input type="text" id="req_from" name="req_from"><br><br>
<label for="req_to">Requested to (epoch in ms):</label>
<input type="text" id="req_to" name="req_to" ><br><br>
<label for="req_quota">Quota in bytes:</label>
<input type="text" id="req_quota" name="req_quota" ><br><br>
<label for="keyword">Keyword / regex:</label>
<input type="text" id="keyword" name="keyword" ><br><br>
<label for="keyword_is_regex">Keyword is a regular expression:</label>
<input type="checkbox" id="keyword_is_regex" name="keyword_is_regex" ><br><br>
<label for="keyword_case_sensitive">Case-sensitive keyword / regex search:</label>
<input type="checkbox" id="keyword_case_sensitive" name="keyword_case_sensitive" ><br><br>
<label for="use_functions">Use functions instead of GET API:</label>
<input type="checkbox" id="use_functions" name="use_functions" ><br><br>
<input type="submit" value="Submit">
</form>
<div id="results"></div>
<script>
document.getElementById('req_from').value = '1682000000000';
document.getElementById('req_to').value = '1782000000000';
document.getElementById('req_quota').value = '10485760';
document.getElementById('keyword_is_regex').checked = true;
document.getElementById('keyword_case_sensitive').checked = false;
function populate_sources_checkboxes() {
const xmlhttp = new XMLHttpRequest();
xmlhttp.onload = function() {
results_obj = JSON.parse(this.responseText);
let ele = document.getElementById('sources');
ele.innerHTML += '<h4>Select log sources:</h4>';
for (const [key, value] of Object.entries(results_obj["log sources"])) {
ele.innerHTML += '<label>' + key + '</label>'
if(value["DB dir"]){
ele.innerHTML += '<input type="checkbox" name="' + key +'" class="sourceCheckBox"><br><br>';
} else {
ele.innerHTML += '<input type="checkbox" name="' + key +'" class="sourceCheckBox" disabled="disabled"> <i>(non-queryable)</i><br><br>';
}
}
}
xmlhttp.open("GET", "http://" + location.host + "/api/v1/logsmanagement_sources", true);
xmlhttp.send(null);
}
window.onload = populate_sources_checkboxes;
function submit_query() {
const xmlhttp = new XMLHttpRequest();
xmlhttp.onload = function() {
results_obj = JSON.parse(this.responseText);
logs_management_meta = document.getElementById('use_functions').checked ? results_obj.logs_management_meta : results_obj;
text = "<h3>Results</h3><hr>" +
"<b>API version:</b> " + logs_management_meta.api_version + "<br>" +
"<b>Requested from (epoch in ms):</b> " + logs_management_meta.requested_from + "<br>" +
"<b>Requested from:</b> " + new Date(logs_management_meta.requested_from) + "<br>" +
"<b>Requested to (epoch in ms):</b> " + logs_management_meta.requested_to + "<br>" +
"<b>Requested to:</b> " + new Date(logs_management_meta.requested_to) + "<br>" +
"<b>Actual from (epoch in ms):</b> " + logs_management_meta.actual_from + "<br>" +
"<b>Actual from:</b> " + new Date(logs_management_meta.actual_from) + "<br>" +
"<b>Actual to (epoch in ms):</b> " + logs_management_meta.actual_to + "<br>" +
"<b>Actual to:</b> " + new Date(logs_management_meta.actual_to) + "<br>" +
"<b>Requested quota:</b> " + logs_management_meta.requested_quota + " KiB<br>" +
"<b>Actual quota:</b> " + logs_management_meta.actual_quota + " KiB<br>" +
"<b>Number of distinct log records on this page:</b> " + logs_management_meta.num_lines + "<br>" +
"<b>User time:</b> " + logs_management_meta.user_time + "<br>" +
"<b>System time:</b> " + logs_management_meta.system_time + "<br>" +
"<b>Error code:</b> " + logs_management_meta.error_code + "<br>" +
"<b>Error message:</b> " + logs_management_meta.error + "<br>";
text += "<br><table border='1'><thead><tr><th>Timestamp</th><th>Log Record</th></thead>";
results_obj.data.forEach(function(data_entry) {
datetime = new Date(data_entry[0]); // data_entry[0] is the record timestamp in ms since the epoch
data_entry[1].forEach(function(log_entry) {
text += "<tr><td>" + datetime + "</td><td><pre style=\"white-space: pre-wrap; word-break: keep-all;\">" + log_entry + "</pre></td></tr>";
});
});
text += "</table>";
document.getElementById("results").innerHTML = text;
}
source = document.getElementById("sources");
sources_value = encodeURIComponent(sources.value);
req_from = document.getElementById("req_from");
req_from_value = encodeURIComponent(req_from.value);
req_to = document.getElementById("req_to");
req_to_value = encodeURIComponent(req_to.value);
req_quota = document.getElementById("req_quota");
req_quota_value = encodeURIComponent(req_quota.value);
keyword = document.getElementById("keyword");
keyword_value = encodeURIComponent(keyword.value);
sanitize_keyword = document.getElementById('keyword_is_regex').checked ? "0" : "1";
ignore_case = document.getElementById('keyword_case_sensitive').checked ? "0" : "1";
if(document.getElementById('use_functions').checked){
xmlhttp_req = "http://" + location.host + "/api/v1/function?function=logsmanagement" +
" from:" + req_from_value +
" to:" + req_to_value +
" quota:" + req_quota_value +
" keyword:" + keyword_value +
" sanitize_keyword: " + sanitize_keyword +
" ignore_case:" + ignore_case;
Array.from(document.getElementsByClassName("sourceCheckBox")).forEach(
function(element, index, array) {
if(element.checked){
xmlhttp_req += ' chartname:"' + element.name + '"';
}
}
);
} else {
xmlhttp_req = "http://" + location.host + "/api/v1/logsmanagement?" +
"from=" + req_from_value +
"&to=" + req_to_value +
"&quota=" + req_quota_value +
"&keyword=" + keyword_value +
"&sanitize_keyword=" + sanitize_keyword +
"&ignore_case=" + ignore_case;
Array.from(document.getElementsByClassName("sourceCheckBox")).forEach(
function(element, index, array) {
if(element.checked){
xmlhttp_req += "&chartname=" + element.name;
}
}
);
}
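// Illustrative shapes of the two requests built above (all values are placeholders):
//   functions: /api/v1/function?function=logsmanagement from:<ms> to:<ms> quota:<bytes> keyword:<kw> sanitize_keyword:<0|1> ignore_case:<0|1> chartname:"<source>"
//   GET API:   /api/v1/logsmanagement?from=<ms>&to=<ms>&quota=<bytes>&keyword=<kw>&sanitize_keyword=<0|1>&ignore_case=<0|1>&chartname=<source>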
console.log("Query:\n" + xmlhttp_req);
xmlhttp.open("GET", xmlhttp_req, true);
xmlhttp.send(null);
}
</script>
</body>
</html>

View File

@@ -0,0 +1,145 @@
#!/bin/bash
# Default configuration options
DEFAULT_BUILD_CLEAN_NETDATA=0
DEFAULT_BUILD_FOR_RELEASE=1
DEFAULT_NUM_LOG_SOURCES=0
DEFAULT_DELAY_BETWEEN_MSG_WRITE=1000000
DEFAULT_TOTAL_MSGS_PER_SOURCE=1000000
DEFAULT_QUERIES_DELAY=3600
DEFAULT_LOG_ROTATE_AFTER_SEC=3600
DEFAULT_DELAY_OPEN_TO_WRITE_SEC=6
DEFAULT_RUN_LOGS_MANAGEMENT_TESTS_ONLY=0
if [ "$1" == "-h" ] || [ "$1" == "--help" ]; then
echo "Usage: $(basename "$0") [ARGS]..."
echo "Example: $(basename "$0") 0 1 2 1000 1000000 10 6 6 0"
echo "Build, install and run netdata with logs management "
echo "functionality enabled and (optional) stress tests."
echo ""
echo "arg[1]: [build_clean_netdata] Default: $DEFAULT_BUILD_CLEAN_NETDATA"
echo "arg[2]: [build_for_release] Default: $DEFAULT_BUILD_FOR_RELEASE"
echo "arg[3]: [num_log_sources] Default: $DEFAULT_NUM_LOG_SOURCES"
echo "arg[4]: [delay_between_msg_write] Default: $DEFAULT_DELAY_BETWEEN_MSG_WRITE us"
echo "arg[5]: [total_msgs_per_source] Default: $DEFAULT_TOTAL_MSGS_PER_SOURCE"
echo "arg[6]: [queries_delay] Default: $DEFAULT_QUERIES_DELAY s"
echo "arg[7]: [log_rotate_after_sec] Default: $DEFAULT_LOG_ROTATE_AFTER_SEC s"
echo "arg[8]: [delay_open_to_write_sec] Default: $DEFAULT_DELAY_OPEN_TO_WRITE_SEC s"
echo "arg[9]: [run_logs_management_tests_only] Default: $DEFAULT_RUN_LOGS_MANAGEMENT_TESTS_ONLY"
exit 0
fi
build_clean_netdata="${1:-$DEFAULT_BUILD_CLEAN_NETDATA}"
build_for_release="${2:-$DEFAULT_BUILD_FOR_RELEASE}"
num_log_sources="${3:-$DEFAULT_NUM_LOG_SOURCES}"
delay_between_msg_write="${4:-$DEFAULT_DELAY_BETWEEN_MSG_WRITE}"
total_msgs_per_source="${5:-$DEFAULT_TOTAL_MSGS_PER_SOURCE}"
queries_delay="${6:-$DEFAULT_QUERIES_DELAY}"
log_rotate_after_sec="${7:-$DEFAULT_LOG_ROTATE_AFTER_SEC}"
delay_open_to_write_sec="${8:-$DEFAULT_DELAY_OPEN_TO_WRITE_SEC}"
run_logs_management_tests_only="${9:-$DEFAULT_RUN_LOGS_MANAGEMENT_TESTS_ONLY}"
if [ "$num_log_sources" -le 0 ]
then
enable_stress_tests=0
else
enable_stress_tests=1
fi
INSTALL_PATH="/tmp"
# Terminate running processes
sudo killall -s KILL netdata
sudo killall -s KILL stress_test
sudo killall -s KILL -u netdata
# Remove potentially persistent directories and files
sudo rm -f $INSTALL_PATH/netdata/var/log/netdata/error.log
sudo rm -rf $INSTALL_PATH/netdata/var/cache/netdata/logs_management_db
sudo rm -rf $INSTALL_PATH/netdata_log_management_stress_test_data
CPU_CORES=$(grep ^cpu\\scores /proc/cpuinfo | uniq | awk '{print $4}')
# Build or rebuild Netdata
if [ "$build_clean_netdata" -eq 1 ]
then
cd ../..
sudo $INSTALL_PATH/netdata/usr/libexec/netdata/netdata-uninstaller.sh -y -f -e $INSTALL_PATH/netdata/etc/netdata/.environment
sudo rm -rf $INSTALL_PATH/netdata/etc/netdata # Remove /etc/netdata if it persists for some reason
sudo git clean -dxff && git submodule update --init --recursive --force
if [ "$build_for_release" -eq 0 ]
then
c_flags="-O1 -ggdb -Wall -Wextra "
c_flags+="-fno-omit-frame-pointer -Wformat-signedness -fstack-protector-all -Wformat-truncation=2 -Wunused-result "
c_flags+="-DNETDATA_INTERNAL_CHECKS=1 -DNETDATA_DEV_MODE=1 -DLOGS_MANAGEMENT_STRESS_TEST=$enable_stress_tests "
# c_flags+="-Wl,--no-as-needed -ldl "
sudo CFLAGS="$c_flags" ./netdata-installer.sh \
--dont-start-it \
--dont-wait \
--disable-lto \
--disable-telemetry \
--disable-go \
--disable-ebpf \
--disable-ml \
--enable-logsmanagement-tests \
--install-prefix $INSTALL_PATH
else
c_flags="-DLOGS_MANAGEMENT_STRESS_TEST=$enable_stress_tests "
# c_flags+="-Wl,--no-as-needed -ldl "
sudo CFLAGS="$c_flags" ./netdata-installer.sh \
--dont-start-it \
--dont-wait \
--disable-telemetry \
--install-prefix $INSTALL_PATH
fi
sudo cp logsmanagement/stress_test/logs_query.html "$INSTALL_PATH/netdata/usr/share/netdata/web"
sudo chown -R netdata:netdata "$INSTALL_PATH/netdata/usr/share/netdata/web/logs_query.html"
else
cd ../.. && sudo make -j"$CPU_CORES" || exit 1 && sudo make install
sudo chown -R netdata:netdata "$INSTALL_PATH/netdata/usr/share/netdata/web"
fi
cd logsmanagement/stress_test || exit
if [ "$run_logs_management_tests_only" -eq 0 ]
then
# Rebuild and run stress test
if [ "$num_log_sources" -gt 0 ]
then
sudo -u netdata -g netdata mkdir $INSTALL_PATH/netdata_log_management_stress_test_data
gcc stress_test.c -DNUM_LOG_SOURCES="$num_log_sources" \
-DDELAY_BETWEEN_MSG_WRITE="$delay_between_msg_write" \
-DTOTAL_MSGS_PER_SOURCE="$total_msgs_per_source" \
-DQUERIES_DELAY="$queries_delay" \
-DLOG_ROTATE_AFTER_SEC="$log_rotate_after_sec" \
-DDELAY_OPEN_TO_WRITE_SEC="$delay_open_to_write_sec" \
-luv -Og -g -o stress_test
sudo -u netdata -g netdata ./stress_test &
sleep 1
fi
# Run Netdata
if [ "$build_for_release" -eq 0 ]
then
sudo -u netdata -g netdata -s gdb -ex="set confirm off" -ex=run --args $INSTALL_PATH/netdata/usr/sbin/netdata -D
elif [ "$build_for_release" -eq 2 ]
then
sudo -u netdata -g netdata -s gdb -ex="set confirm off" -ex=run --args $INSTALL_PATH/netdata/usr/libexec/netdata/plugins.d/logs-management.plugin
else
sudo -u netdata -g netdata ASAN_OPTIONS=log_path=stdout $INSTALL_PATH/netdata/usr/sbin/netdata -D
fi
else
if [[ $($INSTALL_PATH/netdata/usr/sbin/netdata -W buildinfo | grep -Fc DLOGS_MANAGEMENT_STRESS_TEST) -eq 1 ]]
then
sudo -u netdata -g netdata ASAN_OPTIONS=log_path=/dev/null $INSTALL_PATH/netdata/usr/libexec/netdata/plugins.d/logs-management.plugin --unittest
else
echo "======================================================================="
echo "run_logs_management_tests_only=1 but logs management tests cannot run."
echo "Netdata must be configured with --enable-logsmanagement-tests."
echo "Please rerun script with build_clean_netdata=1 and build_for_release=0."
echo "======================================================================="
fi
fi

View File

@@ -0,0 +1,386 @@
// SPDX-License-Identifier: GPL-3.0-or-later
/** @file stress_test.c
* @brief Black-box stress testing of Netdata Logs Management
*/
#include <assert.h>
#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>
#include <time.h>
#include <unistd.h>
#include <uv.h>
#include "../defaults.h"
#include "stress_test.h"
#define SIMULATED_LOGS_DIR "/tmp/netdata_log_management_stress_test_data"
#define LOG_ROTATION_CMD "logrotate --force logrotate.conf -s /tmp/netdata_log_management_stress_test_data/logrotate_status"
#define CSV_DELIMITER " "
#define USE_LTSV_FORMAT 0
#define MS_IN_S 1000
#define US_IN_S 1000000
#define NO_OF_FIELDS 10
#ifdef _WIN32
# define PIPENAME "\\\\?\\pipe\\netdata-logs-stress-test"
#else
# define PIPENAME "/tmp/netdata-logs-stress-test"
#endif // _WIN32
uv_process_t child_req;
uv_process_options_t options;
size_t max_msg_len;
static int log_files_no;
static volatile int log_rotated = 0;
static char **all_fields_arr[NO_OF_FIELDS];
static int all_fields_arr_sizes[NO_OF_FIELDS];
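/* Each simulated web log record is assembled from NO_OF_FIELDS randomly
 * selected values, one from each of the field arrays registered in
 * all_fields_arr (vhosts, ports, request clients, methods, response codes,
 * protocols, request/response sizes and SSL fields). */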
static char *vhosts_ports[] = {
"testhost.host:17",
"invalidhost&%$:80",
"testhost12.host:80",
"testhost57.host:19999",
"testhost111.host:77777",
NULL
};
static char *vhosts[] = {
"testhost.host",
"invalidhost&%$",
"testhost12.host",
"testhost57.host",
"testhost111.host",
NULL
};
static char *ports[] = {
"17",
"80",
"123",
"8080",
"19999",
"77777",
NULL
};
static char *req_clients[] = {
"192.168.15.14",
"192.168.2.1",
"188.133.132.15",
"156.134.132.15",
"2001:0db8:85a3:0000:0000:8a2e:0370:7334",
"8501:0ab8:85a3:0000:0000:4a5d:0370:5213",
"garbageAddress",
NULL
};
static char *req_methods[] = {
"GET",
"POST",
"UPDATE",
"DELETE",
"PATCH",
"PUT",
"INVALIDMETHOD",
NULL
};
static char *resp_codes[] = {
"5",
"200",
"202",
"404",
"410",
"1027",
NULL
};
static char *req_protos[] = {
"HTTP/1",
"HTTP/1.0",
"HTTP/2",
"HTTP/3",
NULL
};
static char *req_sizes[] = {
"236",
"635",
"954",
"-",
NULL
};
static char *resp_sizes[] = {
"128",
"452",
"1056",
"-",
NULL
};
static char *ssl_protos[] = {
"TLSv1",
"TLSv1.1",
"TLSv1.2",
"TLSv1.3",
"SSLv3",
"-",
NULL
};
static char *ssl_ciphers[] = {
"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
"TLS_PSK_WITH_AES_128_CCM_8",
"ECDHE-RSA-AES128-GCM-SHA256",
"TLS_RSA_WITH_DES_CBC_SHA",
"TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256",
"invalid_SSL_cipher_suite",
"invalidSSLCipher",
NULL
};
// "host:testhost.host\tport:80\treq_client:192.168.15.14\treq_method:\"GET\"\tresp_code:202\treq_proto:HTTP/1\treq_size:635\tresp_size:-\tssl_proto:TLSv1\tssl_cipher:TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
// TODO: Include query.h instead of copy-pasting
typedef struct db_query_params {
msec_t start_timestamp;
msec_t end_timestamp;
char *filename;
char *keyword;
char *results;
size_t results_size;
} logs_query_params_t;
size_t get_local_time(char *buf, size_t max_buf_size){
time_t rawtime;
struct tm *info;
time( &rawtime );
#if USE_LTSV_FORMAT
return strftime (buf, max_buf_size, "time:[%d/%b/%Y:%H:%M:%S %z]",localtime( &rawtime ));
#else
return strftime (buf, max_buf_size, "[%d/%b/%Y:%H:%M:%S %z]",localtime( &rawtime ));
#endif
}
static void produce_logs(void *arg) {
msec_t runtime;
msec_t start_time = now_realtime_msec();
int log_no = *((int *)arg);
int rc = 0;
long int msgs_written = 0;
uv_file file_handle;
uv_buf_t uv_buf;
char *buf = malloc(max_msg_len + 100);
size_t buf_size;
uv_fs_t write_req;
uv_loop_t loop;
uv_loop_init(&loop);
char log_filename[100];
sprintf(log_filename, "%s/%d.log", SIMULATED_LOGS_DIR, log_no);
uv_fs_t open_req;
rc = uv_fs_open(&loop, &open_req, log_filename, O_WRONLY | O_CREAT | O_TRUNC, 0777, NULL);
if (rc < 0) {
fprintf(stderr, "[STRESS_TEST] file_open() error: %s (%d) %s\n", log_filename, rc, uv_strerror(rc));
} else {
fprintf(stderr, "[STRESS_TEST] Opened file: %s\n", log_filename);
file_handle = open_req.result; // open_req->result of a uv_fs_t is the file descriptor in case of the uv_fs_open
}
uv_fs_req_cleanup(&open_req);
sleep(DELAY_OPEN_TO_WRITE_SEC);
fprintf(stderr, "[STRESS_TEST] Start logging: %s\n", log_filename);
int applied_close_open = 0;
while (msgs_written < TOTAL_MSGS_PER_SOURCE) {
size_t msg_timestamp_len = 50;
msg_timestamp_len = get_local_time(buf, msg_timestamp_len);
buf_size = msg_timestamp_len;
for(int i = 0; i < NO_OF_FIELDS; i++){
strcpy(&buf[buf_size++], CSV_DELIMITER);
int arr_item_off = rand() % all_fields_arr_sizes[i];
size_t arr_item_len = strlen(all_fields_arr[i][arr_item_off]);
memcpy(&buf[buf_size], all_fields_arr[i][arr_item_off], arr_item_len);
buf_size += arr_item_len;
}
buf[buf_size] = '\n';
uv_buf = uv_buf_init(buf, buf_size + 1);
uv_fs_write(&loop, &write_req, file_handle, &uv_buf, 1, -1, NULL);
msgs_written++;
if(!(msgs_written % 1000000)) fprintf(stderr, "[STRESS_TEST] Wrote %" PRId64 " messages to %s\n", msgs_written, log_filename);
if(log_rotated && !applied_close_open) {
uv_fs_t close_req;
rc = uv_fs_close(&loop, &close_req, file_handle, NULL);
if(rc) {
fprintf(stderr, "[STRESS_TEST] file_close() error: %s (%d) %s\n", log_filename, rc, uv_strerror(rc));
assert(0);
}
uv_fs_req_cleanup(&close_req);
rc = uv_fs_open(&loop, &open_req, log_filename, O_WRONLY | O_CREAT | O_TRUNC , 0777, NULL);
if (rc < 0) {
fprintf(stderr, "[STRESS_TEST] file_open() error: %s (%d) %s\n", log_filename, rc, uv_strerror(rc));
assert(0);
} else {
fprintf(stderr, "[STRESS_TEST] Rotated file: %s\n", log_filename);
file_handle = open_req.result; // open_req->result of a uv_fs_t is the file descriptor in case of the uv_fs_open
}
uv_fs_req_cleanup(&open_req);
applied_close_open = 1;
fflush(stderr);
}
#if DELAY_BETWEEN_MSG_WRITE /**< Sleep delay (in us) in between consequent messages writes to a file **/
usleep(DELAY_BETWEEN_MSG_WRITE);
#endif
}
runtime = now_realtime_msec() - start_time - DELAY_OPEN_TO_WRITE_SEC * MS_IN_S;
fprintf(stderr, "[STRESS_TEST] It took %" PRIu64 "ms to write %" PRId64 " log records in %s (%" PRId64 "k msgs/s)\n",
runtime, msgs_written, log_filename, msgs_written / runtime);
}
static void log_rotate(void *arg){
uv_sleep((DELAY_OPEN_TO_WRITE_SEC + LOG_ROTATE_AFTER_SEC) * MS_IN_S);
assert(system(LOG_ROTATION_CMD) != -1);
log_rotated = 1;
fprintf(stderr, "[STRESS_TEST] Rotate log sources\n");
fflush(stderr);
}
static void connect_cb(uv_connect_t* req, int status){
int rc = 0;
if(status < 0){
fprintf(stderr, "[STRESS_TEST] Failed to connect to pipe!\n");
exit(-1);
}
else
fprintf(stderr, "[STRESS_TEST] Connection to pipe successful!\n");
uv_write_t write_req;
write_req.data = req->handle;
// Serialise logs_query_params_t
char *buf = calloc(100 * log_files_no, sizeof(char));
sprintf(buf, "%d", log_files_no);
for(int i = 0; i < log_files_no ; i++){
sprintf(&buf[strlen(buf)], ",0,2147483646000," SIMULATED_LOGS_DIR "/%d.log,%s,%zu", i, " ", (size_t) MAX_LOG_MSG_SIZE);
}
fprintf(stderr, "[STRESS_TEST] Serialised DB query params: %s\n", buf);
// Write to pipe
uv_buf_t uv_buf = uv_buf_init(buf, strlen(buf));
rc = uv_write(&write_req, (uv_stream_t *) req->handle, &uv_buf, 1, NULL);
if (rc) {
fprintf(stderr, "[STRESS_TEST] uv_write() error: %s\n", uv_strerror(rc));
uv_close((uv_handle_t *) req->handle, NULL);
exit(-1);
}
#if 1
uv_shutdown_t shutdown_req;
rc = uv_shutdown(&shutdown_req, (uv_stream_t *) req->handle, NULL);
if (rc) {
fprintf(stderr, "[STRESS_TEST] uv_shutdown() error: %s\n", uv_strerror(rc));
uv_close((uv_handle_t *) req->handle, NULL);
exit(-1);
}
#endif
}
int main(int argc, const char *argv[]) {
fprintf(stdout, "*****************************************************************************\n"
"%-15s %40s\n",
"* [STRESS_TEST] Starting stress_test", "*");
srand(time(NULL));
all_fields_arr[0] = vhosts;
all_fields_arr[1] = ports;
all_fields_arr[2] = req_clients;
all_fields_arr[3] = req_methods;
all_fields_arr[4] = resp_codes;
all_fields_arr[5] = req_protos;
all_fields_arr[6] = req_sizes;
all_fields_arr[7] = resp_sizes;
all_fields_arr[8] = ssl_protos;
all_fields_arr[9] = ssl_ciphers;
for (int i = 0; i < NO_OF_FIELDS; i++){
char **arr = all_fields_arr[i];
int arr_size = 0;
size_t max_item_len = 0;
while(arr[arr_size] != NULL){
size_t item_len = strlen(arr[arr_size]);
if(item_len > max_item_len) max_item_len = item_len;
arr_size++;
}
max_msg_len += max_item_len;
all_fields_arr_sizes[i] = arr_size;
}
char *ptr;
log_files_no = NUM_LOG_SOURCES;
fprintf(stdout, "*****************************************************************************\n"
"%-15s%42s %-10u%9s\n"
"%-15s%42s %-10u%9s\n"
"%-15s%42s %-10u%9s\n"
"%-15s%42s %-10u%9s\n"
"%-15s%42s %-10u%9s\n"
"%-15s%42s %-10u%9s\n"
"*****************************************************************************\n",
"* [STRESS_TEST]", "Number of log sources to simulate:", log_files_no, "file *",
"* [STRESS_TEST]", "Total log records to produce per source:", TOTAL_MSGS_PER_SOURCE, "records *",
"* [STRESS_TEST]", "Delay between log record write to file:", DELAY_BETWEEN_MSG_WRITE, "us *",
"* [STRESS_TEST]", "Log sources to rotate via create after:", LOG_ROTATE_AFTER_SEC, "s *",
"* [STRESS_TEST]", "Queries to be executed after:", QUERIES_DELAY, "s *",
"* [STRESS_TEST]", "Delay until start writing logs:", DELAY_OPEN_TO_WRITE_SEC, "s *");
/* Start threads that produce log messages */
uv_thread_t *log_producer_threads = malloc(log_files_no * sizeof(uv_thread_t));
int *log_producer_thread_no = malloc(log_files_no * sizeof(int));
for (int i = 0; i < log_files_no; i++) {
fprintf(stderr, "[STRESS_TEST] Starting up log producer for %d.log\n", i);
log_producer_thread_no[i] = i;
assert(!uv_thread_create(&log_producer_threads[i], produce_logs, &log_producer_thread_no[i]));
}
uv_thread_t *log_rotate_thread = malloc(sizeof(uv_thread_t));
assert(!uv_thread_create(log_rotate_thread, log_rotate, NULL));
for (int j = 0; j < log_files_no; j++) {
uv_thread_join(&log_producer_threads[j]);
}
sleep(QUERIES_DELAY); // Give netdata-logs more than LOG_FILE_READ_INTERVAL to ensure the entire log file has been read.
uv_pipe_t query_data_pipe;
uv_pipe_init(uv_default_loop(), &query_data_pipe, 1);
uv_connect_t connect_req;
uv_pipe_connect(&connect_req, &query_data_pipe, PIPENAME, connect_cb);
uv_run(uv_default_loop(), UV_RUN_DEFAULT);
uv_close((uv_handle_t *) &query_data_pipe, NULL);
}

View File

@@ -0,0 +1,780 @@
// SPDX-License-Identifier: GPL-3.0-or-later
/** @file unit_test.c
* @brief Includes unit tests for the Logs Management project
*/
#include "unit_test.h"
#include <stdlib.h>
#include <stdio.h>
#define __USE_XOPEN_EXTENDED
#include <ftw.h>
#include <unistd.h>
#include "../circular_buffer.h"
#include "../helper.h"
#include "../logsmanag_config.h"
#include "../parser.h"
#include "../query.h"
#include "../db_api.h"
static int old_stdout = STDOUT_FILENO;
static int old_stderr = STDERR_FILENO;
#define SUPRESS_STDX(stream_no) \
{ \
if(stream_no == STDOUT_FILENO) \
old_stdout = dup(old_stdout); \
else \
old_stderr = dup(old_stderr); \
if(!freopen("/dev/null", "w", stream_no == STDOUT_FILENO ? stdout : stderr)) \
exit(-1); \
}
#define UNSUPRESS_STDX(stream_no) \
{ \
fclose(stream_no == STDOUT_FILENO ? stdout : stderr); \
if(stream_no == STDOUT_FILENO) \
stdout = fdopen(old_stdout, "w"); \
else \
stderr = fdopen(old_stderr, "w"); \
}
#define SUPRESS_STDOUT() SUPRESS_STDX(STDOUT_FILENO)
#define SUPRESS_STDERR() SUPRESS_STDX(STDERR_FILENO)
#define UNSUPRESS_STDOUT() UNSUPRESS_STDX(STDOUT_FILENO)
#define UNSUPRESS_STDERR() UNSUPRESS_STDX(STDERR_FILENO)
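/* The SUPRESS/UNSUPRESS macros temporarily redirect stdout or stderr to
 * /dev/null and later restore the saved file descriptor, so that noisy
 * output from the code under test does not pollute the unit test results. */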
#define LOG_RECORDS_PARTIAL "\
127.0.0.1 - - [30/Jun/2022:16:43:51 +0300] \"GET / HTTP/1.0\" 200 11192 \"-\" \"ApacheBench/2.3\"\n\
192.168.2.1 - - [30/Jun/2022:16:43:51 +0300] \"PUT / HTTP/1.0\" 400 11192 \"-\" \"ApacheBench/2.3\"\n\
255.91.204.202 - mann1475 [30/Jun/2023:21:05:09 +0000] \"POST /vertical/turn-key/engineer/e-enable HTTP/1.0\" 401 11411\n\
91.126.60.234 - ritchie4302 [30/Jun/2023:21:05:09 +0000] \"PATCH /empower/interfaces/deploy HTTP/2.0\" 404 29063\n\
120.134.242.160 - runte5364 [30/Jun/2023:21:05:09 +0000] \"GET /visualize/enterprise/optimize/embrace HTTP/1.0\" 400 10637\n\
61.134.57.25 - - [30/Jun/2023:21:05:09 +0000] \"HEAD /metrics/optimize/bandwidth HTTP/1.1\" 200 26713\n\
18.90.118.50 - - [30/Jun/2023:21:05:09 +0000] \"PATCH /methodologies/extend HTTP/2.0\" 205 15708\n\
21.174.251.223 - zulauf8852 [30/Jun/2023:21:05:09 +0000] \"POST /proactive HTTP/2.0\" 100 9456\n\
20.217.190.46 - - [30/Jun/2023:21:05:09 +0000] \"GET /mesh/frictionless HTTP/1.1\" 301 3153\n\
130.43.250.80 - hintz5738 [30/Jun/2023:21:05:09 +0000] \"PATCH /e-markets/supply-chains/mindshare HTTP/2.0\" 401 13039\n\
222.36.95.121 - pouros3514 [30/Jun/2023:21:05:09 +0000] \"DELETE /e-commerce/scale/customized/best-of-breed HTTP/1.0\" 406 8304\n\
133.117.9.29 - hoeger7673 [30/Jun/2023:21:05:09 +0000] \"PUT /extensible/maximize/visualize/bricks-and-clicks HTTP/1.0\" 403 17067\n\
65.145.39.136 - heathcote3368 [30/Jun/2023:21:05:09 +0000] \"DELETE /technologies/iterate/viral HTTP/1.1\" 501 29982\n\
153.132.199.122 - murray8217 [30/Jun/2023:21:05:09 +0000] \"PUT /orchestrate/visionary/visualize HTTP/1.1\" 500 12705\n\
140.149.178.196 - hickle8613 [30/Jun/2023:21:05:09 +0000] \"PATCH /drive/front-end/infomediaries/maximize HTTP/1.1\" 406 20179\n\
237.31.189.207 - - [30/Jun/2023:21:05:09 +0000] \"GET /bleeding-edge/recontextualize HTTP/1.1\" 406 24815\n\
210.217.232.107 - - [30/Jun/2023:21:05:09 +0000] \"POST /redefine/next-generation/relationships/intuitive HTTP/2.0\" 205 14028\n\
121.2.189.119 - marvin5528 [30/Jun/2023:21:05:09 +0000] \"PUT /sexy/innovative HTTP/2.0\" 204 10689\n\
120.13.121.164 - jakubowski1027 [30/Jun/2023:21:05:09 +0000] \"PUT /sexy/initiatives/morph/eyeballs HTTP/1.0\" 502 22287\n\
28.229.107.175 - wilderman8830 [30/Jun/2023:21:05:09 +0000] \"PATCH /visionary/best-of-breed HTTP/1.1\" 503 6010\n\
210.147.186.50 - - [30/Jun/2023:21:05:09 +0000] \"PUT /paradigms HTTP/2.0\" 501 18054\n\
185.157.236.127 - - [30/Jun/2023:21:05:09 +0000] \"GET /maximize HTTP/1.0\" 400 13650\n\
236.90.19.165 - - [30/Jun/2023:21:23:34 +0000] \"GET /next-generation/user-centric/24%2f365 HTTP/1.0\" 400 5212\n\
233.182.111.100 - torphy3512 [30/Jun/2023:21:23:34 +0000] \"PUT /seamless/incentivize HTTP/1.0\" 304 27750\n\
80.185.129.193 - - [30/Jun/2023:21:23:34 +0000] \"HEAD /strategic HTTP/1.1\" 502 6146\n\
182.145.92.52 - - [30/Jun/2023:21:23:34 +0000] \"PUT /dot-com/grow/networks HTTP/1.0\" 301 1763\n\
46.14.122.16 - - [30/Jun/2023:21:23:34 +0000] \"HEAD /deliverables HTTP/1.0\" 301 7608\n\
162.111.143.158 - bruen3883 [30/Jun/2023:21:23:34 +0000] \"POST /extensible HTTP/2.0\" 403 22752\n\
201.13.111.255 - hilpert8768 [30/Jun/2023:21:23:34 +0000] \"PATCH /applications/engage/frictionless/content HTTP/1.0\" 406 24866\n\
76.90.243.15 - - [30/Jun/2023:21:23:34 +0000] \"PATCH /24%2f7/seamless/target/enable HTTP/1.1\" 503 8176\n\
187.79.114.48 - - [30/Jun/2023:21:23:34 +0000] \"GET /synergistic HTTP/1.0\" 503 14251\n\
59.52.178.62 - kirlin3704 [30/Jun/2023:21:23:34 +0000] \"POST /web-readiness/grow/evolve HTTP/1.0\" 501 13305\n\
27.46.78.167 - - [30/Jun/2023:21:23:34 +0000] \"PATCH /interfaces/schemas HTTP/2.0\" 100 4860\n\
191.9.15.43 - goodwin7310 [30/Jun/2023:21:23:34 +0000] \"POST /engage/innovate/web-readiness/roi HTTP/2.0\" 404 4225\n\
195.153.126.148 - klein8350 [30/Jun/2023:21:23:34 +0000] \"DELETE /killer/synthesize HTTP/1.0\" 204 15134\n\
162.207.64.184 - mayert4426 [30/Jun/2023:21:23:34 +0000] \"HEAD /intuitive/vertical/incentivize HTTP/1.0\" 204 23666\n\
185.96.7.205 - - [30/Jun/2023:21:23:34 +0000] \"DELETE /communities/deliver/user-centric HTTP/1.0\" 416 18210\n\
187.180.105.55 - - [30/Jun/2023:21:23:34 +0000] \"POST /customized HTTP/2.0\" 200 1396\n\
216.82.243.54 - kunze7200 [30/Jun/2023:21:23:34 +0000] \"PUT /e-tailers/evolve/leverage/engage HTTP/2.0\" 504 1665\n\
170.128.69.228 - - [30/Jun/2023:21:23:34 +0000] \"DELETE /matrix/open-source/proactive HTTP/1.0\" 301 18326\n\
253.200.84.66 - steuber5220 [30/Jun/2023:21:23:34 +0000] \"POST /benchmark/experiences HTTP/1.1\" 504 18944\n\
28.240.40.161 - - [30/Jun/2023:21:23:34 +0000] \"PATCH /initiatives HTTP/1.0\" 500 6500\n\
134.163.236.75 - - [30/Jun/2023:21:23:34 +0000] \"HEAD /platforms/recontextualize HTTP/1.0\" 203 22188\n\
241.64.230.66 - - [30/Jun/2023:21:23:34 +0000] \"GET /cutting-edge/methodologies/b2c/cross-media HTTP/1.1\" 403 20698\n\
210.216.183.157 - okuneva6218 [30/Jun/2023:21:23:34 +0000] \"POST /generate/incentivize HTTP/2.0\" 403 25900\n\
164.219.134.242 - - [30/Jun/2023:21:23:34 +0000] \"HEAD /efficient/killer/whiteboard HTTP/2.0\" 501 22081\n\
173.156.54.99 - harvey6165 [30/Jun/2023:21:23:34 +0000] \"HEAD /dynamic/cutting-edge/sexy/user-centric HTTP/2.0\" 200 2995\n\
215.242.74.14 - - [30/Jun/2023:21:23:34 +0000] \"PUT /roi HTTP/1.0\" 204 9674\n\
133.77.49.187 - lockman3141 [30/Jun/2023:21:23:34 +0000] \"PUT /mindshare/transition HTTP/2.0\" 503 2726\n\
159.77.190.255 - - [30/Jun/2023:21:23:34 +0000] \"DELETE /world-class/bricks-and-clicks HTTP/1.1\" 501 21712\n\
65.6.237.113 - - [30/Jun/2023:21:23:34 +0000] \"PATCH /e-enable HTTP/2.0\" 405 11865\n\
194.76.211.16 - champlin6280 [30/Jun/2023:21:23:34 +0000] \"PUT /applications/redefine/eyeballs/mindshare HTTP/1.0\" 302 27679\n\
96.206.219.202 - - [30/Jun/2023:21:23:34 +0000] \"PUT /solutions/mindshare/vortals/transition HTTP/1.0\" 403 7385\n\
255.80.116.201 - hintz8162 [30/Jun/2023:21:23:34 +0000] \"POST /frictionless/e-commerce HTTP/1.0\" 302 9235\n\
89.66.165.183 - smith2655 [30/Jun/2023:21:23:34 +0000] \"HEAD /markets/synergize HTTP/2.0\" 501 28055\n\
39.210.168.14 - - [30/Jun/2023:21:23:34 +0000] \"GET /integrate/killer/end-to-end/infrastructures HTTP/1.0\" 302 11311\n\
173.99.112.210 - - [30/Jun/2023:21:23:34 +0000] \"GET /interfaces HTTP/2.0\" 503 1471\n\
108.4.157.6 - morissette1161 [30/Jun/2023:21:23:34 +0000] \"POST /mesh/convergence HTTP/1.1\" 403 18708\n\
174.160.107.162 - - [30/Jun/2023:21:23:34 +0000] \"POST /vortals/monetize/utilize/synergistic HTTP/1.1\" 302 13252\n\
188.8.105.56 - beatty6880 [30/Jun/2023:21:23:34 +0000] \"POST /web+services/innovate/generate/leverage HTTP/1.1\" 301 29856\n\
115.179.64.255 - - [30/Jun/2023:21:23:34 +0000] \"PATCH /transform/transparent/b2c/holistic HTTP/1.1\" 406 10208\n\
48.104.215.32 - - [30/Jun/2023:21:23:34 +0000] \"DELETE /drive/clicks-and-mortar HTTP/1.0\" 501 13752\n\
75.212.115.12 - pfannerstill5140 [30/Jun/2023:21:23:34 +0000] \"PATCH /leading-edge/mesh/methodologies HTTP/1.0\" 503 4946\n\
52.75.2.117 - osinski2030 [30/Jun/2023:21:23:34 +0000] \"PUT /incentivize/recontextualize HTTP/1.1\" 301 8785\n"
#define LOG_RECORD_WITHOUT_NEW_LINE \
"82.39.169.93 - streich5722 [30/Jun/2023:21:23:34 +0000] \"GET /action-items/leading-edge/reinvent/maximize HTTP/1.1\" 500 1228"
#define LOG_RECORDS_WITHOUT_TERMINATING_NEW_LINE \
LOG_RECORDS_PARTIAL \
LOG_RECORD_WITHOUT_NEW_LINE
#define LOG_RECORD_WITH_NEW_LINE \
"131.128.33.109 - turcotte6735 [30/Jun/2023:21:23:34 +0000] \"PUT /distributed/strategize HTTP/1.1\" 401 16471\n"
#define LOG_RECORDS_WITH_TERMINATING_NEW_LINE \
LOG_RECORDS_PARTIAL \
LOG_RECORD_WITH_NEW_LINE
static int test_compression_decompression() {
int errors = 0;
fprintf(stderr, "%s():\n", __FUNCTION__);
Circ_buff_item_t item;
item.text_size = sizeof(LOG_RECORDS_WITH_TERMINATING_NEW_LINE);
fprintf(stderr, "Testing LZ4_compressBound()...\n");
size_t required_compressed_space = LZ4_compressBound(item.text_size);
if(!required_compressed_space){
fprintf(stderr, "- Error while using LZ4_compressBound()\n");
return ++errors;
}
item.data_max_size = item.text_size + required_compressed_space;
item.data = mallocz(item.data_max_size);
memcpy(item.data, LOG_RECORDS_WITH_TERMINATING_NEW_LINE, sizeof(LOG_RECORDS_WITH_TERMINATING_NEW_LINE));
fprintf(stderr, "Testing LZ4_compress_fast()...\n");
item.text_compressed = item.data + item.text_size;
item.text_compressed_size = LZ4_compress_fast( item.data, item.text_compressed,
item.text_size, required_compressed_space, 1);
if(!item.text_compressed_size){
fprintf(stderr, "- Error while using LZ4_compress_fast()\n");
return ++errors;
}
char *decompressed_text = mallocz(item.text_size);
if(LZ4_decompress_safe( item.text_compressed,
decompressed_text,
item.text_compressed_size,
item.text_size) < 0){
fprintf(stderr, "- Error in decompress_text()\n");
return ++errors;
}
if(memcmp(item.data, decompressed_text, item.text_size)){
fprintf(stderr, "- Error, original and decompressed data not the same\n");
++errors;
}
fprintf(stderr, "%s\n", errors ? "FAIL" : "OK");
return errors;
}
static int test_read_last_line() {
int errors = 0;
fprintf(stderr, "%s():\n", __FUNCTION__);
#if defined(_WIN32) || defined(_WIN64)
char tmpname[MAX_PATH] = "/tmp/tmp.XXXXXX";
#else
char tmpname[] = "/tmp/tmp.XXXXXX";
#endif
int fd = mkstemp(tmpname);
if (fd == -1){
fprintf(stderr, "mkstemp() Failed with error %s\n", strerror(errno));
exit(EXIT_FAILURE);
}
FILE *tmpfp = fdopen(fd, "r+");
if (tmpfp == NULL) {
close(fd);
unlink(tmpname);
exit(EXIT_FAILURE);
}
if(fprintf(tmpfp, "%s", LOG_RECORDS_WITHOUT_TERMINATING_NEW_LINE) <= 0){
close(fd);
unlink(tmpname);
exit(EXIT_FAILURE);
}
fflush(tmpfp);
fprintf(stderr, "Testing read of LOG_RECORD_WITHOUT_NEW_LINE...\n");
errors += strcmp(LOG_RECORD_WITHOUT_NEW_LINE, read_last_line(tmpname, 0)) ? 1 : 0;
if(fprintf(tmpfp, "\n%s", LOG_RECORD_WITH_NEW_LINE) <= 0){
close(fd);
unlink(tmpname);
exit(EXIT_FAILURE);
}
fflush(tmpfp);
fprintf(stderr, "Testing read of LOG_RECORD_WITH_NEW_LINE...\n");
errors += strcmp(LOG_RECORD_WITH_NEW_LINE, read_last_line(tmpname, 0)) ? 1 : 0;
unlink(tmpname);
close(fd);
fprintf(stderr, "%s\n", errors ? "FAIL" : "OK");
return errors;
}
const char * const parse_configs_to_test[] = {
/* [1] Apache csvCombined 1 */
"127.0.0.1 - - [15/Oct/2020:04:43:51 -0700] \"GET / HTTP/1.0\" 200 11228 \"-\" \"ApacheBench/2.3\"",
/* [2] Apache csvCombined 2 - extra white space */
"::1 - - [01/Sep/2022:19:04:42 +0100] \"GET / HTTP/1.1\" 200 3477 \"-\" \"Mozilla/5.0 (Windows NT 10.0; \
Win64; x64; rv:103.0) Gecko/20100101 Firefox/103.0\"",
/* [3] Apache csvCombined 3 - with new line */
"209.202.252.202 - rosenbaum7551 [20/Jun/2023:14:42:27 +0000] \"PUT /harness/networks/initiatives/engineer HTTP/2.0\"\
403 42410 \"https://www.senioriterate.name/streamline/exploit\" \"Opera/10.54 (Macintosh; Intel Mac OS X 10_7_6;\
en-US) Presto/2.12.334 Version/10.00\"\n",
/* [4] Apache csvCombined 4 - invalid request field */
"::1 - - [13/Jul/2023:21:00:56 +0100] \"-\" 408 - \"-\" \"-\"",
/* [5] Apache csvVhostCombined */
"XPS-wsl.localdomain:80 ::1 - - [30/Jun/2022:20:59:29 +0300] \"GET / HTTP/1.1\" 200 3477 \"-\" \"Mozilla\
/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.5060.53 Safari/537.36\
Edg/103.0.1264.37\"",
/* [6] Apache csvCommon 1 */
"127.0.0.1 - - [30/Jun/2022:16:43:51 +0300] \"GET / HTTP/1.0\" 200 11228",
/* [7] Apache csvCommon 2 - with carriage return */
"180.89.137.89 - barrows1527 [05/Jun/2023:17:46:08 +0000]\
\"DELETE /b2c/viral/innovative/reintermediate HTTP/1.0\" 416 99\r",
/* [8] Apache csvCommon 3 - with new line */
"212.113.230.101 - - [20/Jun/2023:14:29:49 +0000] \"PATCH /strategic HTTP/1.1\" 404 1217\n",
/* [9] Apache csvVhostCommon 1 */
"XPS-wsl.localdomain:80 127.0.0.1 - - [30/Jun/2022:16:43:51 +0300] \"GET / HTTP/1.0\" 200 11228",
/* [10] Apache csvVhostCommon 2 - with new line and extra white space */
"XPS-wsl.localdomain:80 2001:0db8:85a3:0000:0000:8a2e:0370:7334 - - [30/Jun/2022:16:43:51 +0300] \"GET /\
HTTP/1.0\" 200 11228\n",
/* [11] Nginx csvCombined */
"47.29.201.179 - - [28/Feb/2019:13:17:10 +0000] \"GET /?p=1 HTTP/2.0\" 200 5316 \"https://dot.com/?p=1\"\
\"Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.119 Safari/537.36\"",
};
const web_log_line_field_t parse_config_expected[][15] = {
/* [1] */ {REQ_CLIENT , CUSTOM , CUSTOM, TIME , TIME, REQ , RESP_CODE, RESP_SIZE, CUSTOM , CUSTOM , -1, -1, -1, -1, -1}, /* Apache csvCombined 1 */
/* [2] */ {REQ_CLIENT , CUSTOM , CUSTOM, TIME , TIME, REQ , RESP_CODE, RESP_SIZE, CUSTOM , CUSTOM , -1, -1, -1, -1, -1}, /* Apache csvCombined 2 */
/* [3] */ {REQ_CLIENT , CUSTOM , CUSTOM, TIME , TIME, REQ , RESP_CODE, RESP_SIZE, CUSTOM , CUSTOM , -1, -1, -1, -1, -1}, /* Apache csvCombined 3 */
/* [4] */ {REQ_CLIENT , CUSTOM , CUSTOM, TIME , TIME, REQ , RESP_CODE, RESP_SIZE, CUSTOM , CUSTOM , -1, -1, -1, -1, -1}, /* Apache csvCombined 4 */
/* [5] */ {VHOST_WITH_PORT, REQ_CLIENT, CUSTOM, CUSTOM, TIME, TIME, REQ , RESP_CODE, RESP_SIZE, CUSTOM , CUSTOM, -1, -1, -1, -1}, /* Apache csvVhostCombined */
/* [6] */ {REQ_CLIENT , CUSTOM , CUSTOM, TIME , TIME, REQ , RESP_CODE, RESP_SIZE, -1 , -1 , -1, -1, -1, -1, -1}, /* Apache csvCommon 1 */
/* [7] */ {REQ_CLIENT , CUSTOM , CUSTOM, TIME , TIME, REQ , RESP_CODE, RESP_SIZE, -1 , -1 , -1, -1, -1, -1, -1}, /* Apache csvCommon 2 */
/* [8] */ {REQ_CLIENT , CUSTOM , CUSTOM, TIME , TIME, REQ , RESP_CODE, RESP_SIZE, -1 , -1 , -1, -1, -1, -1, -1}, /* Apache csvCommon 3 */
/* [9] */ {VHOST_WITH_PORT, REQ_CLIENT, CUSTOM, CUSTOM, TIME, TIME, REQ , RESP_CODE, RESP_SIZE, -1 , -1, -1, -1, -1, -1}, /* Apache csvVhostCommon 1 */
/* [10] */ {VHOST_WITH_PORT, REQ_CLIENT, CUSTOM, CUSTOM, TIME, TIME, REQ , RESP_CODE, RESP_SIZE, -1 , -1, -1, -1, -1, -1}, /* Apache csvVhostCommon 2 */
/* [11] */ {REQ_CLIENT , CUSTOM , CUSTOM, TIME , TIME, REQ, RESP_CODE, RESP_SIZE, CUSTOM , CUSTOM , -1, -1, -1, -1, -1}, /* Nginx csvCombined */
};
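/* Note (added): each row of parse_config_expected[] corresponds, in order, to an entry of
 * parse_configs_to_test[], and the unused trailing slots are filled with -1 so that the
 * expected number of fields can be derived at runtime by
 * setup_parse_config_expected_num_fields() below. */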
static const char parse_config_delim = ' ';
static int *parse_config_expected_num_fields = NULL;
static void setup_parse_config_expected_num_fields() {
fprintf(stderr, "%s():\n", __FUNCTION__);
for(int i = 0; i < (int) (sizeof(parse_configs_to_test) / sizeof(parse_configs_to_test[0])); i++){
parse_config_expected_num_fields = reallocz(parse_config_expected_num_fields, (i + 1) * sizeof(int));
parse_config_expected_num_fields[i] = 0;
for(int j = 0; (int) parse_config_expected[i][j] != -1; j++){
parse_config_expected_num_fields[i]++;
}
}
fprintf(stderr, "OK\n");
}
static int test_count_fields() {
int errors = 0;
fprintf(stderr, "%s():\n", __FUNCTION__);
for(int i = 0; i < (int) (sizeof(parse_configs_to_test) / sizeof(parse_configs_to_test[0])); i++){
if(count_fields(parse_configs_to_test[i], parse_config_delim) != parse_config_expected_num_fields[i]){
fprintf(stderr, "- Error (count_fields() result incorrect) for:\n%s", parse_configs_to_test[i]);
++errors;
}
}
fprintf(stderr, "%s\n", errors ? "FAIL" : "OK");
return errors;
}
static int test_auto_detect_web_log_parser_config() {
int errors = 0;
fprintf(stderr, "%s():\n", __FUNCTION__);
for(int i = 0; i < (int) (sizeof(parse_configs_to_test) / sizeof(parse_configs_to_test[0])); i++){
size_t line_sz = strlen(parse_configs_to_test[i]) + 1;
char *line = strdupz(parse_configs_to_test[i]);
if(line[line_sz - 2] != '\n' && line[line_sz - 2] != '\r'){
line = reallocz(line, ++line_sz); // +1 to add '\n' char
line[line_sz - 1] = '\0';
line[line_sz - 2] = '\n';
}
Web_log_parser_config_t *wblp_conf = auto_detect_web_log_parser_config(line, parse_config_delim);
if(!wblp_conf){
fprintf(stderr, "- Error (NULL wblp_conf) for:\n%s", line);
++errors;
} else if(wblp_conf->num_fields != parse_config_expected_num_fields[i]){
fprintf(stderr, "- Error (number of fields mismatch) for:\n%s", line);
fprintf(stderr, "Expected %d fields but auto-detected %d\n", parse_config_expected_num_fields[i], wblp_conf->num_fields);
++errors;
} else {
for(int j = 0; (int) parse_config_expected[i][j] != -1; j++){
if(wblp_conf->fields[j] != parse_config_expected[i][j]){
fprintf(stderr, "- Error (field type mismatch) for:\n%s", line);
++errors;
break;
}
}
}
freez(line);
if(wblp_conf) freez(wblp_conf->fields);
freez(wblp_conf);
}
fprintf(stderr, "%s\n", errors ? "FAIL" : "OK");
return errors;
}
Log_line_parsed_t log_line_parsed_expected[] = {
/* --------------------------------------
char vhost[VHOST_MAX_LEN];
int port;
char req_scheme[REQ_SCHEME_MAX_LEN];
char req_client[REQ_CLIENT_MAX_LEN];
char req_method[REQ_METHOD_MAX_LEN];
char req_URL[REQ_URL_MAX_LEN];
char req_proto[REQ_PROTO_MAX_LEN];
int req_size;
int req_proc_time;
int resp_code;
int resp_size;
int ups_resp_time;
char ssl_proto[SSL_PROTO_MAX_LEN];
char ssl_cipher[SSL_CIPHER_SUITE_MAX_LEN];
int64_t timestamp;
int parsing_errors;
------------------------------------------ */
/* [1] */ {"", 0, "", "127.0.0.1", "GET", "/", "1.0", 0, 0, 200, 11228, 0, "", "", 1602762231, 0},
/* [2] */ {"", 0, "", "::1", "GET", "/", "1.1", 0, 0, 200, 3477 , 0, "", "", 1662055482, 0},
/* [3] */ {"", 0, "", "209.202.252.202", "PUT", "/harness/networks/initiatives/engineer", "2.0", 0, 0, 403, 42410, 0, "", "", 1687272147, 0},
/* [4] */ {"", 0, "", "::1", "-", "", "", 0, 0, 408, 0, 0, "", "", 1689278456, 0},
/* [5] */ {"XPS-wsl.localdomain", 80, "", "::1", "GET", "/", "1.1", 0, 0, 200, 3477 , 0, "", "", 1656611969, 0},
/* [6] */ {"", 0, "", "127.0.0.1", "GET", "/", "1.0", 0, 0, 200, 11228, 0, "", "", 1656596631, 0},
/* [7] */ {"", 0, "", "180.89.137.89", "DELETE", "/b2c/viral/innovative/reintermediate", "1.0", 0, 0, 416, 99 , 0, "", "", 1685987168, 0},
/* [8] */ {"", 0, "", "212.113.230.101", "PATCH", "/strategic", "1.1", 0, 0, 404, 1217 , 0, "", "", 1687271389, 0},
/* [9] */ {"XPS-wsl.localdomain", 80, "", "127.0.0.1", "GET", "/", "1.0", 0, 0, 200, 11228, 0, "", "", 1656596631, 0},
/* [10] */ {"XPS-wsl.localdomain", 80, "", "2001:0db8:85a3:0000:0000:8a2e:0370:7334", "GET", "/", "1.0", 0, 0, 200, 11228, 0, "", "", 1656596631, 0},
/* [11] */ {"", 0, "", "47.29.201.179", "GET", "/?p=1", "2.0", 0, 0, 200, 5316 , 0, "", "", 1551359830, 0}
};
static int test_parse_web_log_line(){
int errors = 0;
fprintf(stderr, "%s():\n", __FUNCTION__);
Web_log_parser_config_t *wblp_conf = callocz(1, sizeof(Web_log_parser_config_t));
wblp_conf->delimiter = parse_config_delim;
wblp_conf->verify_parsed_logs = 1;
for(int i = 0; i < (int) (sizeof(parse_configs_to_test) / sizeof(parse_configs_to_test[0])); i++){
wblp_conf->num_fields = parse_config_expected_num_fields[i];
wblp_conf->fields = (web_log_line_field_t *) parse_config_expected[i];
Log_line_parsed_t log_line_parsed = (Log_line_parsed_t) {0};
parse_web_log_line( wblp_conf,
(char *) parse_configs_to_test[i],
strlen(parse_configs_to_test[i]),
&log_line_parsed);
if(strcmp(log_line_parsed_expected[i].vhost, log_line_parsed.vhost))
fprintf(stderr, "- Error (parsed vhost:%s != expected vhost:%s) for:\n%s",
log_line_parsed.vhost, log_line_parsed_expected[i].vhost, parse_configs_to_test[i]), ++errors;
if(log_line_parsed_expected[i].port != log_line_parsed.port)
fprintf(stderr, "- Error (parsed port:%d != expected port:%d) for:\n%s",
log_line_parsed.port, log_line_parsed_expected[i].port, parse_configs_to_test[i]), ++errors;
if(strcmp(log_line_parsed_expected[i].req_scheme, log_line_parsed.req_scheme))
fprintf(stderr, "- Error (parsed req_scheme:%s != expected req_scheme:%s) for:\n%s",
log_line_parsed.req_scheme, log_line_parsed_expected[i].req_scheme, parse_configs_to_test[i]), ++errors;
if(strcmp(log_line_parsed_expected[i].req_client, log_line_parsed.req_client))
fprintf(stderr, "- Error (parsed req_client:%s != expected req_client:%s) for:\n%s",
log_line_parsed.req_client, log_line_parsed_expected[i].req_client, parse_configs_to_test[i]), ++errors;
if(strcmp(log_line_parsed_expected[i].req_method, log_line_parsed.req_method))
fprintf(stderr, "- Error (parsed req_method:%s != expected req_method:%s) for:\n%s",
log_line_parsed.req_method, log_line_parsed_expected[i].req_method, parse_configs_to_test[i]), ++errors;
if(strcmp(log_line_parsed_expected[i].req_URL, log_line_parsed.req_URL))
fprintf(stderr, "- Error (parsed req_URL:%s != expected req_URL:%s) for:\n%s",
log_line_parsed.req_URL, log_line_parsed_expected[i].req_URL, parse_configs_to_test[i]), ++errors;
if(strcmp(log_line_parsed_expected[i].req_proto, log_line_parsed.req_proto))
fprintf(stderr, "- Error (parsed req_proto:%s != expected req_proto:%s) for:\n%s",
log_line_parsed.req_proto, log_line_parsed_expected[i].req_proto, parse_configs_to_test[i]), ++errors;
if(log_line_parsed_expected[i].req_size != log_line_parsed.req_size)
fprintf(stderr, "- Error (parsed req_size:%d != expected req_size:%d) for:\n%s",
log_line_parsed.req_size, log_line_parsed_expected[i].req_size, parse_configs_to_test[i]), ++errors;
if(log_line_parsed_expected[i].req_proc_time != log_line_parsed.req_proc_time)
fprintf(stderr, "- Error (parsed req_proc_time:%d != expected req_proc_time:%d) for:\n%s",
log_line_parsed.req_proc_time, log_line_parsed_expected[i].req_proc_time, parse_configs_to_test[i]), ++errors;
if(log_line_parsed_expected[i].resp_code != log_line_parsed.resp_code)
fprintf(stderr, "- Error (parsed resp_code:%d != expected resp_code:%d) for:\n%s",
log_line_parsed.resp_code, log_line_parsed_expected[i].resp_code, parse_configs_to_test[i]), ++errors;
if(log_line_parsed_expected[i].resp_size != log_line_parsed.resp_size)
fprintf(stderr, "- Error (parsed resp_size:%d != expected resp_size:%d) for:\n%s",
log_line_parsed.resp_size, log_line_parsed_expected[i].resp_size, parse_configs_to_test[i]), ++errors;
if(log_line_parsed_expected[i].ups_resp_time != log_line_parsed.ups_resp_time)
fprintf(stderr, "- Error (parsed ups_resp_time:%d != expected ups_resp_time:%d) for:\n%s",
log_line_parsed.ups_resp_time, log_line_parsed_expected[i].ups_resp_time, parse_configs_to_test[i]), ++errors;
if(strcmp(log_line_parsed_expected[i].ssl_proto, log_line_parsed.ssl_proto))
fprintf(stderr, "- Error (parsed ssl_proto:%s != expected ssl_proto:%s) for:\n%s",
log_line_parsed.ssl_proto, log_line_parsed_expected[i].ssl_proto, parse_configs_to_test[i]), ++errors;
if(strcmp(log_line_parsed_expected[i].ssl_cipher, log_line_parsed.ssl_cipher))
fprintf(stderr, "- Error (parsed ssl_cipher:%s != expected ssl_cipher:%s) for:\n%s",
log_line_parsed.ssl_cipher, log_line_parsed_expected[i].ssl_cipher, parse_configs_to_test[i]), ++errors;
if(log_line_parsed_expected[i].timestamp != log_line_parsed.timestamp)
fprintf(stderr, "- Error (parsed timestamp:%" PRId64 " != expected timestamp:%" PRId64 ") for:\n%s",
log_line_parsed.timestamp, log_line_parsed_expected[i].timestamp, parse_configs_to_test[i]), ++errors;
}
freez(wblp_conf);
fprintf(stderr, "%s\n", errors ? "FAIL" : "OK");
return errors ;
}
const char * const unsanitised_strings[] = { "[test]", "^test$", "{test}",
"(test)", "\\test\\", "test*+.?|", "test&£@"};
const char * const expected_sanitised_strings[] = { "\\[test\\]", "\\^test\\$", "\\{test\\}",
"\\(test\\)", "\\\\test\\\\", "test\\*\\+\\.\\?\\|", "test&£@"};
static int test_sanitise_string(){
int errors = 0;
fprintf(stderr, "%s():\n", __FUNCTION__);
for(int i = 0; i < (int) (sizeof(unsanitised_strings) / sizeof(unsanitised_strings[0])); i++){
char *sanitised = sanitise_string((char *) unsanitised_strings[i]);
if(strcmp(expected_sanitised_strings[i], sanitised)){
fprintf(stderr, "- Error during sanitise_string() for:%s\n", unsanitised_strings[i]);
++errors;
};
freez(sanitised);
}
fprintf(stderr, "%s\n", errors ? "FAIL" : "OK");
return errors;
}
char * const regex_src[] = {
"2022-11-07T11:28:27.427519600Z container create e0c3c6120c29beb393e4b92773c9aa60006747bddabd352b77bf0b4ad23747a7 (image=hello-world, name=xenodochial_lumiere)\n\
2022-11-07T11:28:27.932624500Z container start e0c3c6120c29beb393e4b92773c9aa60006747bddabd352b77bf0b4ad23747a7 (image=hello-world, name=xenodochial_lumiere)\n\
2022-11-07T11:28:27.971060500Z container die e0c3c6120c29beb393e4b92773c9aa60006747bddabd352b77bf0b4ad23747a7 (exitCode=0, image=hello-world, name=xenodochial_lumiere)",
"2022-11-07T11:28:27.427519600Z container create e0c3c6120c29beb393e4b92773c9aa60006747bddabd352b77bf0b4ad23747a7 (image=hello-world, name=xenodochial_lumiere)\n\
2022-11-07T11:28:27.932624500Z container start e0c3c6120c29beb393e4b92773c9aa60006747bddabd352b77bf0b4ad23747a7 (image=hello-world, name=xenodochial_lumiere)\n\
2022-11-07T11:28:27.971060500Z container die e0c3c6120c29beb393e4b92773c9aa60006747bddabd352b77bf0b4ad23747a7 (exitCode=0, image=hello-world, name=xenodochial_lumiere)",
"2022-11-07T11:28:27.427519600Z container create e0c3c6120c29beb393e4b92773c9aa60006747bddabd352b77bf0b4ad23747a7 (image=hello-world, name=xenodochial_lumiere)\n\
2022-11-07T11:28:27.932624500Z container start e0c3c6120c29beb393e4b92773c9aa60006747bddabd352b77bf0b4ad23747a7 (image=hello-world, name=xenodochial_lumiere)\n\
2022-11-07T11:28:27.971060500Z container die e0c3c6120c29beb393e4b92773c9aa60006747bddabd352b77bf0b4ad23747a7 (exitCode=0, image=hello-world, name=xenodochial_lumiere)",
"2022-11-07T20:06:36.919980700Z container create bd8d4a3338c3e9ab4ca555c6d869dc980f04f10ebdcd9284321c0afecbec1234 (image=hello-world, name=distracted_sinoussi)\n\
2022-11-07T20:06:36.927728700Z container attach bd8d4a3338c3e9ab4ca555c6d869dc980f04f10ebdcd9284321c0afecbec1234 (image=hello-world, name=distracted_sinoussi)\n\
2022-11-07T20:06:36.958906200Z network connect 178a1988c4173559c721d5e24970eef32aaca41e0e363ff9792c731f917683ed (container=bd8d4a3338c3e9ab4ca555c6d869dc980f04f10ebdcd9284321c0afecbec1234, name=bridge, type=bridge)\n\
2022-11-07T20:06:37.564947300Z container start bd8d4a3338c3e9ab4ca555c6d869dc980f04f10ebdcd9284321c0afecbec1234 (image=hello-world, name=distracted_sinoussi)\n\
2022-11-07T20:06:37.596428500Z container die bd8d4a3338c3e9ab4ca555c6d869dc980f04f10ebdcd9284321c0afecbec1234 (exitCode=0, image=hello-world, name=distracted_sinoussi)\n\
2022-11-07T20:06:38.134325100Z network disconnect 178a1988c4173559c721d5e24970eef32aaca41e0e363ff9792c731f917683ed (container=bd8d4a3338c3e9ab4ca555c6d869dc980f04f10ebdcd9284321c0afecbec1234, name=bridge, type=bridge)",
"Nov 7 21:54:24 X-PC sudo: john : TTY=pts/7 ; PWD=/home/john ; USER=root ; COMMAND=/usr/bin/docker run hello-world\n\
Nov 7 21:54:24 X-PC sudo: pam_unix(sudo:session): session opened for user root by john(uid=0)\n\
Nov 7 21:54:25 X-PC sudo: pam_unix(sudo:session): session closed for user root\n\
Nov 7 21:54:24 X-PC sudo: john : TTY=pts/7 ; PWD=/home/john ; USER=root ; COMMAND=/usr/bin/docker run hello-world\n"
};
const char * const regex_keyword[] = {
"start",
"CONTAINER",
"CONTAINER",
NULL,
NULL
};
const char * const regex_pat_str[] = {
NULL,
NULL,
NULL,
".*\\bcontainer\\b.*\\bhello-world\\b.*",
".*\\bsudo\\b.*\\bCOMMAND=/usr/bin/docker run\\b.*"
};
const int regex_ignore_case[] = {
1,
1,
0,
1,
1
};
const int regex_exp_matches[] = {
1,
3,
0,
4,
2
};
const char * const regex_exp_dst[] = {
"2022-11-07T11:28:27.932624500Z container start e0c3c6120c29beb393e4b92773c9aa60006747bddabd352b77bf0b4ad23747a7 (image=hello-world, name=xenodochial_lumiere)\n",
"2022-11-07T11:28:27.427519600Z container create e0c3c6120c29beb393e4b92773c9aa60006747bddabd352b77bf0b4ad23747a7 (image=hello-world, name=xenodochial_lumiere)\n\
2022-11-07T11:28:27.932624500Z container start e0c3c6120c29beb393e4b92773c9aa60006747bddabd352b77bf0b4ad23747a7 (image=hello-world, name=xenodochial_lumiere)\n\
2022-11-07T11:28:27.971060500Z container die e0c3c6120c29beb393e4b92773c9aa60006747bddabd352b77bf0b4ad23747a7 (exitCode=0, image=hello-world, name=xenodochial_lumiere)",
"",
"2022-11-07T20:06:36.919980700Z container create bd8d4a3338c3e9ab4ca555c6d869dc980f04f10ebdcd9284321c0afecbec1234 (image=hello-world, name=distracted_sinoussi)\n\
2022-11-07T20:06:36.927728700Z container attach bd8d4a3338c3e9ab4ca555c6d869dc980f04f10ebdcd9284321c0afecbec1234 (image=hello-world, name=distracted_sinoussi)\n\
2022-11-07T20:06:37.564947300Z container start bd8d4a3338c3e9ab4ca555c6d869dc980f04f10ebdcd9284321c0afecbec1234 (image=hello-world, name=distracted_sinoussi)\n\
2022-11-07T20:06:37.596428500Z container die bd8d4a3338c3e9ab4ca555c6d869dc980f04f10ebdcd9284321c0afecbec1234 (exitCode=0, image=hello-world, name=distracted_sinoussi)",
"Nov 7 21:54:24 X-PC sudo: john : TTY=pts/7 ; PWD=/home/john ; USER=root ; COMMAND=/usr/bin/docker run hello-world\n\
Nov 7 21:54:24 X-PC sudo: john : TTY=pts/7 ; PWD=/home/john ; USER=root ; COMMAND=/usr/bin/docker run hello-world\n"
};
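/* Note (added): for each test case above, exactly one of regex_keyword[i] or regex_pat_str[i]
 * is non-NULL, selecting between plain keyword search and extended-regex matching in
 * search_keyword(); regex_ignore_case[i] toggles case-insensitive matching, while
 * regex_exp_matches[i] and regex_exp_dst[i] hold the expected number of matches and the
 * expected concatenation of matching lines. */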
static int test_search_keyword(){
int errors = 0;
fprintf(stderr, "%s():\n", __FUNCTION__);
for(int i = 0; i < (int) (sizeof(regex_src) / sizeof(regex_src[0])); i++){
regex_t *regex_c = regex_pat_str[i] ? mallocz(sizeof(regex_t)) : NULL;
if(regex_c && regcomp( regex_c, regex_pat_str[i],
regex_ignore_case[i] ? REG_EXTENDED | REG_NEWLINE | REG_ICASE : REG_EXTENDED | REG_NEWLINE))
fatal("Could not compile regular expression:%s", regex_pat_str[i]);
size_t regex_src_sz = strlen(regex_src[i]) + 1;
char *res = callocz(1 , regex_src_sz);
size_t res_sz;
int matches = search_keyword( regex_src[i], regex_src_sz,
res, &res_sz,
regex_keyword[i], regex_c,
regex_ignore_case[i]);
// fprintf(stderr, "\nMatches:%d\nResults:\n%.*s\n", matches, (int) res_sz, res);
if(regex_exp_matches[i] != matches){
fprintf(stderr, "- Error in matches returned from search_keyword() for: regex_src[%d]\n", i);
++errors;
};
if(strncmp(regex_exp_dst[i], res, res_sz - 1)){
fprintf(stderr, "- Error in strncmp() of results from search_keyword() for: regex_src[%d]\n", i);
++errors;
}
if(regex_c) freez(regex_c);
freez(res);
}
fprintf(stderr, "%s\n", errors ? "FAIL" : "OK");
return errors;
}
static Flb_socket_config_t *p_forward_in_config = NULL;
static flb_srvc_config_t flb_srvc_config = {
.flush = FLB_FLUSH_DEFAULT,
.http_listen = FLB_HTTP_LISTEN_DEFAULT,
.http_port = FLB_HTTP_PORT_DEFAULT,
.http_server = FLB_HTTP_SERVER_DEFAULT,
.log_path = "NULL",
.log_level = FLB_LOG_LEVEL_DEFAULT,
.coro_stack_size = FLB_CORO_STACK_SIZE_DEFAULT
};
static flb_srvc_config_t *p_flb_srvc_config = NULL;
static int test_logsmanag_config_funcs(){
int errors = 0, rc;
fprintf(stderr, "%s():\n", __FUNCTION__);
fprintf(stderr, "Testing get_X_dir() functions...\n");
if(NULL == get_user_config_dir()){
fprintf(stderr, "- Error, get_user_config_dir() returns NULL.\n");
++errors;
}
if(NULL == get_stock_config_dir()){
fprintf(stderr, "- Error, get_stock_config_dir() returns NULL.\n");
++errors;
}
if(NULL == get_log_dir()){
fprintf(stderr, "- Error, get_log_dir() returns NULL.\n");
++errors;
}
if(NULL == get_cache_dir()){
fprintf(stderr, "- Error, get_cache_dir() returns NULL.\n");
++errors;
}
fprintf(stderr, "Testing logs_manag_config_load() when p_flb_srvc_config is NULL...\n");
SUPRESS_STDERR();
rc = logs_manag_config_load(p_flb_srvc_config, &p_forward_in_config, 1);
UNSUPRESS_STDERR();
if(LOGS_MANAG_CONFIG_LOAD_ERROR_P_FLB_SRVC_NULL != rc){
fprintf(stderr, "- Error, logs_manag_config_load() returns %d.\n", rc);
++errors;
}
p_flb_srvc_config = &flb_srvc_config;
fprintf(stderr, "Testing logs_manag_config_load() can load stock config...\n");
SUPRESS_STDERR();
rc = logs_manag_config_load(&flb_srvc_config, &p_forward_in_config, 1);
UNSUPRESS_STDERR();
if( LOGS_MANAG_CONFIG_LOAD_ERROR_OK != rc){
fprintf(stderr, "- Error, logs_manag_config_load() returns %d.\n", rc);
++errors;
}
fprintf(stderr, "%s\n", errors ? "FAIL" : "OK");
return errors;
}
uv_loop_t *main_loop;
static void setup_p_file_infos_arr_and_main_loop() {
fprintf(stderr, "%s():\n", __FUNCTION__);
p_file_infos_arr = callocz(1, sizeof(struct File_infos_arr));
main_loop = mallocz(sizeof(uv_loop_t));
if(uv_loop_init(main_loop))
exit(EXIT_FAILURE);
fprintf(stderr, "OK\n");
}
static int test_flb_init(){
int errors = 0, rc;
fprintf(stderr, "%s():\n", __FUNCTION__);
fprintf(stderr, "Testing flb_init() with wrong stock_config_dir...\n");
SUPRESS_STDERR();
rc = flb_init(flb_srvc_config, "/tmp");
UNSUPRESS_STDERR();
if(!rc){
fprintf(stderr, "- Error, flb_init() should fail but it returns %d.\n", rc);
++errors;
}
fprintf(stderr, "Testing flb_init() with correct stock_config_dir...\n");
rc = flb_init(flb_srvc_config, get_stock_config_dir());
if(rc){
fprintf(stderr, "- Error, flb_init() should fail but it returns %d.\n", rc);
++errors;
}
fprintf(stderr, "%s\n", errors ? "FAIL" : "OK");
return errors;
}
static int unlink_cb(const char *fpath, const struct stat *sb, int typeflag, struct FTW *ftwbuf){
UNUSED(sb);
UNUSED(typeflag);
UNUSED(ftwbuf);
return remove(fpath);
}
static int test_db_init(){
int errors = 0;
fprintf(stderr, "%s():\n", __FUNCTION__);
extern netdata_mutex_t stdout_mut;
SUPRESS_STDOUT();
SUPRESS_STDERR();
config_file_load(main_loop, p_forward_in_config, &flb_srvc_config, &stdout_mut);
UNSUPRESS_STDOUT();
UNSUPRESS_STDERR();
fprintf(stderr, "Testing db_init() with main_db_dir == NULL...\n");
SUPRESS_STDERR();
db_set_main_dir(NULL);
int rc = db_init();
UNSUPRESS_STDERR();
if(!rc){
fprintf(stderr, "- Error, db_init() returns %d even though db_set_main_dir(NULL); was called.\n", rc);
++errors;
}
char tmpdir[] = "/tmp/tmpdir.XXXXXX";
char *main_db_dir = mkdtemp(tmpdir);
if (main_db_dir == NULL){
fprintf(stderr, "mkdtemp() failed with error %s\n", strerror(errno));
exit(EXIT_FAILURE);
}
fprintf(stderr, "Testing db_init() with main_db_dir == %s...\n", main_db_dir);
SUPRESS_STDERR();
db_set_main_dir(main_db_dir);
rc = db_init();
UNSUPRESS_STDERR();
if(rc){
fprintf(stderr, "- Error, db_init() returns %d.\n", rc);
++errors;
}
fprintf(stderr, "Cleaning up %s...\n", main_db_dir);
if(nftw(main_db_dir, unlink_cb, 64, FTW_DEPTH | FTW_PHYS) == -1){
fprintf(stderr, "Error while remove path:%s. Will exit...\n", strerror(errno));
exit(EXIT_FAILURE);
}
fprintf(stderr, "%s\n", errors ? "FAIL" : "OK");
return errors;
}
int logs_management_unittest(void){
int errors = 0;
fprintf(stderr, "\n\n======================================================\n");
fprintf(stderr, " ** Starting logs management tests **\n");
fprintf(stderr, "======================================================\n");
fprintf(stderr, "------------------------------------------------------\n");
errors += test_compression_decompression();
fprintf(stderr, "------------------------------------------------------\n");
errors += test_read_last_line();
fprintf(stderr, "------------------------------------------------------\n");
setup_parse_config_expected_num_fields();
fprintf(stderr, "------------------------------------------------------\n");
errors += test_count_fields();
fprintf(stderr, "------------------------------------------------------\n");
errors += test_auto_detect_web_log_parser_config();
fprintf(stderr, "------------------------------------------------------\n");
errors += test_parse_web_log_line();
fprintf(stderr, "------------------------------------------------------\n");
errors += test_sanitise_string();
fprintf(stderr, "------------------------------------------------------\n");
errors += test_search_keyword();
fprintf(stderr, "------------------------------------------------------\n");
errors += test_logsmanag_config_funcs();
fprintf(stderr, "------------------------------------------------------\n");
setup_p_file_infos_arr_and_main_loop();
fprintf(stderr, "------------------------------------------------------\n");
errors += test_flb_init();
fprintf(stderr, "------------------------------------------------------\n");
errors += test_db_init();
fprintf(stderr, "------------------------------------------------------\n");
fprintf(stderr, "[%s] Total errors: %d\n", errors ? "FAILED" : "SUCCEEDED", errors);
fprintf(stderr, "======================================================\n");
fprintf(stderr, " ** Finished logs management tests **\n");
fprintf(stderr, "======================================================\n");
fflush(stderr);
return errors;
}
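For reference only, the suite returns its accumulated error count, so a caller can gate directly on the return value. The stand-alone driver below is a hypothetical sketch (not part of this PR) and assumes it is compiled within the Netdata source tree, since the tests rely on agent helpers such as mallocz()/callocz():

#include <stdio.h>
#include "unit_test.h"   /* declares logs_management_unittest(), see the header below */

int main(void) {
    /* Run the whole logs-management suite; non-zero means at least one test failed. */
    int errors = logs_management_unittest();
    fprintf(stderr, "logs management unit tests finished with %d error(s)\n", errors);
    return errors ? 1 : 0;
}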

View File

@ -0,0 +1,12 @@
// SPDX-License-Identifier: GPL-3.0-or-later
/** @file unit_test.h
* @brief This is the header for unit_test.c
*/
#ifndef LOGS_MANAGEMENT_UNIT_TEST_H_
#define LOGS_MANAGEMENT_UNIT_TEST_H_
int logs_management_unittest(void);
#endif // LOGS_MANAGEMENT_UNIT_TEST_H_

View File

@ -136,12 +136,12 @@ ACLK="${ACLK}"
# keep a log of this command
{
printf "\n# "
date
printf 'CFLAGS="%s" ' "${CFLAGS}"
printf 'LDFLAGS="%s" ' "${LDFLAGS}"
printf "%s" "${PROGRAM}" "${@}"
printf "\n"
printf "\n# "
date
printf 'CFLAGS="%s" ' "${CFLAGS}"
printf 'LDFLAGS="%s" ' "${LDFLAGS}"
printf "%s" "${PROGRAM}" "${@}"
printf "\n"
} >> netdata-installer.log
REINSTALL_OPTIONS="$(
@ -240,6 +240,8 @@ USAGE: ${PROGRAM} [options]
have a broken pkg-config. Use this option to proceed without checking pkg-config.
--disable-telemetry Opt-out from our anonymous telemetry program. (DISABLE_TELEMETRY=1)
--skip-available-ram-check Skip checking the amount of RAM the system has and pretend it has enough to build safely.
--disable-logsmanagement Disable the logs management plugin. Default: autodetect.
--enable-logsmanagement-tests Enable the logs management tests. Default: disabled.
Netdata will by default be compiled with gcc optimization -O2
If you need to pass different CFLAGS, use something like this:
@ -338,6 +340,11 @@ while [ -n "${1}" ]; do
NETDATA_CONFIGURE_OPTIONS="$(echo "${NETDATA_CONFIGURE_OPTIONS%--disable-ml)}" | sed 's/$/ --disable-ml/g')"
NETDATA_ENABLE_ML=0
;;
"--disable-logsmanagement")
NETDATA_CONFIGURE_OPTIONS="$(echo "${NETDATA_CONFIGURE_OPTIONS%--disable-logsmanagement)}" | sed 's/$/ --disable-logsmanagement/g')"
NETDATA_DISABLE_LOGS_MANAGEMENT=1
;;
"--enable-logsmanagement-tests") NETDATA_CONFIGURE_OPTIONS="$(echo "${NETDATA_CONFIGURE_OPTIONS%--enable-logsmanagement-tests)}" | sed 's/$/ --enable-logsmanagement-tests/g')" ;;
"--enable-gtests")
NETDATA_CONFIGURE_OPTIONS="$(echo "${NETDATA_CONFIGURE_OPTIONS%--enable-gtests)}" | sed 's/$/ --enable-gtests/g')"
NETDATA_ENABLE_GTESTS=1
@ -963,6 +970,85 @@ bundle_ebpf_co_re() {
bundle_ebpf_co_re
# -----------------------------------------------------------------------------
build_fluentbit() {
env_cmd="env CFLAGS='-w' CXXFLAGS='-w' LDFLAGS="
if [ -z "${DONT_SCRUB_CFLAGS_EVEN_THOUGH_IT_MAY_BREAK_THINGS}" ]; then
env_cmd="env CFLAGS='-fPIC -pipe -w' CXXFLAGS='-fPIC -pipe -w' LDFLAGS="
fi
mkdir -p fluent-bit/build || return 1
cd fluent-bit/build > /dev/null || return 1
rm CMakeCache.txt > /dev/null 2>&1
if ! run eval "${env_cmd} $1 -C ../../logsmanagement/fluent_bit_build/config.cmake -B./ -S../"; then
cd - > /dev/null || return 1
rm -rf fluent-bit/build > /dev/null 2>&1
return 1
fi
if ! run eval "${env_cmd} ${make} ${MAKEOPTS}"; then
cd - > /dev/null || return 1
rm -rf fluent-bit/build > /dev/null 2>&1
return 1
fi
cd - > /dev/null || return 1
}
bundle_fluentbit() {
progress "Prepare Fluent-Bit"
if [ -n "${NETDATA_DISABLE_LOGS_MANAGEMENT}" ]; then
warning "You have explicitly requested to disable Netdata Logs Management support, Fluent-Bit build is skipped."
return 0
fi
if [ ! -d "fluent-bit" ]; then
run_failed "Missing submodule Fluent-Bit. The install process will continue, but Netdata Logs Management support will be disabled."
return 0
fi
if [ "$(command -v cmake)" ] && [ "$(cmake --version | head -1 | cut -d ' ' -f 3 | cut -c-1)" -ge 3 ]; then
cmake="cmake"
elif [ "$(command -v cmake3)" ]; then
cmake="cmake3"
else
run_failed "Could not find a compatible CMake version (>= 3.0), which is required to build Fluent-Bit. The install process will continue, but Netdata Logs Management support will be disabled."
return 0
fi
patch -N -p1 fluent-bit/CMakeLists.txt -i logsmanagement/fluent_bit_build/CMakeLists.patch
patch -N -p1 fluent-bit/src/flb_log.c -i logsmanagement/fluent_bit_build/flb-log-fmt.patch
# If musl is used, we need to patch chunkio, providing fts has been previously installed.
libc="$(detect_libc)"
if [ "${libc}" = "musl" ]; then
patch -N -p1 fluent-bit/lib/chunkio/src/CMakeLists.txt -i logsmanagement/fluent_bit_build/chunkio-static-lib-fts.patch
patch -N -p1 fluent-bit/cmake/luajit.cmake -i logsmanagement/fluent_bit_build/exclude-luajit.patch
patch -N -p1 fluent-bit/src/flb_network.c -i logsmanagement/fluent_bit_build/xsi-strerror.patch
fi
[ -n "${GITHUB_ACTIONS}" ] && echo "::group::Bundling Fluent-Bit."
if build_fluentbit "$cmake"; then
# If Fluent-Bit built with inotify support, use it.
if [ "$(grep -o '^FLB_HAVE_INOTIFY:INTERNAL=.*' fluent-bit/build/CMakeCache.txt | cut -d '=' -f 2)" ]; then
CFLAGS="${CFLAGS} -DFLB_HAVE_INOTIFY"
fi
FLUENT_BIT_BUILD_SUCCESS=1
run_ok "Fluent-Bit built successfully."
else
run_failed "Failed to build Fluent-Bit, Netdata Logs Management support will be disabled in this build."
fi
[ -n "${GITHUB_ACTIONS}" ] && echo "::endgroup::"
}
bundle_fluentbit
# -----------------------------------------------------------------------------
# If we have the dashboard switching logic, make sure we're on the classic
# dashboard during the install (updates don't work correctly otherwise).
@ -1267,6 +1353,21 @@ if [ "$(id -u)" -eq 0 ]; then
fi
fi
if [ -f "${NETDATA_PREFIX}/usr/libexec/netdata/plugins.d/logs-management.plugin" ]; then
run chown "root:${NETDATA_GROUP}" "${NETDATA_PREFIX}/usr/libexec/netdata/plugins.d/logs-management.plugin"
capabilities=0
if ! iscontainer && command -v setcap 1> /dev/null 2>&1; then
run chmod 0750 "${NETDATA_PREFIX}/usr/libexec/netdata/plugins.d/logs-management.plugin"
if run setcap cap_dac_read_search,cap_syslog+ep "${NETDATA_PREFIX}/usr/libexec/netdata/plugins.d/logs-management.plugin"; then
capabilities=1
fi
fi
if [ $capabilities -eq 0 ]; then
run chmod 4750 "${NETDATA_PREFIX}/usr/libexec/netdata/plugins.d/logs-management.plugin"
fi
fi
if [ -f "${NETDATA_PREFIX}/usr/libexec/netdata/plugins.d/perf.plugin" ]; then
run chown "root:${NETDATA_GROUP}" "${NETDATA_PREFIX}/usr/libexec/netdata/plugins.d/perf.plugin"
capabilities=0
@ -1637,6 +1738,43 @@ install_ebpf() {
progress "eBPF Kernel Collector"
install_ebpf
should_install_fluentbit() {
if [ -n "${NETDATA_DISABLE_LOGS_MANAGEMENT}" ]; then
warning "netdata-installer.sh run with --disable-logsmanagement, Fluent-Bit installation is skipped."
return 1
elif [ "${FLUENT_BIT_BUILD_SUCCESS:=0}" -eq 0 ]; then
run_failed "Fluent-Bit was not built successfully, Netdata Logs Management support will be disabled in this build."
return 1
elif [ ! -f fluent-bit/build/lib/libfluent-bit.so ]; then
run_failed "libfluent-bit.so is missing, Netdata Logs Management support will be disabled in this build."
return 1
fi
return 0
}
install_fluentbit() {
if ! should_install_fluentbit; then
return 0
fi
[ -n "${GITHUB_ACTIONS}" ] && echo "::group::Installing Fluent-Bit."
run chown "root:${NETDATA_GROUP}" fluent-bit/build/lib
run chmod 0644 fluent-bit/build/lib/libfluent-bit.so
run cp -a -v fluent-bit/build/lib/libfluent-bit.so "${NETDATA_PREFIX}"/usr/lib/netdata
# Fix paths in logsmanagement.d.conf
run sed -i -e "s|# db dir =.*|db dir = ${NETDATA_CACHE_DIR}\/logs_management_db|g" "${NETDATA_STOCK_CONFIG_DIR}"/logsmanagement.d.conf
run sed -i -e "s|# log file =.*|log file = ${NETDATA_LOG_DIR}\/fluentbit.log|g" "${NETDATA_STOCK_CONFIG_DIR}"/logsmanagement.d.conf
[ -n "${GITHUB_ACTIONS}" ] && echo "::endgroup::"
}
progress "Installing Fluent-Bit plugin"
install_fluentbit
# -----------------------------------------------------------------------------
progress "Telemetry configuration"

View File

@ -171,6 +171,7 @@ Suggests: %{name}-plugin-freeipmi = %{version}
%if 0%{?centos_ver} != 6 && 0%{?centos_ver} != 7 && 0%{?amazon_linux} != 2
Suggests: %{name}-plugin-cups = %{version}
Recommends: %{name}-plugin-systemd-journal = %{version}
Recommends: %{name}-plugin-logs-management = %{version}
%endif
@ -225,6 +226,11 @@ BuildRequires: systemd-devel
BuildRequires: snappy-devel
# end - prometheus remote write dependencies
# logs-management dependencies
BuildRequires: bison
BuildRequires: flex
# end - logs-management dependencies
# #####################################################################
# End of dependency management configuration
# #####################################################################
@ -347,6 +353,10 @@ install -m 0750 -p debugfs.plugin "${RPM_BUILD_ROOT}%{_libexecdir}/%{name}/plugi
# Install systemd-journal.plugin
install -m 4750 -p systemd-journal.plugin "${RPM_BUILD_ROOT}%{_libexecdir}/%{name}/plugins.d/systemd-journal.plugin"
# ###########################################################
# Install logs-management.plugin
#install -m 4750 -p logs-management.plugin "${RPM_BUILD_ROOT}%{_libexecdir}/%{name}/plugins.d/logs-management.plugin"
# ###########################################################
# Install perf.plugin
install -m 4750 -p perf.plugin "${RPM_BUILD_ROOT}%{_libexecdir}/%{name}/plugins.d/perf.plugin"
@ -661,6 +671,11 @@ rm -rf "${RPM_BUILD_ROOT}"
# systemd-journal belongs to a different sub-package
%exclude %{_libexecdir}/%{name}/plugins.d/systemd-journal.plugin
# logs management belongs to a different sub-package
%exclude %{_libexecdir}/%{name}/plugins.d/logs-management.plugin
%exclude %{_libdir}/%{name}/conf.d/logsmanagement.d.conf
%exclude %{_libdir}/%{name}/conf.d/logsmanagement.d
# CUPS belongs to a different sub package
%if 0%{?centos_ver} != 6 && 0%{?centos_ver} != 7
%exclude %{_libexecdir}/%{name}/plugins.d/cups.plugin
@ -979,7 +994,33 @@ fi
# CAP_DAC_READ_SEARCH required for data collection.
%caps(cap_dac_read_search=ep) %attr(0750,root,netdata) %{_libexecdir}/%{name}/plugins.d/systemd-journal.plugin
%package plugin-logs-management
Summary: The logs-management plugin for the Netdata Agent
Group: Applications/System
Requires: %{name} = %{version}
Conflicts: %{name} < %{version}
%description plugin-logs-management
This plugin allows the Netdata Agent to collect logs from the system
and parse them to extract metrics.
%pre plugin-logs-management
if ! getent group %{name} > /dev/null; then
groupadd --system %{name}
fi
%files plugin-logs-management
%defattr(0644,root,netdata,0755)
%{_libdir}/%{name}/conf.d/logsmanagement.d.conf
%{_libdir}/%{name}/conf.d/logsmanagement.d
%defattr(0750,root,netdata,0750)
# CAP_DAC_READ_SEARCH and CAP_SYSLOG needed for data collection.
%caps(cap_dac_read_search,cap_syslog=ep) %attr(0750,root,netdata) %{_libexecdir}/%{name}/plugins.d/logs-management.plugin
%changelog
* Thu Oct 26 2023 Austin Hemmelgarn <austin@netdata.cloud> 0.0.0-24
- Add package for logs-management plugin
* Mon Aug 28 2023 Konstantin Shalygin <k0ste@k0ste.ru> 0.0.0-23
- Build go.d.plugin natively for CentOS Stream distro
* Mon Aug 21 2023 Austin Hemmelgarn <austin@netdata.cloud> 0.0.0-22

View File

@ -91,6 +91,7 @@ RUN mkdir -p /opt/src /var/log/netdata && \
ln -sf /dev/stderr /var/log/netdata/error.log && \
ln -sf /dev/stderr /var/log/netdata/daemon.log && \
ln -sf /dev/stdout /var/log/netdata/collector.log && \
ln -sf /dev/stdout /var/log/netdata/fluentbit.log && \
ln -sf /dev/stdout /var/log/netdata/health.log && \
addgroup -g ${NETDATA_GID} -S "${DOCKER_GRP}" && \
adduser -S -H -s /usr/sbin/nologin -u ${NETDATA_GID} -h /etc/netdata -G "${DOCKER_GRP}" "${DOCKER_USR}"

View File

@ -1,4 +1,4 @@
#!/usr/bin/env bash
#!/bin/sh
# Package tree used for installing netdata on distribution:
# << Alpine: [3.12] [3.13] [3.14] [3.15] [edge] >>
@ -31,6 +31,9 @@ package_tree="
util-linux-dev
libmnl-dev
json-c-dev
musl-fts-dev
bison
flex
yaml-dev
"
@ -67,7 +70,8 @@ check_flags() {
done
if [ "${DONT_WAIT}" -eq 0 ] && [ "${NON_INTERACTIVE}" -eq 0 ]; then
read -r -p "Press ENTER to run it > " || exit 1
printf "Press ENTER to run it > "
read -r || exit 1
fi
}
@ -76,8 +80,18 @@ check_flags ${@}
packages_to_install=
handle_old_alpine() {
version="$(grep VERSION_ID /etc/os-release | cut -f 2 -d '=')"
major="$(echo "${version}" | cut -f 1 -d '.')"
minor="$(echo "${version}" | cut -f 2 -d '.')"
if [ "${major}" -le 3 ] && [ "${minor}" -le 16 ]; then
package_tree="$(echo "${package_tree}" | sed 's/musl-fts-dev/fts-dev/')"
fi
}
for package in $package_tree; do
if apk -e info "$package" &> /dev/null; then
if apk -e info "$package" > /dev/null 2>&1 ; then
echo "Package '${package}' is installed"
else
echo "Package '${package}' is NOT installed"
@ -85,7 +99,7 @@ for package in $package_tree; do
fi
done
if [[ -z $packages_to_install ]]; then
if [ -z "${packages_to_install}" ]; then
echo "All required packages are already installed. Skipping .."
else
echo "packages_to_install:" "$packages_to_install"

View File

@ -32,6 +32,8 @@ declare -a package_tree=(
gzip
python3
binutils
bison
flex
)
usage() {

View File

@ -1,6 +1,6 @@
#!/usr/bin/env bash
# Package tree used for installing netdata on distribution:
# << CentOS: [7] [8] >>
# << CentOS: [7] [8] [9] >>
set -e
@ -8,9 +8,12 @@ declare -a package_tree=(
autoconf
autoconf-archive
automake
bison
cmake
cmake3
curl
elfutils-libelf-devel
flex
findutils
gcc
gcc-c++

View File

@ -12,8 +12,10 @@ package_tree="
autoconf-archive
autogen
automake
bison
cmake
curl
flex
g++
gcc
git

View File

@ -1,6 +1,6 @@
#!/usr/bin/env bash
# Package tree used for installing netdata on distribution:
# << Fedora: [24->35] >>
# << Fedora: [24->38] >>
set -e
@ -28,10 +28,12 @@ declare -a package_tree=(
autoconf-archive
autogen
automake
bison
cmake
curl
elfutils-libelf-devel
findutils
flex
gcc
gcc-c++
git

View File

@ -26,6 +26,8 @@ package_tree="
liblz4
openssl
python3
bison
flex
"
prompt() {

View File

@ -31,6 +31,8 @@ package_tree="
virtual/libelf
dev-lang/python
dev-libs/libuv
sys-devel/bison
sys-devel/flex
"
usage() {
cat << EOF

View File

@ -12,9 +12,11 @@ declare -a package_tree=(
autoconf-archive
autogen
automake
bison
cmake
curl
elfutils-libelf-devel
flex
gcc
gcc-c++
git

View File

@ -14,8 +14,10 @@ declare -a package_tree=(
autoconf-archive
autogen
automake
bison
cmake
curl
flex
gcc
gcc-c++
git

View File

@ -12,10 +12,12 @@ declare -a package_tree=(
autoconf-archive
autogen
automake
bison
cmake
curl
elfutils-libelf-devel
findutils
flex
gcc
gcc-c++
git

View File

@ -12,8 +12,10 @@ package_tree="
autoconf-archive
autogen
automake
bison
cmake
curl
flex
g++
gcc
git

View File

@ -672,6 +672,30 @@ declare -A pkg_cmake=(
['default']="cmake"
)
# bison and flex are required by Fluent-Bit
declare -A pkg_bison=(
['default']="bison"
)
declare -A pkg_flex=(
['default']="flex"
)
# fts-dev is required by Fluent-Bit on Alpine
declare -A pkg_fts_dev=(
['default']="NOTREQUIRED"
['alpine']="musl-fts-dev"
['alpine-3.16.7']="fts-dev"
['alpine-3.15.10']="fts-dev"
['alpine-3.14.10']="fts-dev"
)
# cmake3 is required by Fluent-Bit on CentOS 7
declare -A pkg_cmake3=(
['default']="NOTREQUIRED"
['centos-7']="cmake3"
)
declare -A pkg_json_c_dev=(
['alpine']="json-c-dev"
['arch']="json-c"
@ -1222,6 +1246,7 @@ packages() {
require_cmd automake || suitable_package automake
require_cmd pkg-config || suitable_package pkg-config
require_cmd cmake || suitable_package cmake
require_cmd cmake3 || suitable_package cmake3
# -------------------------------------------------------------------------
# debugging tools for development
@ -1244,6 +1269,8 @@ packages() {
require_cmd tar || suitable_package tar
require_cmd curl || suitable_package curl
require_cmd gzip || suitable_package gzip
require_cmd bison || suitable_package bison
require_cmd flex || suitable_package flex
fi
# -------------------------------------------------------------------------
@ -1275,6 +1302,7 @@ packages() {
suitable_package libuuid-dev
suitable_package libmnl-dev
suitable_package json-c-dev
suitable_package fts-dev
suitable_package libyaml-dev
suitable_package libsystemd-dev
fi

View File

@ -62,6 +62,8 @@ CapabilityBoundingSet=CAP_NET_ADMIN
CapabilityBoundingSet=CAP_SETGID CAP_SETUID
# is required to change file ownership
CapabilityBoundingSet=CAP_CHOWN
# is required for logs-management.plugin
CapabilityBoundingSet=CAP_SYSLOG
# Sandboxing
ProtectSystem=full

View File

@ -26,7 +26,8 @@ install_netdata() {
--dont-start-it \
--enable-plugin-nfacct \
--enable-plugin-freeipmi \
--disable-lto
--disable-lto \
--enable-logsmanagement-tests
}
c_unit_tests() {

View File

@ -780,6 +780,49 @@ netdataDashboard.menu = {
title: 'Consul',
icon: '<i class="fas fa-circle-notch"></i>',
info: 'Consul performance and health metrics. For details, see <a href="https://developer.hashicorp.com/consul/docs/agent/telemetry#key-metrics" target="_blank">Key Metrics</a>.'
},
'kmsg Logs': {
title: 'kmsg Logs',
icon: '<i class="fas fa-book"></i>',
info: 'Metrics extracted from log messages collected from the Kernel log buffer. For details, see <a href="https://docs.fluentbit.io/manual/pipeline/inputs/kernel-logs" target="_blank">the Fluent Bit Kernel Logs plugin</a>.'
},
'Systemd Logs': {
title: 'Systemd Logs',
icon: '<i class="fas fa-book"></i>',
info: 'Metrics extracted from log messages collected from the Journald daemon. For details, see <a href="https://docs.fluentbit.io/manual/pipeline/inputs/systemd" target="_blank">the Fluent Bit Systemd plugin</a>.'
},
'docker_events_logs': {
title: 'Docker Events Logs',
icon: '<i class="fas fa-book"></i>',
info: 'Docker server events metrics. For details, see <a href="https://docs.fluentbit.io/manual/pipeline/inputs/docker-events" target="_blank">the Fluent Bit Docker Events plugin</a> ' +
'and <a href="https://docs.docker.com/engine/reference/commandline/events/" target="_blank">the official Docker Events documentation</a>.'
},
'Apache access.log': {
title: 'Apache access.log',
icon: '<i class="fas fa-book"></i>',
info: 'Performance metrics extracted from the Apache server <b>access.log</b>. If Go plugins are enabled, see also <a href="#menu_web_log_apache" target="_blank">the web log apache collector</a>.'
},
'Nginx access.log': {
title: 'Nginx access.log',
icon: '<i class="fas fa-book"></i>',
info: 'Performance metrics extracted from the Nginx server <b>access.log</b>. If Go plugins are enabled, see also <a href="#menu_web_log_nginx" target="_blank">the web log nginx collector</a>.'
},
'Netdata error.log': {
title: 'Netdata error.log',
icon: '<i class="fas fa-book"></i>',
info: 'Metrics extracted from Netdata\'s error.log.'
},
'Netdata fluentbit.log': {
title: 'Netdata fluentbit.log',
icon: '<i class="fas fa-book"></i>',
info: 'Metrics extracted from Netdata\'s embedded Fluent Bit logs.'
}
};
@ -8208,6 +8251,14 @@ netdataDashboard.context = {
'nvme.device_thermal_mgmt_temp2_time': {
info: 'The amount of time the controller has entered lower active power states or performed vendor-specific thermal management actions, <b>regardless of the impact on performance (e.g., heavy throttling)</b>, to attempt to lower the Combined Temperature due to the host-managed thermal management feature.'
},
// ------------------------------------------------------------------------
// Logs Management
'docker_events_logs.events_type': {
info: 'The Docker object type of the event. See <a href="https://docs.docker.com/engine/reference/commandline/events/#description" target="_blank">here</a> for more information.'
},
// ------------------------------------------------------------------------
};