Introduce log_destination=jsonlog

"jsonlog" is a new value that can be added to log_destination to provide
logs in the JSON format, with its output written to a file, making it
the third type of destination of this kind, after "stderr" and
"csvlog".  The format is convenient to feed logs to other applications.
There is also a plugin external to core that provides this feature using
the hook in elog.c, but it has to overwrite the output of "stderr" to
work, so it is not possible to use both at the same time.  The
files generated by this log format are suffixed with ".json", and use
the same rotation policies as the other two formats depending on the
backend configuration.

This takes advantage of the refactoring work done previously in ac7c807,
bed6ed3, 8b76f89 and 2d77d83 for the backend parts, and 72b76f7 for the
TAP tests, making the addition of any new file-based format rather
straightforward.

The documentation is updated to list all the keys and the values that
can exist in this new format.  pg_current_logfile() also required a
refresh for the new option.

Author: Sehrope Sarkuni, Michael Paquier
Reviewed-by: Nathan Bossart, Justin Pryzby
Discussion: https://postgr.es/m/CAH7T-aqswBM6JWe4pDehi1uOiufqe06DJWaU5=X7dDLyqUExHg@mail.gmail.com
Michael Paquier 2022-01-17 10:16:53 +09:00
parent 6478896675
commit dc686681e0
12 changed files with 660 additions and 48 deletions

View File

@ -5931,7 +5931,8 @@ SELECT * FROM parent WHERE key = 2400;
<para>
<productname>PostgreSQL</productname> supports several methods
for logging server messages, including
<systemitem>stderr</systemitem>, <systemitem>csvlog</systemitem> and
<systemitem>stderr</systemitem>, <systemitem>csvlog</systemitem>,
<systemitem>jsonlog</systemitem>, and
<systemitem>syslog</systemitem>. On Windows,
<systemitem>eventlog</systemitem> is also supported. Set this
parameter to a list of desired log destinations separated by
@ -5950,25 +5951,35 @@ SELECT * FROM parent WHERE key = 2400;
CSV-format log output.
</para>
<para>
When either <systemitem>stderr</systemitem> or
<systemitem>csvlog</systemitem> are included, the file
<filename>current_logfiles</filename> is created to record the location
of the log file(s) currently in use by the logging collector and the
associated logging destination. This provides a convenient way to
find the logs currently in use by the instance. Here is an example of
this file's content:
If <systemitem>jsonlog</systemitem> is included in
<varname>log_destination</varname>, log entries are output in
<acronym>JSON</acronym> format, which is convenient for loading logs
into programs.
See <xref linkend="runtime-config-logging-jsonlog"/> for details.
<xref linkend="guc-logging-collector"/> must be enabled to generate
JSON-format log output.
</para>
<para>
When either <systemitem>stderr</systemitem>,
<systemitem>csvlog</systemitem> or <systemitem>jsonlog</systemitem> are
included, the file <filename>current_logfiles</filename> is created to
record the location of the log file(s) currently in use by the logging
collector and the associated logging destination. This provides a
convenient way to find the logs currently in use by the instance. Here
is an example of this file's content:
<programlisting>
stderr log/postgresql.log
csvlog log/postgresql.csv
jsonlog log/postgresql.json
</programlisting>
<filename>current_logfiles</filename> is recreated when a new log file
is created as an effect of rotation, and
when <varname>log_destination</varname> is reloaded. It is removed when
neither <systemitem>stderr</systemitem>
nor <systemitem>csvlog</systemitem> are included
in <varname>log_destination</varname>, and when the logging collector is
disabled.
none of <systemitem>stderr</systemitem>,
<systemitem>csvlog</systemitem> or <systemitem>jsonlog</systemitem> are
included in <varname>log_destination</varname>, and when the logging
collector is disabled.
</para>
<note>
@ -6106,6 +6117,13 @@ local0.* /var/log/postgresql
(If <varname>log_filename</varname> ends in <literal>.log</literal>, the suffix is
replaced instead.)
</para>
<para>
If JSON-format output is enabled in <varname>log_destination</varname>,
<literal>.json</literal> will be appended to the timestamped
log file name to create the file name for JSON-format output.
(If <varname>log_filename</varname> ends in <literal>.log</literal>, the suffix is
replaced instead.)
</para>
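For instance (hypothetical timestamp), with the default log_filename of
'postgresql-%Y-%m-%d_%H%M%S.log', whose ".log" suffix is replaced, the
JSON output of that rotation goes to a file such as:

    postgresql-2022-01-17_101653.json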
<para>
This parameter can only be set in the <filename>postgresql.conf</filename>
file or on the server command line.
@ -7467,6 +7485,187 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv;
</orderedlist>
</para>
</sect2>
<sect2 id="runtime-config-logging-jsonlog">
<title>Using JSON-Format Log Output</title>
<para>
Including <literal>jsonlog</literal> in the
<varname>log_destination</varname> list provides a convenient way to
import log files into many different programs. This option emits log
lines in the <acronym>JSON</acronym> format.
</para>
<para>
String fields with null values are excluded from output.
Additional fields may be added in the future. User applications that
process <literal>jsonlog</literal> output should ignore unknown fields.
</para>
<para>
Each log line is serialized as a JSON object with the following set of
keys and their values.
</para>
<table>
<title>Keys and values of JSON log entries</title>
<tgroup cols="3">
<thead>
<row>
<entry>Key name</entry>
<entry>Type</entry>
<entry>Description</entry>
</row>
</thead>
<tbody>
<row>
<entry><literal>timestamp</literal></entry>
<entry>string</entry>
<entry>Time stamp with milliseconds</entry>
</row>
<row>
<entry><literal>user</literal></entry>
<entry>string</entry>
<entry>User name</entry>
</row>
<row>
<entry><literal>dbname</literal></entry>
<entry>string</entry>
<entry>Database name</entry>
</row>
<row>
<entry><literal>pid</literal></entry>
<entry>number</entry>
<entry>Process ID</entry>
</row>
<row>
<entry><literal>remote_host</literal></entry>
<entry>string</entry>
<entry>Client host</entry>
</row>
<row>
<entry><literal>remote_port</literal></entry>
<entry>number</entry>
<entry>Client port</entry>
</row>
<row>
<entry><literal>session_id</literal></entry>
<entry>string</entry>
<entry>Session ID</entry>
</row>
<row>
<entry><literal>line_num</literal></entry>
<entry>number</entry>
<entry>Per-session line number</entry>
</row>
<row>
<entry><literal>ps</literal></entry>
<entry>string</entry>
<entry>Current ps display</entry>
</row>
<row>
<entry><literal>session_start</literal></entry>
<entry>string</entry>
<entry>Session start time</entry>
</row>
<row>
<entry><literal>vxid</literal></entry>
<entry>string</entry>
<entry>Virtual transaction ID</entry>
</row>
<row>
<entry><literal>txid</literal></entry>
<entry>string</entry>
<entry>Regular transaction ID</entry>
</row>
<row>
<entry><literal>error_severity</literal></entry>
<entry>string</entry>
<entry>Error severity</entry>
</row>
<row>
<entry><literal>state_code</literal></entry>
<entry>string</entry>
<entry>SQLSTATE code</entry>
</row>
<row>
<entry><literal>message</literal></entry>
<entry>string</entry>
<entry>Error message</entry>
</row>
<row>
<entry><literal>detail</literal></entry>
<entry>string</entry>
<entry>Error message detail</entry>
</row>
<row>
<entry><literal>hint</literal></entry>
<entry>string</entry>
<entry>Error message hint</entry>
</row>
<row>
<entry><literal>internal_query</literal></entry>
<entry>string</entry>
<entry>Internal query that led to the error</entry>
</row>
<row>
<entry><literal>internal_position</literal></entry>
<entry>number</entry>
<entry>Cursor index into internal query</entry>
</row>
<row>
<entry><literal>context</literal></entry>
<entry>string</entry>
<entry>Error context</entry>
</row>
<row>
<entry><literal>statement</literal></entry>
<entry>string</entry>
<entry>Client-supplied query string</entry>
</row>
<row>
<entry><literal>cursor_position</literal></entry>
<entry>number</entry>
<entry>Cursor index into query string</entry>
</row>
<row>
<entry><literal>func_name</literal></entry>
<entry>string</entry>
<entry>Error location function name</entry>
</row>
<row>
<entry><literal>file_name</literal></entry>
<entry>string</entry>
<entry>File name of error location</entry>
</row>
<row>
<entry><literal>file_line_num</literal></entry>
<entry>number</entry>
<entry>File line number of the error location</entry>
</row>
<row>
<entry><literal>application_name</literal></entry>
<entry>string</entry>
<entry>Client application name</entry>
</row>
<row>
<entry><literal>backend_type</literal></entry>
<entry>string</entry>
<entry>Type of backend</entry>
</row>
<row>
<entry><literal>leader_pid</literal></entry>
<entry>number</entry>
<entry>Process ID of leader for active parallel workers</entry>
</row>
<row>
<entry><literal>query_id</literal></entry>
<entry>number</entry>
<entry>Query ID</entry>
</row>
</tbody>
</tgroup>
</table>
</sect2>
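As an illustration of the keys above (a hand-written, abridged sample
rather than captured server output; wrapped here for readability, while
the server emits each entry on a single line and omits null string
fields):

    {"timestamp":"2022-01-17 10:16:53.123 JST","user":"postgres",
     "dbname":"postgres","pid":12345,"remote_host":"[local]",
     "session_id":"61e4a1b5.3039","line_num":2,"ps":"SELECT",
     "session_start":"2022-01-17 10:16:21 JST","error_severity":"ERROR",
     "state_code":"22012","message":"division by zero",
     "statement":"SELECT 1/0;","application_name":"psql",
     "backend_type":"client backend"}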
<sect2>
<title>Process Title</title>

View File

@ -22446,10 +22446,12 @@ SELECT * FROM pg_ls_dir('.') WITH ORDINALITY AS t(ls,n);
format, <function>pg_current_logfile</function> without an argument
returns the path of the file having the first format found in the
ordered list: <literal>stderr</literal>,
<literal>csvlog</literal>. <literal>NULL</literal> is returned
if no log file has any of these formats.
<literal>csvlog</literal>, <literal>jsonlog</literal>.
<literal>NULL</literal> is returned if no log file has any of these
formats.
To request information about a specific log file format, supply
either <literal>csvlog</literal> or <literal>stderr</literal> as the
either <literal>csvlog</literal>, <literal>jsonlog</literal> or
<literal>stderr</literal> as the
value of the optional parameter. The result is <literal>NULL</literal>
if the log format requested is not configured in
<xref linkend="guc-log-destination"/>.
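
A usage sketch of the updated function (the returned path is
illustrative and depends on log_filename and log_directory):

    SELECT pg_current_logfile('jsonlog');
              pg_current_logfile
    ----------------------------------------
     log/postgresql-2022-01-17_101653.json
    (1 row)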

View File

@ -86,9 +86,11 @@ static bool pipe_eof_seen = false;
static bool rotation_disabled = false;
static FILE *syslogFile = NULL;
static FILE *csvlogFile = NULL;
static FILE *jsonlogFile = NULL;
NON_EXEC_STATIC pg_time_t first_syslogger_file_time = 0;
static char *last_sys_file_name = NULL;
static char *last_csv_file_name = NULL;
static char *last_json_file_name = NULL;
/*
* Buffers for saving partial messages from different backends.
@ -281,6 +283,8 @@ SysLoggerMain(int argc, char *argv[])
last_sys_file_name = logfile_getname(first_syslogger_file_time, NULL);
if (csvlogFile != NULL)
last_csv_file_name = logfile_getname(first_syslogger_file_time, ".csv");
if (jsonlogFile != NULL)
last_json_file_name = logfile_getname(first_syslogger_file_time, ".json");
/* remember active logfile parameters */
currentLogDir = pstrdup(Log_directory);
@ -367,6 +371,14 @@ SysLoggerMain(int argc, char *argv[])
(csvlogFile != NULL))
rotation_requested = true;
/*
* Force a rotation if JSONLOG output was just turned on or off
* and we need to open or close jsonlogFile accordingly.
*/
if (((Log_destination & LOG_DESTINATION_JSONLOG) != 0) !=
(jsonlogFile != NULL))
rotation_requested = true;
/*
* If rotation time parameter changed, reset next rotation time,
* but don't immediately force a rotation.
@ -417,6 +429,12 @@ SysLoggerMain(int argc, char *argv[])
rotation_requested = true;
size_rotation_for |= LOG_DESTINATION_CSVLOG;
}
if (jsonlogFile != NULL &&
ftell(jsonlogFile) >= Log_RotationSize * 1024L)
{
rotation_requested = true;
size_rotation_for |= LOG_DESTINATION_JSONLOG;
}
}
if (rotation_requested)
@ -426,7 +444,9 @@ SysLoggerMain(int argc, char *argv[])
* was sent by pg_rotate_logfile() or "pg_ctl logrotate".
*/
if (!time_based_rotation && size_rotation_for == 0)
size_rotation_for = LOG_DESTINATION_STDERR | LOG_DESTINATION_CSVLOG;
size_rotation_for = LOG_DESTINATION_STDERR |
LOG_DESTINATION_CSVLOG |
LOG_DESTINATION_JSONLOG;
logfile_rotate(time_based_rotation, size_rotation_for);
}
@ -632,6 +652,20 @@ SysLogger_Start(void)
pfree(filename);
}
/*
* Likewise for the initial JSON log file, if that's enabled. (Note that
* we open syslogFile even when only JSON output is nominally enabled,
* since some code paths will write to syslogFile anyway.)
*/
if (Log_destination & LOG_DESTINATION_JSONLOG)
{
filename = logfile_getname(first_syslogger_file_time, ".json");
jsonlogFile = logfile_open(filename, "a", false);
pfree(filename);
}
#ifdef EXEC_BACKEND
switch ((sysloggerPid = syslogger_forkexec()))
#else
@ -729,6 +763,11 @@ SysLogger_Start(void)
fclose(csvlogFile);
csvlogFile = NULL;
}
if (jsonlogFile != NULL)
{
fclose(jsonlogFile);
jsonlogFile = NULL;
}
return (int) sysloggerPid;
}
@ -805,6 +844,7 @@ syslogger_forkexec(void)
int ac = 0;
char filenobuf[32];
char csvfilenobuf[32];
char jsonfilenobuf[32];
av[ac++] = "postgres";
av[ac++] = "--forklog";
@ -817,6 +857,9 @@ syslogger_forkexec(void)
snprintf(csvfilenobuf, sizeof(csvfilenobuf), "%d",
syslogger_fdget(csvlogFile));
av[ac++] = csvfilenobuf;
snprintf(jsonfilenobuf, sizeof(jsonfilenobuf), "%d",
syslogger_fdget(jsonlogFile));
av[ac++] = jsonfilenobuf;
av[ac] = NULL;
Assert(ac < lengthof(av));
@ -834,7 +877,7 @@ syslogger_parseArgs(int argc, char *argv[])
{
int fd;
Assert(argc == 5);
Assert(argc == 6);
argv += 3;
/*
@ -848,6 +891,8 @@ syslogger_parseArgs(int argc, char *argv[])
syslogFile = syslogger_fdopen(fd);
fd = atoi(*argv++);
csvlogFile = syslogger_fdopen(fd);
fd = atoi(*argv++);
jsonlogFile = syslogger_fdopen(fd);
}
#endif /* EXEC_BACKEND */
@ -896,7 +941,9 @@ process_pipe_input(char *logbuffer, int *bytes_in_logbuffer)
/* Do we have a valid header? */
memcpy(&p, cursor, offsetof(PipeProtoHeader, data));
dest_flags = p.flags & (PIPE_PROTO_DEST_STDERR | PIPE_PROTO_DEST_CSVLOG);
dest_flags = p.flags & (PIPE_PROTO_DEST_STDERR |
PIPE_PROTO_DEST_CSVLOG |
PIPE_PROTO_DEST_JSONLOG);
if (p.nuls[0] == '\0' && p.nuls[1] == '\0' &&
p.len > 0 && p.len <= PIPE_MAX_PAYLOAD &&
p.pid != 0 &&
@ -918,6 +965,8 @@ process_pipe_input(char *logbuffer, int *bytes_in_logbuffer)
dest = LOG_DESTINATION_STDERR;
else if ((p.flags & PIPE_PROTO_DEST_CSVLOG) != 0)
dest = LOG_DESTINATION_CSVLOG;
else if ((p.flags & PIPE_PROTO_DEST_JSONLOG) != 0)
dest = LOG_DESTINATION_JSONLOG;
else
{
/* this should never happen as of the header validation */
@ -1097,19 +1146,24 @@ write_syslogger_file(const char *buffer, int count, int destination)
FILE *logfile;
/*
* If we're told to write to csvlogFile, but it's not open, dump the data
* to syslogFile (which is always open) instead. This can happen if CSV
* output is enabled after postmaster start and we've been unable to open
* csvlogFile. There are also race conditions during a parameter change
* whereby backends might send us CSV output before we open csvlogFile or
* after we close it. Writing CSV-formatted output to the regular log
* file isn't great, but it beats dropping log output on the floor.
* If we're told to write to a structured log file, but it's not open,
* dump the data to syslogFile (which is always open) instead. This can
* happen if structured output is enabled after postmaster start and we've
* been unable to open logFile. There are also race conditions during a
* parameter change whereby backends might send us structured output
* before we open the logFile or after we close it. Writing formatted
* output to the regular log file isn't great, but it beats dropping log
* output on the floor.
*
* Think not to improve this by trying to open csvlogFile on-the-fly. Any
* Think not to improve this by trying to open logFile on-the-fly. Any
* failure in that would lead to recursion.
*/
logfile = (destination == LOG_DESTINATION_CSVLOG &&
csvlogFile != NULL) ? csvlogFile : syslogFile;
if ((destination & LOG_DESTINATION_CSVLOG) && csvlogFile != NULL)
logfile = csvlogFile;
else if ((destination & LOG_DESTINATION_JSONLOG) && jsonlogFile != NULL)
logfile = jsonlogFile;
else
logfile = syslogFile;
rc = fwrite(buffer, 1, count, logfile);
@ -1180,7 +1234,8 @@ pipeThread(void *arg)
if (Log_RotationSize > 0)
{
if (ftell(syslogFile) >= Log_RotationSize * 1024L ||
(csvlogFile != NULL && ftell(csvlogFile) >= Log_RotationSize * 1024L))
(csvlogFile != NULL && ftell(csvlogFile) >= Log_RotationSize * 1024L) ||
(jsonlogFile != NULL && ftell(jsonlogFile) >= Log_RotationSize * 1024L))
SetLatch(MyLatch);
}
LeaveCriticalSection(&sysloggerSection);
@ -1292,6 +1347,8 @@ logfile_rotate_dest(bool time_based_rotation, int size_rotation_for,
logFileExt = NULL;
else if (target_dest == LOG_DESTINATION_CSVLOG)
logFileExt = ".csv";
else if (target_dest == LOG_DESTINATION_JSONLOG)
logFileExt = ".json";
else
{
/* cannot happen */
@ -1379,6 +1436,12 @@ logfile_rotate(bool time_based_rotation, int size_rotation_for)
&csvlogFile))
return;
/* file rotation for jsonlog */
if (!logfile_rotate_dest(time_based_rotation, size_rotation_for, fntime,
LOG_DESTINATION_JSONLOG, &last_json_file_name,
&jsonlogFile))
return;
update_metainfo_datafile();
set_next_rotation_time();
@ -1465,7 +1528,8 @@ update_metainfo_datafile(void)
mode_t oumask;
if (!(Log_destination & LOG_DESTINATION_STDERR) &&
!(Log_destination & LOG_DESTINATION_CSVLOG))
!(Log_destination & LOG_DESTINATION_CSVLOG) &&
!(Log_destination & LOG_DESTINATION_JSONLOG))
{
if (unlink(LOG_METAINFO_DATAFILE) < 0 && errno != ENOENT)
ereport(LOG,
@ -1523,6 +1587,19 @@ update_metainfo_datafile(void)
return;
}
}
if (last_json_file_name && (Log_destination & LOG_DESTINATION_JSONLOG))
{
if (fprintf(fh, "jsonlog %s\n", last_json_file_name) < 0)
{
ereport(LOG,
(errcode_for_file_access(),
errmsg("could not write file \"%s\": %m",
LOG_METAINFO_DATAFILE_TMP)));
fclose(fh);
return;
}
}
fclose(fh);
if (rename(LOG_METAINFO_DATAFILE_TMP, LOG_METAINFO_DATAFILE) != 0)

View File

@ -843,11 +843,13 @@ pg_current_logfile(PG_FUNCTION_ARGS)
{
logfmt = text_to_cstring(PG_GETARG_TEXT_PP(0));
if (strcmp(logfmt, "stderr") != 0 && strcmp(logfmt, "csvlog") != 0)
if (strcmp(logfmt, "stderr") != 0 &&
strcmp(logfmt, "csvlog") != 0 &&
strcmp(logfmt, "jsonlog") != 0)
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
errmsg("log format \"%s\" is not supported", logfmt),
errhint("The supported log formats are \"stderr\" and \"csvlog\".")));
errhint("The supported log formats are \"stderr\", \"csvlog\", and \"jsonlog\".")));
}
fd = AllocateFile(LOG_METAINFO_DATAFILE, "r");

View File

@ -15,6 +15,7 @@ include $(top_builddir)/src/Makefile.global
OBJS = \
assert.o \
csvlog.o \
elog.o
elog.o \
jsonlog.o
include $(top_srcdir)/src/backend/common.mk

View File

@ -2984,6 +2984,22 @@ send_message_to_server_log(ErrorData *edata)
fallback_to_stderr = true;
}
/* Write to JSON log, if enabled */
if (Log_destination & LOG_DESTINATION_JSONLOG)
{
/*
* Send JSON data if it's safe to do so (syslogger doesn't need the
* pipe). If this is not possible, fall back to an entry written to
* stderr.
*/
if (redirection_done || MyBackendType == B_LOGGER)
{
write_jsonlog(edata);
}
else
fallback_to_stderr = true;
}
/*
* Write to stderr, if enabled or if required because of a previous
* limitation.
@ -3059,6 +3075,8 @@ write_pipe_chunks(char *data, int len, int dest)
p.proto.flags |= PIPE_PROTO_DEST_STDERR;
else if (dest == LOG_DESTINATION_CSVLOG)
p.proto.flags |= PIPE_PROTO_DEST_CSVLOG;
else if (dest == LOG_DESTINATION_JSONLOG)
p.proto.flags |= PIPE_PROTO_DEST_JSONLOG;
/* write all but the last chunk */
while (len > PIPE_MAX_PAYLOAD)

View File

@ -0,0 +1,303 @@
/*-------------------------------------------------------------------------
*
* jsonlog.c
* JSON logging
*
* Portions Copyright (c) 1996-2022, PostgreSQL Global Development Group
* Portions Copyright (c) 1994, Regents of the University of California
*
*
* IDENTIFICATION
* src/backend/utils/error/jsonlog.c
*
*-------------------------------------------------------------------------
*/
#include "postgres.h"
#include "access/xact.h"
#include "libpq/libpq.h"
#include "lib/stringinfo.h"
#include "miscadmin.h"
#include "postmaster/bgworker.h"
#include "postmaster/syslogger.h"
#include "storage/lock.h"
#include "storage/proc.h"
#include "tcop/tcopprot.h"
#include "utils/backend_status.h"
#include "utils/elog.h"
#include "utils/guc.h"
#include "utils/json.h"
#include "utils/ps_status.h"
static void appendJSONKeyValueFmt(StringInfo buf, const char *key,
bool escape_value,
const char *fmt,...) pg_attribute_printf(4, 5);
/*
* appendJSONKeyValue
*
* Append to a StringInfo a comma followed by a JSON key and a value.
* The key is always escaped.  The value is escaped only if requested by
* the caller, which depends on the value's data type.
*/
static void
appendJSONKeyValue(StringInfo buf, const char *key, const char *value,
bool escape_value)
{
Assert(key != NULL);
if (value == NULL)
return;
appendStringInfoChar(buf, ',');
escape_json(buf, key);
appendStringInfoChar(buf, ':');
if (escape_value)
escape_json(buf, value);
else
appendStringInfoString(buf, value);
}
/*
* appendJSONKeyValueFmt
*
* Evaluate the fmt string and pass the result to appendJSONKeyValue()
* as the value of the JSON property.  The key is always escaped; the
* value is escaped only if requested via escape_value.
*/
static void
appendJSONKeyValueFmt(StringInfo buf, const char *key,
bool escape_value, const char *fmt,...)
{
int save_errno = errno;
size_t len = 128; /* initial assumption about buffer size */
char *value;
for (;;)
{
va_list args;
size_t newlen;
/* Allocate result buffer */
value = (char *) palloc(len);
/* Try to format the data. */
errno = save_errno;
va_start(args, fmt);
newlen = pvsnprintf(value, len, fmt, args);
va_end(args);
if (newlen < len)
break; /* success */
/* Release buffer and loop around to try again with larger len. */
pfree(value);
len = newlen;
}
appendJSONKeyValue(buf, key, value, escape_value);
/* Clean up */
pfree(value);
}
/*
* Write logs in json format.
*/
void
write_jsonlog(ErrorData *edata)
{
StringInfoData buf;
char *start_time;
char *log_time;
/* static counter for line numbers */
static long log_line_number = 0;
/* Has the counter been reset in the current process? */
static int log_my_pid = 0;
/*
* This is one of the few places where we'd rather not inherit a static
* variable's value from the postmaster. But since we will, reset it when
* MyProcPid changes.
*/
if (log_my_pid != MyProcPid)
{
log_line_number = 0;
log_my_pid = MyProcPid;
reset_formatted_start_time();
}
log_line_number++;
initStringInfo(&buf);
/* Initialize string */
appendStringInfoChar(&buf, '{');
/* timestamp with milliseconds */
log_time = get_formatted_log_time();
/*
* The first property does not use appendJSONKeyValue() as it does not
* need a comma prefix.
*/
escape_json(&buf, "timestamp");
appendStringInfoChar(&buf, ':');
escape_json(&buf, log_time);
/* username */
if (MyProcPort)
appendJSONKeyValue(&buf, "user", MyProcPort->user_name, true);
/* database name */
if (MyProcPort)
appendJSONKeyValue(&buf, "dbname", MyProcPort->database_name, true);
/* Process ID */
if (MyProcPid != 0)
appendJSONKeyValueFmt(&buf, "pid", false, "%d", MyProcPid);
/* Remote host and port */
if (MyProcPort && MyProcPort->remote_host)
{
appendJSONKeyValue(&buf, "remote_host", MyProcPort->remote_host, true);
if (MyProcPort->remote_port && MyProcPort->remote_port[0] != '\0')
appendJSONKeyValue(&buf, "remote_port", MyProcPort->remote_port, false);
}
/* Session id */
appendJSONKeyValueFmt(&buf, "session_id", true, "%lx.%x",
(long) MyStartTime, MyProcPid);
/* Line number */
appendJSONKeyValueFmt(&buf, "line_num", false, "%ld", log_line_number);
/* PS display */
if (MyProcPort)
{
StringInfoData msgbuf;
const char *psdisp;
int displen;
initStringInfo(&msgbuf);
psdisp = get_ps_display(&displen);
appendBinaryStringInfo(&msgbuf, psdisp, displen);
appendJSONKeyValue(&buf, "ps", msgbuf.data, true);
pfree(msgbuf.data);
}
/* session start timestamp */
start_time = get_formatted_start_time();
appendJSONKeyValue(&buf, "session_start", start_time, true);
/* Virtual transaction id */
/* keep VXID format in sync with lockfuncs.c */
if (MyProc != NULL && MyProc->backendId != InvalidBackendId)
appendJSONKeyValueFmt(&buf, "vxid", true, "%d/%u", MyProc->backendId,
MyProc->lxid);
/* Transaction id */
appendJSONKeyValueFmt(&buf, "txid", false, "%u",
GetTopTransactionIdIfAny());
/* Error severity */
if (edata->elevel)
appendJSONKeyValue(&buf, "error_severity",
(char *) error_severity(edata->elevel), true);
/* SQL state code */
if (edata->sqlerrcode)
appendJSONKeyValue(&buf, "state_code",
unpack_sql_state(edata->sqlerrcode), true);
/* errmessage */
appendJSONKeyValue(&buf, "message", edata->message, true);
/* errdetail or errdetail_log */
if (edata->detail_log)
appendJSONKeyValue(&buf, "detail", edata->detail_log, true);
else
appendJSONKeyValue(&buf, "detail", edata->detail, true);
/* errhint */
if (edata->hint)
appendJSONKeyValue(&buf, "hint", edata->hint, true);
/* internal query */
if (edata->internalquery)
appendJSONKeyValue(&buf, "internal_query", edata->internalquery,
true);
/* if printed internal query, print internal pos too */
if (edata->internalpos > 0 && edata->internalquery != NULL)
appendJSONKeyValueFmt(&buf, "internal_position", false, "%u",
edata->internalpos);
/* errcontext */
if (edata->context && !edata->hide_ctx)
appendJSONKeyValue(&buf, "context", edata->context, true);
/* user query --- only reported if not disabled by the caller */
if (check_log_of_query(edata))
{
appendJSONKeyValue(&buf, "statement", debug_query_string, true);
if (edata->cursorpos > 0)
appendJSONKeyValueFmt(&buf, "cursor_position", false, "%d",
edata->cursorpos);
}
/* file error location */
if (Log_error_verbosity >= PGERROR_VERBOSE)
{
if (edata->funcname)
appendJSONKeyValue(&buf, "func_name", edata->funcname, true);
if (edata->filename)
{
appendJSONKeyValue(&buf, "file_name", edata->filename, true);
appendJSONKeyValueFmt(&buf, "file_line_num", false, "%d",
edata->lineno);
}
}
/* Application name */
if (application_name && application_name[0] != '\0')
appendJSONKeyValue(&buf, "application_name", application_name, true);
/* backend type */
appendJSONKeyValue(&buf, "backend_type", get_backend_type_for_log(), true);
/* leader PID */
if (MyProc)
{
PGPROC *leader = MyProc->lockGroupLeader;
/*
* Show the leader only for active parallel workers. This leaves out
* the leader of a parallel group.
*/
if (leader && leader->pid != MyProcPid)
appendJSONKeyValueFmt(&buf, "leader_pid", false, "%d",
leader->pid);
}
/* query id */
appendJSONKeyValueFmt(&buf, "query_id", false, "%lld",
(long long) pgstat_get_my_query_id());
/* Finish string */
appendStringInfoChar(&buf, '}');
appendStringInfoChar(&buf, '\n');
/* If in the syslogger process, try to write messages direct to file */
if (MyBackendType == B_LOGGER)
write_syslogger_file(buf.data, buf.len, LOG_DESTINATION_JSONLOG);
else
write_pipe_chunks(buf.data, buf.len, LOG_DESTINATION_JSONLOG);
pfree(buf.data);
}

View File

@ -4276,7 +4276,7 @@ static struct config_string ConfigureNamesString[] =
{"log_destination", PGC_SIGHUP, LOGGING_WHERE,
gettext_noop("Sets the destination for server log output."),
gettext_noop("Valid values are combinations of \"stderr\", "
"\"syslog\", \"csvlog\", and \"eventlog\", "
"\"syslog\", \"csvlog\", \"jsonlog\" and \"eventlog\", "
"depending on the platform."),
GUC_LIST_INPUT
},
@ -11752,6 +11752,8 @@ check_log_destination(char **newval, void **extra, GucSource source)
newlogdest |= LOG_DESTINATION_STDERR;
else if (pg_strcasecmp(tok, "csvlog") == 0)
newlogdest |= LOG_DESTINATION_CSVLOG;
else if (pg_strcasecmp(tok, "jsonlog") == 0)
newlogdest |= LOG_DESTINATION_JSONLOG;
#ifdef HAVE_SYSLOG
else if (pg_strcasecmp(tok, "syslog") == 0)
newlogdest |= LOG_DESTINATION_SYSLOG;

View File

@ -432,14 +432,15 @@
# - Where to Log -
#log_destination = 'stderr' # Valid values are combinations of
# stderr, csvlog, syslog, and eventlog,
# depending on platform. csvlog
# requires logging_collector to be on.
# stderr, csvlog, jsonlog, syslog, and
# eventlog, depending on platform.
# csvlog and jsonlog require
# logging_collector to be on.
# This is used when logging to stderr:
#logging_collector = off # Enable capturing of stderr and csvlog
# into log files. Required to be on for
# csvlogs.
#logging_collector = off # Enable capturing of stderr, jsonlog
# and csvlog into log files. Required
# to be on for csvlogs and jsonlogs.
# (change requires restart)
# These are only used if logging_collector is on:

View File

@ -6,7 +6,7 @@ use warnings;
use PostgreSQL::Test::Cluster;
use PostgreSQL::Test::Utils;
use Test::More tests => 10;
use Test::More tests => 14;
use Time::HiRes qw(usleep);
# Extract the file name of a $format from the contents of
@ -65,7 +65,7 @@ $node->init();
$node->append_conf(
'postgresql.conf', qq(
logging_collector = on
log_destination = 'stderr, csvlog'
log_destination = 'stderr, csvlog, jsonlog'
# these ensure stability of test results:
log_rotation_age = 0
lc_messages = 'C'
@ -96,11 +96,13 @@ note "current_logfiles = $current_logfiles";
like(
$current_logfiles,
qr|^stderr log/postgresql-.*log
csvlog log/postgresql-.*csv$|,
csvlog log/postgresql-.*csv
jsonlog log/postgresql-.*json$|,
'current_logfiles is sane');
check_log_pattern('stderr', $current_logfiles, 'division by zero', $node);
check_log_pattern('csvlog', $current_logfiles, 'division by zero', $node);
check_log_pattern('stderr', $current_logfiles, 'division by zero', $node);
check_log_pattern('csvlog', $current_logfiles, 'division by zero', $node);
check_log_pattern('jsonlog', $current_logfiles, 'division by zero', $node);
# Sleep 2 seconds and ask for log rotation; this should result in
# output into a different log file name.
@ -122,13 +124,15 @@ note "now current_logfiles = $new_current_logfiles";
like(
$new_current_logfiles,
qr|^stderr log/postgresql-.*log
csvlog log/postgresql-.*csv$|,
csvlog log/postgresql-.*csv
jsonlog log/postgresql-.*json$|,
'new current_logfiles is sane');
# Verify that log output gets to this file, too
$node->psql('postgres', 'fee fi fo fum');
check_log_pattern('stderr', $new_current_logfiles, 'syntax error', $node);
check_log_pattern('csvlog', $new_current_logfiles, 'syntax error', $node);
check_log_pattern('stderr', $new_current_logfiles, 'syntax error', $node);
check_log_pattern('csvlog', $new_current_logfiles, 'syntax error', $node);
check_log_pattern('jsonlog', $new_current_logfiles, 'syntax error', $node);
$node->stop();

View File

@ -64,6 +64,7 @@ typedef union
/* log destinations */
#define PIPE_PROTO_DEST_STDERR 0x10
#define PIPE_PROTO_DEST_CSVLOG 0x20
#define PIPE_PROTO_DEST_JSONLOG 0x40
/* GUC options */
extern bool Logging_collector;

View File

@ -436,6 +436,7 @@ extern bool syslog_split_messages;
#define LOG_DESTINATION_SYSLOG 2
#define LOG_DESTINATION_EVENTLOG 4
#define LOG_DESTINATION_CSVLOG 8
#define LOG_DESTINATION_JSONLOG 16
/* Other exported functions */
extern void DebugFileOpen(void);
@ -453,6 +454,7 @@ extern void write_pipe_chunks(char *data, int len, int dest);
/* Destination-specific functions */
extern void write_csvlog(ErrorData *edata);
extern void write_jsonlog(ErrorData *edata);
#ifdef HAVE_SYSLOG
extern void set_syslog_parameters(const char *ident, int facility);