docs: replace 'master' with 'primary' where appropriate.

Also changed "in the primary" to "on the primary", and added a few
"the" before "primary".

Author: Andres Freund
Reviewed-By: David Steele
Discussion: https://postgr.es/m/20200615182235.x7lch5n6kcjq4aue@alap3.anarazel.de
commit 9e101cf606
parent e07633646a
Date: 2020-06-15 10:12:58 -07:00
15 changed files with 86 additions and 87 deletions


@@ -253,7 +253,7 @@ SET client_min_messages = DEBUG1;
 implies that operating system collation rules must never change.
 Though rare, updates to operating system collation rules can
 cause these issues. More commonly, an inconsistency in the
-collation order between a master server and a standby server is
+collation order between a primary server and a standby server is
 implicated, possibly because the <emphasis>major</emphasis> operating
 system version in use is inconsistent. Such inconsistencies will
 generally only arise on standby servers, and so can generally


@@ -964,7 +964,7 @@ SELECT * FROM pg_stop_backup(false, true);
 non-exclusive one, but it differs in a few key steps. This type of
 backup can only be taken on a primary and does not allow concurrent
 backups. Moreover, because it creates a backup label file, as
-described below, it can block automatic restart of the master server
+described below, it can block automatic restart of the primary server
 after a crash. On the other hand, the erroneous removal of this
 file from a backup or standby is a common mistake, which can result
 in serious data corruption. If it is necessary to use this method,
@@ -1033,9 +1033,9 @@ SELECT pg_start_backup('label', true);
 this will result in corruption. Confusion about when it is appropriate
 to remove this file is a common cause of data corruption when using this
 method; be very certain that you remove the file only on an existing
-master and never when building a standby or restoring a backup, even if
+primary and never when building a standby or restoring a backup, even if
 you are building a standby that will subsequently be promoted to a new
-master.
+primary.
 </para>
 </listitem>
 <listitem>
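
For context, the non-exclusive API that the surrounding text recommends avoids the backup_label pitfall entirely; a minimal sketch (the label is illustrative, and the session issuing both calls must stay connected throughout):

    -- Third argument false selects the non-exclusive mode, so no
    -- backup_label file is written into the live data directory.
    SELECT pg_start_backup('nightly', false, false);
    -- ... copy the cluster's data directory with an external tool ...
    SELECT * FROM pg_stop_backup(false, true);
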
@@ -1128,16 +1128,16 @@ SELECT pg_stop_backup();
 <para>
 It is often a good idea to also omit from the backup the files
 within the cluster's <filename>pg_replslot/</filename> directory, so that
-replication slots that exist on the master do not become part of the
+replication slots that exist on the primary do not become part of the
 backup. Otherwise, the subsequent use of the backup to create a standby
 may result in indefinite retention of WAL files on the standby, and
-possibly bloat on the master if hot standby feedback is enabled, because
+possibly bloat on the primary if hot standby feedback is enabled, because
 the clients that are using those replication slots will still be connecting
-to and updating the slots on the master, not the standby. Even if the
-backup is only intended for use in creating a new master, copying the
+to and updating the slots on the primary, not the standby. Even if the
+backup is only intended for use in creating a new primary, copying the
 replication slots isn't expected to be particularly useful, since the
 contents of those slots will likely be badly out of date by the time
-the new master comes on line.
+the new primary comes on line.
 </para>
 <para>
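
If copied slots do end up on a standby or restored backup, they can be inspected and removed with the built-in slot functions; a sketch (the slot name reuses the documentation's example):

    -- List any replication slots that came along with the backup ...
    SELECT slot_name, slot_type, active FROM pg_replication_slots;
    -- ... and drop those that should not exist on this node.
    SELECT pg_drop_replication_slot('node_a_slot');
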


@@ -697,7 +697,7 @@ include_dir 'conf.d'
 <para>
 When running a standby server, you must set this parameter to the
-same or higher value than on the master server. Otherwise, queries
+same or higher value than on the primary server. Otherwise, queries
 will not be allowed in the standby server.
 </para>
 </listitem>
@@ -1643,7 +1643,7 @@ include_dir 'conf.d'
 <para>
 When running a standby server, you must set this parameter to the
-same or higher value than on the master server. Otherwise, queries
+same or higher value than on the primary server. Otherwise, queries
 will not be allowed in the standby server.
 </para>
 </listitem>
@@ -2259,7 +2259,7 @@ include_dir 'conf.d'
 <para>
 When running a standby server, you must set this parameter to the
-same or higher value than on the master server. Otherwise, queries
+same or higher value than on the primary server. Otherwise, queries
 will not be allowed in the standby server.
 </para>
@@ -3253,7 +3253,7 @@ include_dir 'conf.d'
 <varname>archive_timeout</varname> &mdash; it will bloat your archive
 storage. <varname>archive_timeout</varname> settings of a minute or so are
 usually reasonable. You should consider using streaming replication,
-instead of archiving, if you want data to be copied off the master
+instead of archiving, if you want data to be copied off the primary
 server more quickly than that.
 If this value is specified without units, it is taken as seconds.
 This parameter can only be set in the
@@ -3678,12 +3678,12 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows
 These settings control the behavior of the built-in
 <firstterm>streaming replication</firstterm> feature (see
 <xref linkend="streaming-replication"/>). Servers will be either a
-master or a standby server. Masters can send data, while standbys
+primary or a standby server. Primaries can send data, while standbys
 are always receivers of replicated data. When cascading replication
 (see <xref linkend="cascading-replication"/>) is used, standby servers
 can also be senders, as well as receivers.
 Parameters are mainly for sending and standby servers, though some
-parameters have meaning only on the master server. Settings may vary
+parameters have meaning only on the primary server. Settings may vary
 across the cluster without problems if that is required.
 </para>
@@ -3693,10 +3693,10 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows
 <para>
 These parameters can be set on any server that is
 to send replication data to one or more standby servers.
-The master is always a sending server, so these parameters must
-always be set on the master.
+The primary is always a sending server, so these parameters must
+always be set on the primary.
 The role and meaning of these parameters does not change after a
-standby becomes the master.
+standby becomes the primary.
 </para>
 <variablelist>
@@ -3724,7 +3724,7 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows
 <para>
 When running a standby server, you must set this parameter to the
-same or higher value than on the master server. Otherwise, queries
+same or higher value than on the primary server. Otherwise, queries
 will not be allowed in the standby server.
 </para>
 </listitem>
@@ -3855,19 +3855,19 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows
 </variablelist>
 </sect2>
-<sect2 id="runtime-config-replication-master">
-<title>Master Server</title>
+<sect2 id="runtime-config-replication-primary">
+<title>Primary Server</title>
 <para>
-These parameters can be set on the master/primary server that is
+These parameters can be set on the primary server that is
 to send replication data to one or more standby servers.
 Note that in addition to these parameters,
-<xref linkend="guc-wal-level"/> must be set appropriately on the master
+<xref linkend="guc-wal-level"/> must be set appropriately on the primary
 server, and optionally WAL archiving can be enabled as
 well (see <xref linkend="runtime-config-wal-archiving"/>).
 The values of these parameters on standby servers are irrelevant,
 although you may wish to set them there in preparation for the
-possibility of a standby becoming the master.
+possibility of a standby becoming the primary.
 </para>
 <variablelist>
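
In practice, the sending-side parameters this section covers might be set roughly as follows (values are illustrative; wal_level and max_wal_senders take effect only after a server restart):

    ALTER SYSTEM SET wal_level = replica;
    ALTER SYSTEM SET max_wal_senders = 10;
    ALTER SYSTEM SET max_replication_slots = 10;
    SELECT pg_reload_conf();  -- restart-only settings wait for the next restart
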
@@ -4042,7 +4042,7 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
 <para>
 These settings control the behavior of a standby server that is
-to receive replication data. Their values on the master server
+to receive replication data. Their values on the primary server
 are irrelevant.
 </para>
@@ -4369,7 +4369,7 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
 of time. For example, if
 you set this parameter to <literal>5min</literal>, the standby will
 replay each transaction commit only when the system time on the standby
-is at least five minutes past the commit time reported by the master.
+is at least five minutes past the commit time reported by the primary.
 If this value is specified without units, it is taken as milliseconds.
 The default is zero, adding no delay.
 </para>
@@ -4377,10 +4377,10 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
 It is possible that the replication delay between servers exceeds the
 value of this parameter, in which case no delay is added.
 Note that the delay is calculated between the WAL time stamp as written
-on master and the current time on the standby. Delays in transfer
+on primary and the current time on the standby. Delays in transfer
 because of network lag or cascading replication configurations
 may reduce the actual wait time significantly. If the system
-clocks on master and standby are not synchronized, this may lead to
+clocks on primary and standby are not synchronized, this may lead to
 recovery applying records earlier than expected; but that is not a
 major issue because useful settings of this parameter are much larger
 than typical time deviations between servers.
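
A sketch of the delayed standby described above, using the text's 5min example (run on the standby):

    ALTER SYSTEM SET recovery_min_apply_delay = '5min';
    SELECT pg_reload_conf();
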
@@ -4402,7 +4402,7 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
 except crash recovery.
 <varname>hot_standby_feedback</varname> will be delayed by use of this feature
-which could lead to bloat on the master; use both together with care.
+which could lead to bloat on the primary; use both together with care.
 <warning>
 <para>
@@ -8998,7 +8998,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir'
 <para>
 When running a standby server, you must set this parameter to the
-same or higher value than on the master server. Otherwise, queries
+same or higher value than on the primary server. Otherwise, queries
 will not be allowed in the standby server.
 </para>
 </listitem>


@@ -244,7 +244,7 @@
 <productname>PostgreSQL</productname> replication solutions can be developed
 externally. For example, <application> <ulink
 url="http://www.slony.info">Slony-I</ulink></application> is a popular
-master/standby replication solution that is developed independently
+primary/standby replication solution that is developed independently
 from the core project.
 </para>
 </sect1>


@@ -120,7 +120,7 @@
 system residing on another computer. The only restriction is that
 the mirroring must be done in a way that ensures the standby server
 has a consistent copy of the file system &mdash; specifically, writes
-to the standby must be done in the same order as those on the master.
+to the standby must be done in the same order as those on the primary.
 <productname>DRBD</productname> is a popular file system replication solution
 for Linux.
 </para>
@@ -146,7 +146,7 @@ protocol to make nodes agree on a serializable transactional order.
 stream of write-ahead log (<acronym>WAL</acronym>)
 records. If the main server fails, the standby contains
 almost all of the data of the main server, and can be quickly
-made the new master database server. This can be synchronous or
+made the new primary database server. This can be synchronous or
 asynchronous and can only be done for the entire database server.
 </para>
 <para>
@@ -167,7 +167,7 @@ protocol to make nodes agree on a serializable transactional order.
 logical replication constructs a stream of logical data modifications
 from the WAL. Logical replication allows the data changes from
 individual tables to be replicated. Logical replication doesn't require
-a particular server to be designated as a master or a replica but allows
+a particular server to be designated as a primary or a replica but allows
 data to flow in multiple directions. For more information on logical
 replication, see <xref linkend="logical-replication"/>. Through the
 logical decoding interface (<xref linkend="logicaldecoding"/>),
@@ -219,9 +219,9 @@ protocol to make nodes agree on a serializable transactional order.
 this is unacceptable, either the middleware or the application
 must query such values from a single server and then use those
 values in write queries. Another option is to use this replication
-option with a traditional master-standby setup, i.e. data modification
-queries are sent only to the master and are propagated to the
-standby servers via master-standby replication, not by the replication
+option with a traditional primary-standby setup, i.e. data modification
+queries are sent only to the primary and are propagated to the
+standby servers via primary-standby replication, not by the replication
 middleware. Care must also be taken that all
 transactions either commit or abort on all servers, perhaps
 using two-phase commit (<xref linkend="sql-prepare-transaction"/>
@@ -263,7 +263,7 @@ protocol to make nodes agree on a serializable transactional order.
 to reduce the communication overhead. Synchronous multimaster
 replication is best for mostly read workloads, though its big
 advantage is that any server can accept write requests &mdash;
-there is no need to partition workloads between master and
+there is no need to partition workloads between primary and
 standby servers, and because the data changes are sent from one
 server to another, there is no problem with non-deterministic
 functions like <function>random()</function>.
@@ -363,7 +363,7 @@ protocol to make nodes agree on a serializable transactional order.
 </row>
 <row>
-<entry>No master server overhead</entry>
+<entry>No overhead on primary</entry>
 <entry align="center">&bull;</entry>
 <entry align="center"></entry>
 <entry align="center">&bull;</entry>
@@ -387,7 +387,7 @@ protocol to make nodes agree on a serializable transactional order.
 </row>
 <row>
-<entry>Master failure will never lose data</entry>
+<entry>Primary failure will never lose data</entry>
 <entry align="center">&bull;</entry>
 <entry align="center">&bull;</entry>
 <entry align="center">with sync on</entry>
@@ -454,7 +454,7 @@ protocol to make nodes agree on a serializable transactional order.
 partitioned by offices, e.g., London and Paris, with a server
 in each office. If queries combining London and Paris data
 are necessary, an application can query both servers, or
-master/standby replication can be used to keep a read-only copy
+primary/standby replication can be used to keep a read-only copy
 of the other office's data on each server.
 </para>
 </listitem>
@@ -621,13 +621,13 @@ protocol to make nodes agree on a serializable transactional order.
 <para>
 In standby mode, the server continuously applies WAL received from the
-master server. The standby server can read WAL from a WAL archive
-(see <xref linkend="guc-restore-command"/>) or directly from the master
+primary server. The standby server can read WAL from a WAL archive
+(see <xref linkend="guc-restore-command"/>) or directly from the primary
 over a TCP connection (streaming replication). The standby server will
 also attempt to restore any WAL found in the standby cluster's
 <filename>pg_wal</filename> directory. That typically happens after a server
 restart, when the standby replays again WAL that was streamed from the
-master before the restart, but you can also manually copy files to
+primary before the restart, but you can also manually copy files to
 <filename>pg_wal</filename> at any time to have them replayed.
 </para>
@@ -652,20 +652,20 @@ protocol to make nodes agree on a serializable transactional order.
 <function>pg_promote()</function> is called, or a trigger file is found
 (<varname>promote_trigger_file</varname>). Before failover,
 any WAL immediately available in the archive or in <filename>pg_wal</filename> will be
-restored, but no attempt is made to connect to the master.
+restored, but no attempt is made to connect to the primary.
 </para>
 </sect2>
-<sect2 id="preparing-master-for-standby">
-<title>Preparing the Master for Standby Servers</title>
+<sect2 id="preparing-primary-for-standby">
+<title>Preparing the Primary for Standby Servers</title>
 <para>
 Set up continuous archiving on the primary to an archive directory
 accessible from the standby, as described
 in <xref linkend="continuous-archiving"/>. The archive location should be
-accessible from the standby even when the master is down, i.e. it should
+accessible from the standby even when the primary is down, i.e. it should
 reside on the standby server itself or another trusted server, not on
-the master server.
+the primary server.
 </para>
 <para>
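
Promotion itself can be triggered from SQL with the pg_promote() function mentioned above; a sketch using its defaults:

    -- Waits up to 60 seconds for promotion to complete; returns true on success.
    SELECT pg_promote(wait => true, wait_seconds => 60);
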
@@ -898,7 +898,7 @@ primary_conninfo = 'host=192.168.1.50 port=5432 user=foo password=foopass'
 <link linkend="monitoring-pg-stat-replication-view"><structname>
 pg_stat_replication</structname></link> view. Large differences between
 <function>pg_current_wal_lsn</function> and the view's <literal>sent_lsn</literal> field
-might indicate that the master server is under heavy load, while
+might indicate that the primary server is under heavy load, while
 differences between <literal>sent_lsn</literal> and
 <function>pg_last_wal_receive_lsn</function> on the standby might indicate
 network delay, or that the standby is under heavy load.
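
The comparison this paragraph describes can be run directly on the primary; a sketch (byte lag computed via pg_lsn subtraction):

    SELECT application_name,
           pg_current_wal_lsn() - sent_lsn   AS send_lag_bytes,
           pg_current_wal_lsn() - replay_lsn AS replay_lag_bytes
    FROM pg_stat_replication;
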
@@ -921,9 +921,9 @@ primary_conninfo = 'host=192.168.1.50 port=5432 user=foo password=foopass'
 <secondary>streaming replication</secondary>
 </indexterm>
 <para>
-Replication slots provide an automated way to ensure that the master does
+Replication slots provide an automated way to ensure that the primary does
 not remove WAL segments until they have been received by all standbys,
-and that the master does not remove rows which could cause a
+and that the primary does not remove rows which could cause a
 <link linkend="hot-standby-conflict">recovery conflict</link> even when the
 standby is disconnected.
 </para>
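
A sketch of wiring a standby to a slot, reusing the node_a_slot name from the surrounding examples:

    -- On the primary:
    SELECT pg_create_physical_replication_slot('node_a_slot');
    -- The standby then sets primary_slot_name = 'node_a_slot'.
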
@@ -1001,23 +1001,22 @@ primary_slot_name = 'node_a_slot'
 <para>
 The cascading replication feature allows a standby server to accept replication
 connections and stream WAL records to other standbys, acting as a relay.
-This can be used to reduce the number of direct connections to the master
+This can be used to reduce the number of direct connections to the primary
 and also to minimize inter-site bandwidth overheads.
 </para>
 <para>
 A standby acting as both a receiver and a sender is known as a cascading
-standby. Standbys that are more directly connected to the master are known
+standby. Standbys that are more directly connected to the primary are known
 as upstream servers, while those standby servers further away are downstream
 servers. Cascading replication does not place limits on the number or
 arrangement of downstream servers, though each standby connects to only
-one upstream server which eventually links to a single master/primary
-server.
+one upstream server which eventually links to a single primary server.
 </para>
 <para>
 A cascading standby sends not only WAL records received from the
-master but also those restored from the archive. So even if the replication
+primary but also those restored from the archive. So even if the replication
 connection in some upstream connection is terminated, streaming replication
 continues downstream for as long as new WAL records are available.
 </para>
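
A downstream standby is configured like any other standby, except that its connection string points at the upstream standby instead of the primary; a sketch (host name illustrative; primary_conninfo is reloadable in recent releases, restart-only in older ones):

    ALTER SYSTEM SET primary_conninfo = 'host=upstream-standby port=5432 user=repl';
    SELECT pg_reload_conf();
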
@@ -1033,8 +1032,8 @@ primary_slot_name = 'node_a_slot'
 </para>
 <para>
-If an upstream standby server is promoted to become new master, downstream
-servers will continue to stream from the new master if
+If an upstream standby server is promoted to become the new primary, downstream
+servers will continue to stream from the new primary if
 <varname>recovery_target_timeline</varname> is set to <literal>'latest'</literal> (the default).
 </para>
@@ -1120,7 +1119,7 @@ primary_slot_name = 'node_a_slot'
 a non-empty value. <varname>synchronous_commit</varname> must also be set to
 <literal>on</literal>, but since this is the default value, typically no change is
 required. (See <xref linkend="runtime-config-wal-settings"/> and
-<xref linkend="runtime-config-replication-master"/>.)
+<xref linkend="runtime-config-replication-primary"/>.)
 This configuration will cause each commit to wait for
 confirmation that the standby has written the commit record to durable
 storage.
@@ -1145,8 +1144,8 @@ primary_slot_name = 'node_a_slot'
 confirmation that the commit record has been received. These parameters
 allow the administrator to specify which standby servers should be
 synchronous standbys. Note that the configuration of synchronous
-replication is mainly on the master. Named standbys must be directly
-connected to the master; the master knows nothing about downstream
+replication is mainly on the primary. Named standbys must be directly
+connected to the primary; the primary knows nothing about downstream
 standby servers using cascaded replication.
 </para>
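
Tying the pieces together, reusing the ANY 2 (s1, s2, s3) form that appears later in this file, and then checking which standbys are currently synchronous:

    ALTER SYSTEM SET synchronous_standby_names = 'ANY 2 (s1, s2, s3)';
    SELECT pg_reload_conf();
    SELECT application_name, sync_state FROM pg_stat_replication;
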
@@ -1504,7 +1503,7 @@ synchronous_standby_names = 'ANY 2 (s1, s2, s3)'
 <para>
 Note that in this mode, the server will apply WAL one file at a
 time, so if you use the standby server for queries (see Hot Standby),
-there is a delay between an action in the master and when the
+there is a delay between an action in the primary and when the
 action becomes visible in the standby, corresponding the time it takes
 to fill up the WAL file. <varname>archive_timeout</varname> can be used to make that delay
 shorter. Also note that you can't combine streaming replication with
@@ -2049,7 +2048,7 @@ if (!triggered)
 cleanup of old row versions when there are no transactions that need to
 see them to ensure correct visibility of data according to MVCC rules.
 However, this rule can only be applied for transactions executing on the
-master. So it is possible that cleanup on the master will remove row
+primary. So it is possible that cleanup on the primary will remove row
 versions that are still visible to a transaction on the standby.
 </para>
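
The standby-side mitigation for this is hot_standby_feedback, with the primary-side bloat trade-off noted in the configuration hunks above; a sketch (set on the standby):

    ALTER SYSTEM SET hot_standby_feedback = on;
    SELECT pg_reload_conf();
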
@@ -2438,7 +2437,7 @@ LOG: database system is ready to accept read only connections
 <listitem>
 <para>
 Valid starting points for standby queries are generated at each
-checkpoint on the master. If the standby is shut down while the master
+checkpoint on the primary. If the standby is shut down while the primary
 is in a shutdown state, it might not be possible to re-enter Hot Standby
 until the primary is started up, so that it generates further starting
 points in the WAL logs. This situation isn't a problem in the most


@@ -7362,7 +7362,7 @@ myEventProc(PGEventId evtId, void *evtInfo, void *passThrough)
 the <literal>host</literal> parameter
 matches <application>libpq</application>'s default socket directory path.
 In a standby server, a database field of <literal>replication</literal>
-matches streaming replication connections made to the master server.
+matches streaming replication connections made to the primary server.
 The database field is of limited usefulness otherwise, because users have
 the same password for all databases in the same cluster.
 </para>


@@ -99,7 +99,7 @@
 <para>
 A <firstterm>publication</firstterm> can be defined on any physical
-replication master. The node where a publication is defined is referred to
+replication primary. The node where a publication is defined is referred to
 as <firstterm>publisher</firstterm>. A publication is a set of changes
 generated from a table or a group of tables, and might also be described as
 a change set or replication set. Each publication exists in only one database.
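
A minimal publisher/subscriber pair, to make the terms concrete (object names and connection string are illustrative):

    -- On the publisher (any physical replication primary):
    CREATE PUBLICATION mypub FOR TABLE users;
    -- On the subscriber:
    CREATE SUBSCRIPTION mysub
        CONNECTION 'host=pub-host dbname=app user=repl'
        PUBLICATION mypub;
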
@@ -489,7 +489,7 @@
 Because logical replication is based on a similar architecture as
 <link linkend="streaming-replication">physical streaming replication</link>,
 the monitoring on a publication node is similar to monitoring of a
-physical replication master
+physical replication primary
 (see <xref linkend="streaming-replication-monitoring"/>).
 </para>


@@ -62,10 +62,10 @@ postgres 15610 0.0 0.0 58772 3056 ? Ss 18:07 0:00 postgres: tgl
 (The appropriate invocation of <command>ps</command> varies across different
 platforms, as do the details of what is shown. This example is from a
 recent Linux system.) The first process listed here is the
-master server process. The command arguments
+primary server process. The command arguments
 shown for it are the same ones used when it was launched. The next five
 processes are background worker processes automatically launched by the
-master process. (The <quote>stats collector</quote> process will not be present
+primary process. (The <quote>stats collector</quote> process will not be present
 if you have set the system not to start the statistics collector; likewise
 the <quote>autovacuum launcher</quote> process can be disabled.)
 Each of the remaining
@@ -3545,7 +3545,7 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i
 one row per database, showing database-wide statistics about
 query cancels occurring due to conflicts with recovery on standby servers.
 This view will only contain information on standby servers, since
-conflicts do not occur on master servers.
+conflicts do not occur on primary servers.
 </para>
 <table id="pg-stat-database-conflicts-view" xreflabel="pg_stat_database_conflicts">
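
A sketch of querying the view on a standby (a few of its conflict counters shown):

    SELECT datname, confl_snapshot, confl_lock, confl_bufferpin
    FROM pg_stat_database_conflicts;
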


@@ -1642,7 +1642,7 @@ SELECT pg_advisory_lock(q.id) FROM
 This level of integrity protection using Serializable transactions
 does not yet extend to hot standby mode (<xref linkend="hot-standby"/>).
 Because of that, those using hot standby may want to use Repeatable
-Read and explicit locking on the master.
+Read and explicit locking on the primary.
 </para>
 </warning>
 </sect2>
@@ -1744,10 +1744,10 @@ SELECT pg_advisory_lock(q.id) FROM
 <xref linkend="hot-standby"/>). The strictest isolation level currently
 supported in hot standby mode is Repeatable Read. While performing all
 permanent database writes within Serializable transactions on the
-master will ensure that all standbys will eventually reach a consistent
+primary will ensure that all standbys will eventually reach a consistent
 state, a Repeatable Read transaction run on the standby can sometimes
 see a transient state that is inconsistent with any serial execution
-of the transactions on the master.
+of the transactions on the primary.
 </para>
 <para>
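
Concretely, Repeatable Read is the strictest level a hot-standby session can request; asking for Serializable during recovery is rejected. A sketch (table name illustrative):

    BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    SELECT count(*) FROM accounts;  -- stable snapshot, but possibly one that no
                                    -- serial execution on the primary produced
    COMMIT;
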


@@ -73,7 +73,7 @@ restore_command = 'pg_standby <replaceable>archiveDir</replaceable> %f %p %r'
 </para>
 <para>
 There are two ways to fail over to a <quote>warm standby</quote> database server
-when the master server fails:
+when the primary server fails:
 <variablelist>
 <varlistentry>


@@ -1793,7 +1793,7 @@ The commands accepted in replication mode are:
 <listitem>
 <para>
 Current timeline ID. Also useful to check that the standby is
-consistent with the master.
+consistent with the primary.
 </para>
 </listitem>
 </varlistentry>
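
The check can be done by hand over a replication connection (e.g., psql opened with replication=database in the connection string); a sketch:

    IDENTIFY_SYSTEM;
    -- Returns systemid, timeline, xlogpos, dbname; compare the timeline
    -- column between the primary and the standby.
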


@@ -65,11 +65,11 @@ PostgreSQL documentation
 <para>
 <application>pg_basebackup</application> can make a base backup from
-not only the master but also the standby. To take a backup from the standby,
+not only the primary but also the standby. To take a backup from the standby,
 set up the standby so that it can accept replication connections (that is, set
 <varname>max_wal_senders</varname> and <xref linkend="guc-hot-standby"/>,
 and configure <link linkend="auth-pg-hba-conf">host-based authentication</link>).
-You will also need to enable <xref linkend="guc-full-page-writes"/> on the master.
+You will also need to enable <xref linkend="guc-full-page-writes"/> on the primary.
 </para>
 <para>
@@ -89,13 +89,13 @@ PostgreSQL documentation
 </listitem>
 <listitem>
 <para>
-If the standby is promoted to the master during online backup, the backup fails.
+If the standby is promoted to the primary during online backup, the backup fails.
 </para>
 </listitem>
 <listitem>
 <para>
 All WAL records required for the backup must contain sufficient full-page writes,
-which requires you to enable <varname>full_page_writes</varname> on the master and
+which requires you to enable <varname>full_page_writes</varname> on the primary and
 not to use a tool like <application>pg_compresslog</application> as
 <varname>archive_command</varname> to remove full-page writes from WAL files.
 </para>
@@ -328,7 +328,7 @@ PostgreSQL documentation
 it will use up two connections configured by the
 <xref linkend="guc-max-wal-senders"/> parameter. As long as the
 client can keep up with write-ahead log received, using this mode
-requires no extra write-ahead logs to be saved on the master.
+requires no extra write-ahead logs to be saved on the primary.
 </para>
 <para>
 When tar format mode is used, the write-ahead log files will be


@@ -43,8 +43,8 @@ PostgreSQL documentation
 <para>
 <application>pg_rewind</application> is a tool for synchronizing a PostgreSQL cluster
 with another copy of the same cluster, after the clusters' timelines have
-diverged. A typical scenario is to bring an old master server back online
-after failover as a standby that follows the new master.
+diverged. A typical scenario is to bring an old primary server back online
+after failover as a standby that follows the new primary.
 </para>
 <para>
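
Before rewinding, it can help to confirm that the timelines really have diverged; a sketch, run on each cluster:

    SELECT timeline_id, checkpoint_lsn FROM pg_control_checkpoint();
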


@@ -1864,9 +1864,9 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433
 This is possible because logical replication supports
 replication between different major versions of
 <productname>PostgreSQL</productname>. The standby can be on the same computer or
-a different computer. Once it has synced up with the master server
+a different computer. Once it has synced up with the primary server
 (running the older version of <productname>PostgreSQL</productname>), you can
-switch masters and make the standby the master and shut down the older
+switch primaries and make the standby the primary and shut down the older
 database instance. Such a switch-over results in only several seconds
 of downtime for an upgrade.
 </para>


@@ -596,8 +596,8 @@
 indicate that the already-processed WAL data need not be scanned again,
 and then recycles any old log segment files in the <filename>pg_wal</filename>
 directory.
-Restartpoints can't be performed more frequently than checkpoints in the
-master because restartpoints can only be performed at checkpoint records.
+Restartpoints can't be performed more frequently than checkpoints on the
+primary because restartpoints can only be performed at checkpoint records.
 A restartpoint is triggered when a checkpoint record is reached if at
 least <varname>checkpoint_timeout</varname> seconds have passed since the last
 restartpoint, or if WAL size is about to exceed