Allow page lock to conflict among parallel group members.

This is required because it is no safer for two related processes to perform
cleanup in GIN indexes at the same time than for unrelated processes to do
the same.  After acquiring page locks, we can acquire a relation extension
lock, but the reverse never happens, which means these locks will also not
participate in a deadlock.  So, avoid checking wait edges from them.

Currently, the parallel mode is strictly read-only, but after this patch
we have the infrastructure to allow parallel inserts and parallel copy.

Author: Dilip Kumar, Amit Kapila
Reviewed-by: Amit Kapila, Kuntal Ghosh and Sawada Masahiko
Discussion: https://postgr.es/m/CAD21AoCmT3cFQUN4aVvzy5chw7DuzXrJCbrjTU05B+Ss=Gn1LA@mail.gmail.com
Commit 3ba59ccc89 by Amit Kapila, 2020-03-21 08:48:06 +05:30
parent a6d7e9fb28
5 changed files with 53 additions and 47 deletions


@@ -322,14 +322,11 @@ standard_planner(Query *parse, int cursorOptions, ParamListInfo boundParams)
* functions are present in the query tree.
*
* (Note that we do allow CREATE TABLE AS, SELECT INTO, and CREATE
* MATERIALIZED VIEW to use parallel plans, but this is safe only because
* the command is writing into a completely new table which workers won't
* be able to see. If the workers could see the table, the fact that
* group locking would cause them to ignore the leader's heavyweight
* relation extension lock and GIN page locks would make this unsafe.
* We'll have to fix that somehow if we want to allow parallel inserts in
* general; updates and deletes have additional problems especially around
* combo CIDs.)
* MATERIALIZED VIEW to use parallel plans, but as of now, only the leader
* backend writes into a completely new table. In the future, we can
* extend it to allow workers to write into the table. However, to allow
* parallel updates and deletes, we have to solve other problems,
* especially around combo CIDs.)
*
* For now, we don't try to use parallel mode if we're running inside a
* parallel worker. We might eventually be able to relax this


@@ -597,21 +597,22 @@ deadlock detection algorithm very much, but it makes the bookkeeping more
complicated.
We choose to regard locks held by processes in the same parallel group as
non-conflicting. This means that two processes in a parallel group can hold a
self-exclusive lock on the same relation at the same time, or one process can
acquire an AccessShareLock while the other already holds AccessExclusiveLock.
This might seem dangerous and could be in some cases (more on that below), but
if we didn't do this then parallel query would be extremely prone to
self-deadlock. For example, a parallel query against a relation on which the
leader already had AccessExclusiveLock would hang, because the workers would
try to lock the same relation and be blocked by the leader; yet the leader
can't finish until it receives completion indications from all workers. An
undetected deadlock results. This is far from the only scenario where such a
problem happens. The same thing will occur if the leader holds only
AccessShareLock, the worker seeks AccessShareLock, but between the time the
leader attempts to acquire the lock and the time the worker attempts to
acquire it, some other process queues up waiting for an AccessExclusiveLock.
In this case, too, an indefinite hang results.
non-conflicting with the exception of relation extension and page locks. This
means that two processes in a parallel group can hold a self-exclusive lock on
the same relation at the same time, or one process can acquire an AccessShareLock
while the other already holds AccessExclusiveLock. This might seem dangerous and
could be in some cases (more on that below), but if we didn't do this then
parallel query would be extremely prone to self-deadlock. For example, a
parallel query against a relation on which the leader already had
AccessExclusiveLock would hang, because the workers would try to lock the same
relation and be blocked by the leader; yet the leader can't finish until it
receives completion indications from all workers. An undetected deadlock
results. This is far from the only scenario where such a problem happens. The
same thing will occur if the leader holds only AccessShareLock, the worker
seeks AccessShareLock, but between the time the leader attempts to acquire the
lock and the time the worker attempts to acquire it, some other process queues
up waiting for an AccessExclusiveLock. In this case, too, an indefinite hang
results.
It might seem that we could predict which locks the workers will attempt to
acquire and ensure before going parallel that those locks would be acquired
@@ -637,18 +638,23 @@ the other is safe enough. Problems would occur if the leader initiated
parallelism from a point in the code at which it had some backend-private
state that made table access from another process unsafe, for example after
calling SetReindexProcessing and before calling ResetReindexProcessing,
catastrophe could ensue, because the worker won't have that state. Similarly,
problems could occur with certain kinds of non-relation locks, such as
relation extension locks. It's no safer for two related processes to extend
the same relation at the same time than for unrelated processes to do the same.
However, since parallel mode is strictly read-only at present, neither this
nor most of the similar cases can arise at present. To allow parallel writes,
we'll either need to (1) further enhance the deadlock detector to handle those
types of locks in a different way than other types; or (2) have parallel
workers use some other mutual exclusion method for such cases; or (3) revise
those cases so that they no longer use heavyweight locking in the first place
(which is not a crazy idea, given that such lock acquisitions are not expected
to deadlock and that heavyweight lock acquisition is fairly slow anyway).
catastrophe could ensue, because the worker won't have that state.
To allow parallel inserts and parallel copy, we have ensured that relation
extension and page locks don't participate in group locking, which means such
locks can conflict among members of the same group. This is required because it
is no safer for two related processes to extend the same relation or perform
cleanup in GIN indexes at the same time than for unrelated processes to do the
same. We don't acquire a heavyweight lock on any other object after a relation
extension lock, which means such a lock can never participate in a deadlock
cycle. After acquiring page locks, we can acquire a relation extension lock,
but the reverse never happens, so those locks will also not participate in a
deadlock. To allow other parallel writes, such as parallel update or parallel
delete, we'll either need to (1) further enhance the deadlock detector to
handle those tuple locks in a different way than other types; or (2) have
parallel workers use some other mutual exclusion method for such cases.
Currently, parallel mode is strictly read-only, but we now have the
infrastructure to allow parallel inserts and parallel copy.
Group locking adds three new members to each PGPROC: lockGroupLeader,
lockGroupMembers, and lockGroupLink. A PGPROC's lockGroupLeader is NULL for
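The ordering argument in the README text above (page locks may be followed by a relation extension lock, never the reverse, and nothing is acquired after an extension lock) can be sketched as a toy level check. This is an illustrative model with invented names, not PostgreSQL code:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model (not PostgreSQL code) of the ordering invariant: a backend may
 * take a relation extension lock while holding a page lock, but never the
 * reverse, and it takes no further heavyweight lock while holding an
 * extension lock.  Because every acquisition moves to an equal or higher
 * "level" and extension locks are terminal, a wait cycle through these
 * lock types cannot form. */
enum lock_level { LEVEL_ORDINARY = 0, LEVEL_PAGE = 1, LEVEL_EXTEND = 2 };

static bool
acquisition_allowed(enum lock_level held, enum lock_level requested)
{
    if (held == LEVEL_EXTEND)
        return false;            /* nothing is taken after an extension lock */
    return requested >= held;    /* levels may only stay equal or increase */
}
```

Because the "may acquire after" relation is acyclic, the deadlock detector can safely ignore these lock types entirely, which is what the deadlock.c change below does.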


@@ -556,11 +556,12 @@ FindLockCycleRecurseMember(PGPROC *checkProc,
lm;
/*
* The relation extension lock can never participate in actual deadlock
* cycle. See Assert in LockAcquireExtended. So, there is no advantage
* in checking wait edges from it.
* The relation extension or page lock can never participate in an actual
* deadlock cycle. See Asserts in LockAcquireExtended. So, there is no
* advantage in checking wait edges from them.
*/
if (LOCK_LOCKTAG(*lock) == LOCKTAG_RELATION_EXTEND)
if (LOCK_LOCKTAG(*lock) == LOCKTAG_RELATION_EXTEND ||
(LOCK_LOCKTAG(*lock) == LOCKTAG_PAGE))
return false;
lockMethodTable = GetLocksMethodTable(lock);


@@ -1470,9 +1470,11 @@ LockCheckConflicts(LockMethod lockMethodTable,
}
/*
* The relation extension lock conflict even between the group members.
* The relation extension and page locks conflict even between the group
* members.
*/
if (LOCK_LOCKTAG(*lock) == LOCKTAG_RELATION_EXTEND)
if (LOCK_LOCKTAG(*lock) == LOCKTAG_RELATION_EXTEND ||
(LOCK_LOCKTAG(*lock) == LOCKTAG_PAGE))
{
PROCLOCK_PRINT("LockCheckConflicts: conflicting (group)",
proclock);
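The conflict rule this hunk adds to LockCheckConflicts can be condensed into a small sketch. This is a simplified model with invented names, not the actual PostgreSQL function:

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified model of the group-locking conflict rule: locks held by
 * members of the same parallel group are normally treated as
 * non-conflicting, except relation extension and page locks, which
 * conflict even within the group.  Tag names here are illustrative. */
typedef enum
{
    TAG_RELATION,
    TAG_RELATION_EXTEND,
    TAG_PAGE,
    TAG_TUPLE
} locktag_kind;

static bool
conflicts_within_group(locktag_kind tag, bool modes_conflict)
{
    if (!modes_conflict)
        return false;            /* compatible modes never conflict */
    /* extension and page locks conflict even among group members */
    return tag == TAG_RELATION_EXTEND || tag == TAG_PAGE;
}
```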


@@ -1078,12 +1078,12 @@ ProcSleep(LOCALLOCK *locallock, LockMethod lockMethodTable)
/*
* If group locking is in use, locks held by members of my locking group
* need to be included in myHeldLocks. This is not required for relation
* extension lock which conflict among group members. However, including
* them in myHeldLocks will give group members the priority to get those
* locks as compared to other backends which are also trying to acquire
* those locks. OTOH, we can avoid giving priority to group members for
* that kind of locks, but there doesn't appear to be a clear advantage of
* the same.
* extension or page locks, which conflict among group members. However,
* including them in myHeldLocks will give group members priority in
* acquiring those locks compared to other backends that are also trying
* to acquire them. OTOH, we could avoid giving priority to group members
* for such locks, but there doesn't appear to be a clear advantage in
* doing so.
*/
if (leader != NULL)
{
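The myHeldLocks bookkeeping described in the ProcSleep comment can be illustrated with a toy union of per-member lock-mode masks. The names and mode-bit values are invented for the sketch, not taken from PostgreSQL:

```c
#include <assert.h>
#include <stdint.h>

/* Toy illustration of the comment above: with group locking, myHeldLocks
 * becomes the bitwise OR of the lock modes held by every member of the
 * locking group, so the whole group is given priority when the wait
 * queue is rearranged.  Mask values here are illustrative only. */
typedef uint32_t lock_mask;

static lock_mask
group_held_locks(const lock_mask *member_masks, int nmembers)
{
    lock_mask held = 0;

    for (int i = 0; i < nmembers; i++)
        held |= member_masks[i];
    return held;
}
```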