Rename force_parallel_mode to debug_parallel_query

force_parallel_mode is meant to be used to allow us to exercise the
parallel query infrastructure to ensure that it's working as we expect.
It seems some users think this GUC is for forcing the query planner to
pick a parallel plan regardless of the costs.  A quick look at the
documentation would have shown them otherwise, but the GUC is conveniently
enough named that it evidently often leads users to expect it to force the
planner into usefully parallelizing their queries.

Here we rename the GUC to something which casual users are less likely to
mistakenly think is what they need to make their query run more quickly.

For now, the old name can still be used.  We'll revisit whether the
old-name mapping can be removed once the buildfarm configs have all been
updated.
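
As a rough illustration of what the old-name mapping means in practice
(an editorial sketch, not part of the patch; the exact SHOW output is an
assumption):

    -- new name
    SET debug_parallel_query = regress;

    -- old spelling is still accepted and resolves to the same setting
    SET force_parallel_mode = regress;
    SHOW debug_parallel_query;   -- expected to report: regress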

Reviewed-by: John Naylor
Discussion: https://postgr.es/m/CAApHDvrsOi92_uA7PEaHZMH-S4Xv+MGhQWA+GrP8b1kjpS1HjQ@mail.gmail.com
Author: David Rowley
Date:   2023-02-15 21:21:59 +13:00
Commit: 5352ca22e0
Parent: 8e0e0698f1

23 changed files with 79 additions and 76 deletions


@@ -11091,17 +11091,17 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir'
</listitem>
</varlistentry>
-<varlistentry id="guc-force-parallel-mode" xreflabel="force_parallel_mode">
-<term><varname>force_parallel_mode</varname> (<type>enum</type>)
+<varlistentry id="guc-debug-parallel-query" xreflabel="debug_parallel_query">
+<term><varname>debug_parallel_query</varname> (<type>enum</type>)
<indexterm>
-<primary><varname>force_parallel_mode</varname> configuration parameter</primary>
+<primary><varname>debug_parallel_query</varname> configuration parameter</primary>
</indexterm>
</term>
<listitem>
<para>
Allows the use of parallel queries for testing purposes even in cases
where no performance benefit is expected.
-The allowed values of <varname>force_parallel_mode</varname> are
+The allowed values of <varname>debug_parallel_query</varname> are
<literal>off</literal> (use parallel mode only when it is expected to improve
performance), <literal>on</literal> (force parallel query for all queries
for which it is thought to be safe), and <literal>regress</literal> (like


@@ -370,7 +370,7 @@ make check LANG=C ENCODING=EUC_JP
set in the <varname>PGOPTIONS</varname> environment variable (for settings
that allow this):
<screen>
-make check PGOPTIONS="-c force_parallel_mode=regress -c work_mem=50MB"
+make check PGOPTIONS="-c debug_parallel_query=regress -c work_mem=50MB"
</screen>
When running against a temporary installation, custom settings can also be
set by supplying a pre-written <filename>postgresql.conf</filename>:


@@ -1152,11 +1152,11 @@ HandleParallelMessage(ParallelContext *pcxt, int i, StringInfo msg)
* If desired, add a context line to show that this is a
* message propagated from a parallel worker. Otherwise, it
* can sometimes be confusing to understand what actually
-* happened. (We don't do this in FORCE_PARALLEL_REGRESS mode
+* happened. (We don't do this in DEBUG_PARALLEL_REGRESS mode
* because it causes test-result instability depending on
* whether a parallel worker is actually used or not.)
*/
-if (force_parallel_mode != FORCE_PARALLEL_REGRESS)
+if (debug_parallel_query != DEBUG_PARALLEL_REGRESS)
{
if (edata.context)
edata.context = psprintf("%s\n%s", edata.context,


@@ -756,8 +756,8 @@ ExplainPrintPlan(ExplainState *es, QueryDesc *queryDesc)
/*
* Sometimes we mark a Gather node as "invisible", which means that it's
* not to be displayed in EXPLAIN output. The purpose of this is to allow
-* running regression tests with force_parallel_mode=regress to get the
-* same results as running the same tests with force_parallel_mode=off.
+* running regression tests with debug_parallel_query=regress to get the
+* same results as running the same tests with debug_parallel_query=off.
* Such marking is currently only supported on a Gather at the top of the
* plan. We skip that node, and we must also hide per-worker detail data
* further down in the plan tree.
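
For illustration, a hypothetical session showing the behaviour described in
the comment above (the plan text is an assumption, not output captured from
this commit):

    SET debug_parallel_query = regress;
    EXPLAIN (COSTS OFF) SELECT 1;
    -- the query is expected to run under a single-copy Gather, but the
    -- Gather is marked invisible, so EXPLAIN should show only:
    --  Result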


@@ -114,12 +114,12 @@ query_planner(PlannerInfo *root,
* Anything parallel-restricted in the query tlist will be
* dealt with later.) This is normally pretty silly, because
* a Result-only plan would never be interesting to
-* parallelize. However, if force_parallel_mode is on, then
+* parallelize. However, if debug_parallel_query is on, then
* we want to execute the Result in a parallel worker if
* possible, so we must do this.
*/
if (root->glob->parallelModeOK &&
-force_parallel_mode != FORCE_PARALLEL_OFF)
+debug_parallel_query != DEBUG_PARALLEL_OFF)
final_rel->consider_parallel =
is_parallel_safe(root, parse->jointree->quals);


@@ -70,7 +70,7 @@
/* GUC parameters */
double cursor_tuple_fraction = DEFAULT_CURSOR_TUPLE_FRACTION;
-int force_parallel_mode = FORCE_PARALLEL_OFF;
+int debug_parallel_query = DEBUG_PARALLEL_OFF;
bool parallel_leader_participation = true;
/* Hook for plugins to get control in planner() */
@@ -364,12 +364,12 @@ standard_planner(Query *parse, const char *query_string, int cursorOptions,
* true during plan creation if a Gather or Gather Merge plan is actually
* created (cf. create_gather_plan, create_gather_merge_plan).
*
-* However, if force_parallel_mode = on or force_parallel_mode = regress,
-* then we impose parallel mode whenever it's safe to do so, even if the
-* final plan doesn't use parallelism. It's not safe to do so if the
-* query contains anything parallel-unsafe; parallelModeOK will be false
-* in that case. Note that parallelModeOK can't change after this point.
-* Otherwise, everything in the query is either parallel-safe or
+* However, if debug_parallel_query = on or debug_parallel_query =
+* regress, then we impose parallel mode whenever it's safe to do so, even
+* if the final plan doesn't use parallelism. It's not safe to do so if
+* the query contains anything parallel-unsafe; parallelModeOK will be
+* false in that case. Note that parallelModeOK can't change after this
+* point. Otherwise, everything in the query is either parallel-safe or
* parallel-restricted, and in either case it should be OK to impose
* parallel-mode restrictions. If that ends up breaking something, then
* either some function the user included in the query is incorrectly
@@ -377,7 +377,7 @@ standard_planner(Query *parse, const char *query_string, int cursorOptions,
* parallel-unsafe, or else the query planner itself has a bug.
*/
glob->parallelModeNeeded = glob->parallelModeOK &&
-(force_parallel_mode != FORCE_PARALLEL_OFF);
+(debug_parallel_query != DEBUG_PARALLEL_OFF);
/* Determine what fraction of the plan is likely to be scanned */
if (cursorOptions & CURSOR_OPT_FAST_PLAN)
@@ -431,7 +431,7 @@ standard_planner(Query *parse, const char *query_string, int cursorOptions,
* Optionally add a Gather node for testing purposes, provided this is
* actually a safe thing to do.
*/
-if (force_parallel_mode != FORCE_PARALLEL_OFF && top_plan->parallel_safe)
+if (debug_parallel_query != DEBUG_PARALLEL_OFF && top_plan->parallel_safe)
{
Gather *gather = makeNode(Gather);
@@ -449,7 +449,7 @@ standard_planner(Query *parse, const char *query_string, int cursorOptions,
gather->plan.righttree = NULL;
gather->num_workers = 1;
gather->single_copy = true;
-gather->invisible = (force_parallel_mode == FORCE_PARALLEL_REGRESS);
+gather->invisible = (debug_parallel_query == DEBUG_PARALLEL_REGRESS);
/*
* Since this Gather has no parallel-aware descendants to signal to,
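
For illustration, a hypothetical session showing the Gather this code adds
(the plan shape is an assumption, not output captured from this commit):

    SET debug_parallel_query = on;
    EXPLAIN (COSTS OFF) SELECT 1;
    -- expected shape: a single-copy Gather with one planned worker placed
    -- over the otherwise-serial plan
    --  Gather
    --    Workers Planned: 1
    --    Single Copy: true
    --    ->  Result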


@@ -186,6 +186,7 @@ static const unit_conversion time_unit_conversion_table[] =
static const char *const map_old_guc_names[] = {
"sort_mem", "work_mem",
"vacuum_mem", "maintenance_work_mem",
+"force_parallel_mode", "debug_parallel_query",
NULL
};


@@ -360,16 +360,16 @@ static const struct config_enum_entry recovery_prefetch_options[] = {
{NULL, 0, false}
};
-static const struct config_enum_entry force_parallel_mode_options[] = {
-{"off", FORCE_PARALLEL_OFF, false},
-{"on", FORCE_PARALLEL_ON, false},
-{"regress", FORCE_PARALLEL_REGRESS, false},
-{"true", FORCE_PARALLEL_ON, true},
-{"false", FORCE_PARALLEL_OFF, true},
-{"yes", FORCE_PARALLEL_ON, true},
-{"no", FORCE_PARALLEL_OFF, true},
-{"1", FORCE_PARALLEL_ON, true},
-{"0", FORCE_PARALLEL_OFF, true},
+static const struct config_enum_entry debug_parallel_query_options[] = {
+{"off", DEBUG_PARALLEL_OFF, false},
+{"on", DEBUG_PARALLEL_ON, false},
+{"regress", DEBUG_PARALLEL_REGRESS, false},
+{"true", DEBUG_PARALLEL_ON, true},
+{"false", DEBUG_PARALLEL_OFF, true},
+{"yes", DEBUG_PARALLEL_ON, true},
+{"no", DEBUG_PARALLEL_OFF, true},
+{"1", DEBUG_PARALLEL_ON, true},
+{"0", DEBUG_PARALLEL_OFF, true},
{NULL, 0, false}
};
@@ -4852,13 +4852,15 @@ struct config_enum ConfigureNamesEnum[] =
},
{
-{"force_parallel_mode", PGC_USERSET, DEVELOPER_OPTIONS,
-gettext_noop("Forces use of parallel query facilities."),
-gettext_noop("If possible, run query using a parallel worker and with parallel restrictions."),
+{"debug_parallel_query", PGC_USERSET, DEVELOPER_OPTIONS,
+gettext_noop("Forces the planner's use parallel query nodes."),
+gettext_noop("This can be useful for testing the parallel query infrastructure "
+"by forcing the planner to generate plans which contains nodes "
+"which perform tuple communication between workers and the main process."),
GUC_NOT_IN_SAMPLE | GUC_EXPLAIN
},
-&force_parallel_mode,
-FORCE_PARALLEL_OFF, force_parallel_mode_options,
+&debug_parallel_query,
+DEBUG_PARALLEL_OFF, debug_parallel_query_options,
NULL, NULL, NULL
},
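
For illustration, the hidden boolean-style spellings in the options table
above act as synonyms for on/off; a hypothetical session (the SHOW output is
an assumption):

    SET debug_parallel_query = 'yes';
    SHOW debug_parallel_query;   -- expected to report: on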


@@ -99,16 +99,16 @@ extern bool is_pseudo_constant_for_index(PlannerInfo *root, Node *expr,
/* in plan/planner.c: */
-/* possible values for force_parallel_mode */
+/* possible values for debug_parallel_query */
typedef enum
{
-FORCE_PARALLEL_OFF,
-FORCE_PARALLEL_ON,
-FORCE_PARALLEL_REGRESS
-} ForceParallelMode;
+DEBUG_PARALLEL_OFF,
+DEBUG_PARALLEL_ON,
+DEBUG_PARALLEL_REGRESS
+} DebugParallelMode;
/* GUC parameters */
-extern PGDLLIMPORT int force_parallel_mode;
+extern PGDLLIMPORT int debug_parallel_query;
extern PGDLLIMPORT bool parallel_leader_participation;
extern struct PlannedStmt *planner(Query *parse, const char *query_string,


@@ -8066,7 +8066,7 @@ exec_save_simple_expr(PLpgSQL_expr *expr, CachedPlan *cplan)
/*
* Ordinarily, the plan node should be a simple Result. However, if
-* force_parallel_mode is on, the planner might've stuck a Gather node
+* debug_parallel_query is on, the planner might've stuck a Gather node
* atop that. The simplest way to deal with this is to look through the
* Gather node. The Gather node's tlist would normally contain a Var
* referencing the child node's output, but it could also be a Param, or


@@ -15,21 +15,21 @@ lock_share
step e1l: SELECT lock_excl(1,x) FROM bigt LIMIT 1; <waiting ...>
step e2l: SELECT lock_excl(2,x) FROM bigt LIMIT 1; <waiting ...>
-step d1a2: SET force_parallel_mode = on;
+step d1a2: SET debug_parallel_query = on;
SET parallel_setup_cost = 0;
SET parallel_tuple_cost = 0;
SET min_parallel_table_scan_size = 0;
SET parallel_leader_participation = off;
SET max_parallel_workers_per_gather = 3;
SELECT sum(lock_share(2,x)) FROM bigt; <waiting ...>
-step d2a1: SET force_parallel_mode = on;
+step d2a1: SET debug_parallel_query = on;
SET parallel_setup_cost = 0;
SET parallel_tuple_cost = 0;
SET min_parallel_table_scan_size = 0;
SET parallel_leader_participation = off;
SET max_parallel_workers_per_gather = 3;
SELECT sum(lock_share(1,x)) FROM bigt;
-SET force_parallel_mode = off;
+SET debug_parallel_query = off;
RESET parallel_setup_cost;
RESET parallel_tuple_cost;
SELECT lock_share(3,x) FROM bigt LIMIT 1; <waiting ...>


@@ -6,7 +6,7 @@
# and is incorrectly marked parallel-safe so that it can execute in a worker.
# Note that we explicitly override any global settings of isolation level
-# or force_parallel_mode, to ensure we're testing what we intend to.
+# or debug_parallel_query, to ensure we're testing what we intend to.
# Otherwise, this is morally equivalent to deadlock-soft.spec:
# Four-process deadlock with two hard edges and two soft edges.
@@ -55,13 +55,13 @@ teardown
session d1
setup { BEGIN isolation level repeatable read;
-SET force_parallel_mode = off;
+SET debug_parallel_query = off;
SET deadlock_timeout = '10s';
}
# these locks will be taken in the leader, so they will persist:
step d1a1 { SELECT lock_share(1,x), lock_excl(3,x) FROM bigt LIMIT 1; }
# this causes all the parallel workers to take locks:
-step d1a2 { SET force_parallel_mode = on;
+step d1a2 { SET debug_parallel_query = on;
SET parallel_setup_cost = 0;
SET parallel_tuple_cost = 0;
SET min_parallel_table_scan_size = 0;
@@ -72,21 +72,21 @@ step d1c { COMMIT; }
session d2
setup { BEGIN isolation level repeatable read;
-SET force_parallel_mode = off;
+SET debug_parallel_query = off;
SET deadlock_timeout = '10ms';
}
# this lock will be taken in the leader, so it will persist:
step d2a2 { select lock_share(2,x) FROM bigt LIMIT 1; }
# this causes all the parallel workers to take locks;
# after which, make the leader take lock 3 to prevent client-driven deadlock
-step d2a1 { SET force_parallel_mode = on;
+step d2a1 { SET debug_parallel_query = on;
SET parallel_setup_cost = 0;
SET parallel_tuple_cost = 0;
SET min_parallel_table_scan_size = 0;
SET parallel_leader_participation = off;
SET max_parallel_workers_per_gather = 3;
SELECT sum(lock_share(1,x)) FROM bigt;
-SET force_parallel_mode = off;
+SET debug_parallel_query = off;
RESET parallel_setup_cost;
RESET parallel_tuple_cost;
SELECT lock_share(3,x) FROM bigt LIMIT 1; }
@@ -94,7 +94,7 @@ step d2c { COMMIT; }
session e1
setup { BEGIN isolation level repeatable read;
-SET force_parallel_mode = on;
+SET debug_parallel_query = on;
SET deadlock_timeout = '10s';
}
# this lock will be taken in a parallel worker, but we don't need it to persist
@@ -103,7 +103,7 @@ step e1c { COMMIT; }
session e2
setup { BEGIN isolation level repeatable read;
-SET force_parallel_mode = on;
+SET debug_parallel_query = on;
SET deadlock_timeout = '10s';
}
# this lock will be taken in a parallel worker, but we don't need it to persist


@@ -35,7 +35,7 @@ step s2c { COMMIT; }
session s3
setup {
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
-SET force_parallel_mode = on;
+SET debug_parallel_query = on;
}
step s3r { SELECT id, balance FROM bank_account WHERE id IN ('X', 'Y') ORDER BY id; }
step s3c { COMMIT; }


@@ -1760,9 +1760,9 @@ main(int argc, char **argv)
res = PQexec(conn, "SET lc_messages TO \"C\"");
if (PQresultStatus(res) != PGRES_COMMAND_OK)
pg_fatal("failed to set lc_messages: %s", PQerrorMessage(conn));
-res = PQexec(conn, "SET force_parallel_mode = off");
+res = PQexec(conn, "SET debug_parallel_query = off");
if (PQresultStatus(res) != PGRES_COMMAND_OK)
-pg_fatal("failed to set force_parallel_mode: %s", PQerrorMessage(conn));
+pg_fatal("failed to set debug_parallel_query: %s", PQerrorMessage(conn));
/* Set the trace file, if requested */
if (tracefile != NULL)


@@ -233,7 +233,7 @@ emit_audit_message(const char *type, const char *hook, char *action, char *objNa
/*
* Ensure that audit messages are not duplicated by only emitting them
* from a leader process, not a worker process. This makes the test
-* results deterministic even if run with force_parallel_mode = regress.
+* results deterministic even if run with debug_parallel_query = regress.
*/
if (REGRESS_audit && !IsParallelWorker())
{


@@ -3257,7 +3257,7 @@ select array[1,3] <@ arraymultirange(arrayrange(array[1,2], array[2,1]));
--
create type two_ints as (a int, b int);
create type two_ints_range as range (subtype = two_ints);
--- with force_parallel_mode on, this exercises tqueue.c's range remapping
+-- with debug_parallel_query on, this exercises tqueue.c's range remapping
select *, row_to_json(upper(t)) as u from
(values (two_ints_multirange(two_ints_range(row(1,2), row(3,4)))),
(two_ints_multirange(two_ints_range(row(5,6), row(7,8))))) v(t);


@@ -1743,7 +1743,7 @@ select array[1,3] <@ arrayrange(array[1,2], array[2,1]);
--
create type two_ints as (a int, b int);
create type two_ints_range as range (subtype = two_ints);
--- with force_parallel_mode on, this exercises tqueue.c's range remapping
+-- with debug_parallel_query on, this exercises tqueue.c's range remapping
select *, row_to_json(upper(t)) as u from
(values (two_ints_range(row(1,2), row(3,4))),
(two_ints_range(row(5,6), row(7,8)))) v(t);

File diff suppressed because one or more lines are too long


@@ -2671,7 +2671,7 @@ CREATE TEMP TABLE bug6051_3 AS
SELECT a FROM generate_series(11,13) AS a;
CREATE RULE bug6051_3_ins AS ON INSERT TO bug6051_3 DO INSTEAD
SELECT i FROM bug6051_2;
-BEGIN; SET LOCAL force_parallel_mode = on;
+BEGIN; SET LOCAL debug_parallel_query = on;
WITH t1 AS ( DELETE FROM bug6051_3 RETURNING * )
INSERT INTO bug6051_3 SELECT * FROM t1;
i


@@ -789,7 +789,7 @@ select array[1,3] <@ arraymultirange(arrayrange(array[1,2], array[2,1]));
create type two_ints as (a int, b int);
create type two_ints_range as range (subtype = two_ints);
--- with force_parallel_mode on, this exercises tqueue.c's range remapping
+-- with debug_parallel_query on, this exercises tqueue.c's range remapping
select *, row_to_json(upper(t)) as u from
(values (two_ints_multirange(two_ints_range(row(1,2), row(3,4)))),
(two_ints_multirange(two_ints_range(row(5,6), row(7,8))))) v(t);


@@ -561,7 +561,7 @@ select array[1,3] <@ arrayrange(array[1,2], array[2,1]);
create type two_ints as (a int, b int);
create type two_ints_range as range (subtype = two_ints);
--- with force_parallel_mode on, this exercises tqueue.c's range remapping
+-- with debug_parallel_query on, this exercises tqueue.c's range remapping
select *, row_to_json(upper(t)) as u from
(values (two_ints_range(row(1,2), row(3,4))),
(two_ints_range(row(5,6), row(7,8)))) v(t);


@@ -344,7 +344,7 @@ reset parallel_leader_participation;
reset max_parallel_workers;
SAVEPOINT settings;
-SET LOCAL force_parallel_mode = 1;
+SET LOCAL debug_parallel_query = 1;
explain (costs off)
select stringu1::int2 from tenk1 where unique1 = 1;
ROLLBACK TO SAVEPOINT settings;
@@ -364,7 +364,7 @@ BEGIN
END;
$$;
SAVEPOINT settings;
-SET LOCAL force_parallel_mode = 1;
+SET LOCAL debug_parallel_query = 1;
SELECT make_record(x) FROM (SELECT generate_series(1, 5) x) ss ORDER BY x;
ROLLBACK TO SAVEPOINT settings;
DROP function make_record(n int);
@@ -375,9 +375,9 @@ create role regress_parallel_worker;
set role regress_parallel_worker;
reset session authorization;
drop role regress_parallel_worker;
-set force_parallel_mode = 1;
+set debug_parallel_query = 1;
select count(*) from tenk1;
-reset force_parallel_mode;
+reset debug_parallel_query;
reset role;
-- Window function calculation can't be pushed to workers.
@@ -393,14 +393,14 @@ explain (costs off)
-- to increase the parallel query test coverage
SAVEPOINT settings;
-SET LOCAL force_parallel_mode = 1;
+SET LOCAL debug_parallel_query = 1;
EXPLAIN (analyze, timing off, summary off, costs off) SELECT * FROM tenk1;
ROLLBACK TO SAVEPOINT settings;
-- provoke error in worker
-- (make the error message long enough to require multiple bufferloads)
SAVEPOINT settings;
-SET LOCAL force_parallel_mode = 1;
+SET LOCAL debug_parallel_query = 1;
select (stringu1 || repeat('abcd', 5000))::int2 from tenk1 where unique1 = 1;
ROLLBACK TO SAVEPOINT settings;


@@ -1246,7 +1246,7 @@ CREATE TEMP TABLE bug6051_3 AS
CREATE RULE bug6051_3_ins AS ON INSERT TO bug6051_3 DO INSTEAD
SELECT i FROM bug6051_2;
-BEGIN; SET LOCAL force_parallel_mode = on;
+BEGIN; SET LOCAL debug_parallel_query = on;
WITH t1 AS ( DELETE FROM bug6051_3 RETURNING * )
INSERT INTO bug6051_3 SELECT * FROM t1;