Commit Graph

43040 Commits

Peter Eisentraut e0d51e3bf4 Update src/tools/pginclude/README to match recent changes to cpluspluscheck
Commit 7b8e2ae2f turned cpluspluscheck from a separate script into a
--cplusplus option for headerscheck.  Update README correspondingly.

Author: Anton Voloshin <a.voloshin@postgrespro.ru>
Discussion: https://www.postgresql.org/message-id/02e69fa9-885d-4f41-9057-15a1d212eaf8@postgrespro.ru
2024-04-18 11:37:01 +02:00
Amit Langote 2c7cea5a8e Fix object name clash in recently introduced test
c0fc075186 wasn't careful about naming the DOMAIN used in some new
tests in sqljson_queryfuncs.sql so as not to clash with the name of a
DOMAIN used in the nearby sqljson_jsontable.sql.  Fix by using a
different name for the newly added DOMAIN in sqljson_queryfuncs.sql.

Per buildfarm members canebrake and urutu.

Discussion: https://postgr.es/m/CA+HiwqEjkbDxqqD3VJamc6R9+B102H7=SFYYOM7gKrxzJO35TQ@mail.gmail.com
2024-04-18 17:28:12 +09:00
Amit Langote ef744ebb73 SQL/JSON: Miscellaneous fixes and improvements
This addresses some post-commit review comments for commits 6185c973,
de3600452, and 9425c596a0, with the following changes:

* Fix JSON_TABLE() syntax documentation to use the term
  "path_expression" for JSON path expressions instead of
  "json_path_specification" to be consistent with the other SQL/JSON
  functions.

* Fix a typo in the example code in JSON_TABLE() documentation.

* Rewrite some newly added comments in jsonpath.h.

* In JsonPathQuery(), add missing cast to int before printing an enum
  value.
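
As an illustration of the terminology fix in the first item above, here is a
hedged sketch of a JSON_TABLE() call whose second argument is the
path_expression (data and column names are illustrative only):

    SELECT jt.*
    FROM JSON_TABLE(
           jsonb '[{"id": 1, "name": "one"}, {"id": 2, "name": "two"}]',
           '$[*]'                              -- the path_expression
           COLUMNS (id int PATH '$.id',
                    name text PATH '$.name')) AS jt;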

Reported-by: Jian He <jian.universality@gmail.com>
Discussion: https://postgr.es/m/CACJufxG_e0QLCgaELrr2ZNz7AxPeGCNKAORe3fHtFCQLsH4J4Q@mail.gmail.com
2024-04-18 14:46:43 +09:00
Amit Langote c0fc075186 SQL/JSON: Fix issues with DEFAULT .. ON ERROR / EMPTY
SQL/JSON query functions allow specifying an expression to return
when either the ON ERROR or ON EMPTY condition occurs while evaluating
the JSON path expression.  The parser (transformJsonBehavior()) checks
that the specified expression is one of the supported expressions, but
there are two issues with how the check is done that are fixed in this
commit:

* No check for some expressions related to coercion, such as
  CoerceViaIO, that may appear in the transformed user-specified
  expressions that include cast(s)

* An unsupported expression may be masked by a coercion-related
  expression, which must be flagged by checking the latter's
  argument expression recursively
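
For illustration, a hedged example (values are made up) of a DEFAULT ... ON
ERROR clause containing a cast; the cast is transformed into a coercion node
such as CoerceViaIO, which transformJsonBehavior() must now check through:

    SELECT JSON_VALUE(jsonb '{"a": "not a number"}', '$.a'
                      RETURNING numeric
                      DEFAULT '0'::numeric ON ERROR);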

Author: Jian He <jian.universality@gmail.com>
Author: Amit Langote <amitlangote09@gmail.com>
Reported-by: Jian He <jian.universality@gmail.com>
Discussion: https://postgr.es/m/CACJufxEqhqsfrg_p7EMyo5zak3d767iFDL8vz_4%3DZBHpOtrghw@mail.gmail.com
Discussion: https://postgr.es/m/CACJufxGOerH1QJknm1noh-Kz5FqU4p7QfeZSeVT2tN_4SLXYNg@mail.gmail.com
2024-04-18 14:46:35 +09:00
Amit Langote b4fad46b6b SQL/JSON: Improve some error messages
This improves some error messages emitted by SQL/JSON query functions
by mentioning the column name when available, such as when they are
invoked as part of evaluating JSON_TABLE() columns.  To do so, a new
field column_name is added to both JsonFuncExpr and JsonExpr that is
only populated when creating those nodes for transformed JSON_TABLE()
columns.

While at it, relevant error messages are reworded for clarity.
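
For illustration, a hedged sketch (data is made up) of a JSON_TABLE() column
with ERROR ON ERROR; with this change, the resulting error can name the
column being evaluated, here "a":

    SELECT *
    FROM JSON_TABLE(jsonb '{"a": "oops"}', '$'
                    COLUMNS (a int PATH '$.a' ERROR ON ERROR)) AS jt;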

Reported-by: Jian He <jian.universality@gmail.com>
Suggested-by: Jian He <jian.universality@gmail.com>
Discussion: https://postgr.es/m/CACJufxG_e0QLCgaELrr2ZNz7AxPeGCNKAORe3fHtFCQLsH4J4Q@mail.gmail.com
2024-04-18 14:45:48 +09:00
Alexander Korotkov 40126ac68f Refactoring for CommitTransactionCommand()/AbortCurrentTransaction()
fefd9a3fed turned tail recursion of CommitTransactionCommand() and
AbortCurrentTransaction() into iteration.  However, it splits the handling of
cases between different functions.

This commit puts the handling of all the cases into
AbortCurrentTransactionInternal() and CommitTransactionCommandInternal().
Now CommitTransactionCommand() and AbortCurrentTransaction() simply make
repeated calls to the internal functions.

Reported-by: Andres Freund
Discussion: https://postgr.es/m/20240415224834.w6piwtefskoh32mv%40awork3.anarazel.de
Author: Andres Freund
2024-04-18 00:29:53 +03:00
Andres Freund 3ab8cf9275 Remove GlobalVisTestNonRemovable[Full]Horizon, not used anymore
GlobalVisTestNonRemovableHorizon() was only used for the implementation of
snapshot_too_old, which was removed in f691f5b80a. As using
GlobalVisTestNonRemovableHorizon() is not particularly efficient, no new uses
for it should be added. Therefore remove.

Discussion: https://postgr.es/m/20240415185720.q4dg4dlcyvvrabz4@awork3.anarazel.de
2024-04-17 11:21:17 -07:00
Tomas Vondra 0c2f5552d5 Cleanup parallel BRIN index build code
Commit b437571714 added support for parallel builds of BRIN indexes,
using code similar to BTREE. But there were a couple of unnecessary
differences, particularly in how the leader waits for the workers and
merges the results. Remove these to make the code more similar.

The leader never waited on the workersdonecv condition variable, but
simply called WaitForParallelWorkersToFinish() in _brin_end_parallel()
and then merged the per-worker results. This worked correctly, but it
seems better to do the wait and merge before _brin_end_parallel().

This commit moves the relevant code to _brin_parallel_heapscan/merge(),
which means _brin_end_parallel() remains responsible only for exiting
the parallel mode and accumulating WAL usage data.
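
For context, a hedged sketch (object names are hypothetical) of a parallel
BRIN build, the case whose leader/worker merge path is being cleaned up here:

    SET max_parallel_maintenance_workers = 2;
    CREATE INDEX big_table_ts_brin ON big_table USING brin (ts);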

Discussion: https://postgr.es/m/3733d042-71e1-6ae6-5fac-00c12db62db6@enterprisedb.com
2024-04-17 18:31:46 +02:00
Peter Eisentraut ca89db5f9d Remove dead code
The configure check for HAVE_DECL_LLVMORCREGISTERPERF was removed by
e9a9843e13, but some code guarded by it was left.  (That commit
removed the "register" calls but left the "unregister" calls.)  That
code cannot be reached anymore, so remove it.

Reported-by: Heikki Linnakangas <hlinnaka@iki.fi>
Discussion: https://www.postgresql.org/message-id/5539b16c-cff7-46d5-9621-c3fb6b549e9e@iki.fi
2024-04-17 10:48:04 +02:00
Peter Eisentraut 2e75492b3c Add missing source file to libpq/nls.mk 2024-04-17 09:11:02 +02:00
Michael Paquier 91fe092a96 Fix typos with function name in event_trigger.c
Databases exist, contrary to datatabases.

Oversight in e83d1b0c40.

Author: Japin Li
Discussion: https://postgr.es/m/ME3P282MB316611A2F7BF43919F695228B6082@ME3P282MB3166.AUSP282.PROD.OUTLOOK.COM
2024-04-17 14:56:31 +09:00
Masahiko Sawada a6d0fa5ef8 Disallow specifying ON_ERROR option without value.
The ON_ERROR option of the COPY command previously allowed omitting
its value, which was inconsistent with the syntax synopsis in the
documentation and the behavior of other non-boolean COPY options.

This change enforces providing a value for the ON_ERROR option,
ensuring consistency across other non-boolean options and aligning
with the documented syntax.
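
For illustration, a hedged sketch (table and file names are hypothetical) of
the now-required form; omitting the value, as in WITH (ON_ERROR), is
rejected:

    COPY mytab FROM '/tmp/data.csv' WITH (FORMAT csv, ON_ERROR ignore);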

Author: Atsushi Torikoshi
Reviewed-by: Masahiko Sawada
Discussion: https://postgr.es/m/a9770bf57646d90dedc3d54cf32634b2%40oss.nttdata.com
2024-04-17 11:31:27 +09:00
David Rowley 58cf2e120e Update mmgr's README to mention BumpContext
Oversight in 29f6a959c.

In passing, since we now have 4 memory context types to choose from,
provide a brief overview of the specialities of each memory context
type.

Reported-by: Amul Sul
Author: Amul Sul, David Rowley
Discussion: https://postgr.es/m/CAAJ_b94U2s9nHh--DEK=sPEZUQ+x7vQJ7529fF8UAH97QJ9NXg@mail.gmail.com
2024-04-17 10:49:09 +12:00
David Rowley 6d2fd66b99 Push dedicated BumpBlocks to the tail of the blocks list
BumpContext uses the head block from its 'blocks' field as the current
block for new chunk allocations.  When we receive an
allocation request larger than allocChunkLimit, we place these chunks on
a new dedicated block and, until now, we pushed the block onto the
*head* of the 'blocks' list.

This behavior caused the previous bump block to no longer be available
for new normal-sized (non-large) allocations and would result in blocks
only being partially filled if a large allocation request arrived before
the block became full.

Here adjust the code to push these dedicated blocks onto the *tail* of
the blocks list so that the head block remains intact and available to
be used by normal allocation request sizes until it becomes full.

In passing, make the elog(ERROR) calls for the unsupported callbacks
consistent.  Likewise for the header comments for those functions.

Discussion: https://postgr.es/m/CAApHDvp9___r-ayJj0nZ6GD3MeCGwGZ0_6ZptWpwj+zqHtmwCw@mail.gmail.com
Discussion: https://postgr.es/m/CAApHDvqerXpzUnuDQfUEi3DZA+9=Ud9WSt3ruxN5b6PcOosx2g@mail.gmail.com
2024-04-17 10:40:31 +12:00
Peter Eisentraut 2ea5d8bece Mark some new location fields as ParseLoc
Some new code probably didn't see 605721f819 and continued to use
type int for parse location fields.  Fix those.
2024-04-16 20:22:41 +02:00
Tom Lane ec07d0d7fa Clean up more indent breakage from 6377e12a5.
Per buildfarm member koel.
2024-04-16 13:00:40 -04:00
Tom Lane 6f0cef9353 Fix assorted bugs in ecpg's macro mechanism.
The code associated with EXEC SQL DEFINE was unreadable and full of
bugs, notably:

* It'd attempt to free a non-malloced string if the ecpg program
tries to redefine a macro that was defined on the command line.

* Possible memory stomp if user writes "-D=foo".

* Undef'ing or redefining a macro defined on the command line would
change the state visible to the next file, when multiple files are
specified on the command line.  (While possibly that could have been
an intentional choice, the code clearly intends to revert to the
original macro state; it's just failing to consider this interaction.)

* Missing "break" in defining a new macro meant that redefinition
of an existing name would cause an extra entry to be added to the
definition list.  While not immediately harmful, a subsequent undef
would result in the prior entry becoming visible again.

* The interactions with input buffering are subtle and were entirely
undocumented.

It's not that surprising that we hadn't noticed these bugs,
because there was no test coverage at all of either the -D
command line switch or multiple input files.  This patch adds
such coverage (in a rather hacky way I guess).

In addition to the code bugs, the user documentation was confused
about whether the -D switch defines a C macro or an ecpg one, and
it failed to mention that you can write "-Dsymbol=value".

These problems are old, so back-patch to all supported branches.
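
For illustration, a hedged sketch (object and file names are hypothetical) of
the macro facilities involved:

    EXEC SQL DEFINE NUMROWS 100;
    EXEC SQL DECLARE c CURSOR FOR SELECT * FROM mytab LIMIT NUMROWS;
    EXEC SQL UNDEF NUMROWS;
    /* the same symbol could instead be set on the command line, e.g.
       ecpg -DNUMROWS=100 prog1.pgc prog2.pgc */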

Discussion: https://postgr.es/m/998011.1713217712@sss.pgh.pa.us
2024-04-16 12:31:42 -04:00
Peter Geoghegan c62d2ebd9e Fix nbtree "deduce NOT NULL" scan key comment.
Oversight in commit c9c0589fda.
2024-04-16 12:04:20 -04:00
Tom Lane 984c0eccd0 Undo incorrect typedefs.list change.
6377e12a5 should not have removed AcquireSampleRowsFunc,
as that's still in use.  Per buildfarm member koel.
2024-04-16 11:38:04 -04:00
Tom Lane 03107b4eda Ensure generated join clauses for child rels have correct relids.
When building a join clause derived from an EquivalenceClass, if the
clause is to be used with an appendrel child relation then make sure
its clause_relids include the relids of that child relation.
Normally this would be true already because the EquivalenceMember
would be a Var of that relation.  However, if the appendrel represents
a flattened UNION ALL construct then some child EquivalenceMembers
could be constants with no relids.  The resulting under-marked clause
is problematic because it could mislead join_clause_is_movable_into
about where the clause should be evaluated.  We do not have an example
showing incorrect plan generation, but there are existing cases in
the regression tests that will fail the Asserts this patch adds to
get_baserel_parampathinfo.  A similarly wrong conclusion about a
clause being considered by get_joinrel_parampathinfo would lead to
wrong placement of the clause.  (This also squares with the way
that clause_relids is calculated for non-equijoin clauses in
adjust_appendrel_attrs.)
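
For illustration, a hedged sketch (table and column names are hypothetical)
of a flattened UNION ALL whose children contribute constant
EquivalenceMembers:

    SELECT *
    FROM t1
    JOIN (SELECT a, 1 AS tag FROM t2
          UNION ALL
          SELECT a, 2 FROM t3) u
      ON u.a = t1.a AND u.tag = t1.kind;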

The other reason for wanting these new Asserts is that the previous
blithe assumption that the results of generate_join_implied_equalities
"necessarily satisfy join_clause_is_movable_into" turns out to be
wrong pre-v16.  If it's still wrong it'd be good to find out.

Per bug #18429 from Benoît Ryder.  The bug as filed was fixed by
commit 2489d76c4, but these changes correlate with the fix we
will need to apply in pre-v16 branches.

Discussion: https://postgr.es/m/18429-8982d4a348cc86c6@postgresql.org
2024-04-16 11:22:51 -04:00
Peter Geoghegan f698704155 Fix nbtree posting list comment.
Oversight in commit 0d861bbb70.
2024-04-16 11:20:41 -04:00
Peter Geoghegan aa1def44c3 Fix nbtree page recycling comment.
Oversight in commit e5d8a99903.
2024-04-16 11:14:36 -04:00
Alexander Korotkov 6377e12a5a revert: Generalize relation analyze in table AM interface
This commit reverts 27bc1772fc and dd1f6b0c17.  Per review by Andres Freund.

Discussion: https://postgr.es/m/20240415201057.khoyxbwwxfgzomeo%40awork3.anarazel.de
2024-04-16 13:14:20 +03:00
David Rowley bea97cd02e Improve test coverage in bump.c
There were no callers of BumpAllocLarge() in the regression tests, so
here we add a sort with a tuple large enough to use that path in bump.c.

Also, BumpStats() wasn't being called, so add a test to sysviews.sql that
queries pg_backend_memory_contexts while a bump context exists in the
backend.
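
For illustration, a hedged sketch of the kind of query involved (the actual
regression test differs): sorting a wide value is the sort of workload that
can reach BumpAllocLarge():

    SELECT x
    FROM (VALUES (repeat('x', 100000)), (repeat('y', 100000))) v(x)
    ORDER BY x;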

Reported-by: Andres Freund
Discussion: https://postgr.es/m/20240414223305.m3i5eju6zylabvln@awork3.anarazel.de
2024-04-16 16:21:31 +12:00
Michael Paquier 768ceeeaa1 Add missing PGDLLIMPORT markings
All backend-side variables should be marked with PGDLLIMPORT, as per
policy introduced in 8ec569479f.  aafc05de1b has forgotten
MyClientSocket, and 05c3980e7f LoadedSSL.

These can be spotted with a command like this one (be careful of not
switching __pg_log_level):
src/tools/mark_pgdllimport.pl $(git ls-files src/include/)

Reported-by: Peter Eisentraut
Discussion: https://postgr.es/m/ZhzkRzrkKhbeQMRm@paquier.xyz
2024-04-16 09:38:46 +09:00
Tom Lane e0df80828a Fix type-checking of RECORD-returning functions in FROM, redux.
Commit 2ed8f9a01 intended to institute a policy that if a
RangeTblFunction has a coldeflist, then the function return type is
certainly RECORD, and we should use the coldeflist as the source of
truth about what the columns of the record type are.  When the
original function has been folded to a constant, inspection of the
constant might give a different answer.  This situation will lead to
a tuple-type-mismatch error at execution, but up until that point we
need to consistently believe the coldeflist, or we'll have problems
from different bits of code reaching different conclusions.
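
For illustration, a hedged example of a RECORD-returning function in FROM
with a coldeflist; per the policy above, the coldeflist (a int, b text), not
any constant-folded form of the call, defines the row type:

    SELECT t.a, t.b
    FROM json_to_record('{"a": 1, "b": "x"}'::json) AS t(a int, b text);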

expandRTE didn't get that memo though, and would try to produce a
tupdesc based on the constant in this situation, leading to an
assertion failure.  (Desultory testing suggests that non-assert
builds often manage to give the expected error, although I also
saw a "cache lookup failed for type 0" error, and it seems at
least possible that a crash could happen.)

Some other callers of get_expr_result_type and get_expr_result_tupdesc
were also being incautious about this.  While none of them seem to
have actual bugs, they're working harder than necessary in this case,
besides which it seems safest to have an explicit policy of not using
those functions on an RTE with a coldeflist.  Adjust the code
accordingly, and add commentary to funcapi.c about this policy.

Also fix an obsolete comment that claimed "get_expr_result_type()
doesn't know how to extract type info from a RECORD constant".
That hasn't been true since commit d57534740.

Per bug #18422 from Alexander Lakhin.
As with the previous commit, back-patch to all supported branches.

Discussion: https://postgr.es/m/18422-89ca86c8eac5246d@postgresql.org
2024-04-15 12:56:56 -04:00
Alvaro Herrera cee8db3f68 ATTACH PARTITION: Don't match a PK with a UNIQUE constraint
When matching constraints in AttachPartitionEnsureIndexes() we weren't
testing the constraint type, which could make a UNIQUE key lacking a
not-null constraint incorrectly satisfy a primary key requirement.  Fix
this by testing that the constraint types match.  (Other possible
mismatches are verified by comparing index properties.)
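
For illustration, a hedged sketch (table names are hypothetical) of the
situation being guarded against; before this fix, part1's UNIQUE index could
be taken to satisfy the parent's primary key even though the column is
nullable:

    CREATE TABLE parted (id int PRIMARY KEY) PARTITION BY RANGE (id);
    CREATE TABLE part1 (id int UNIQUE);            -- unique, but nullable
    ALTER TABLE parted ATTACH PARTITION part1 FOR VALUES FROM (1) TO (1000);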

Discussion: https://postgr.es/m/202402051447.wimb4xmtiiyb@alvherre.pgsql
2024-04-15 15:07:47 +02:00
Alexander Korotkov 9dfcac8e15 Grammar fixes for split/merge partitions code
The fixes relate to comments, error messages, and corresponding expected output
of regression tests.

Discussion: https://postgr.es/m/CAMbWs49DDsknxyoycBqiE72VxzL_sYHF6zqL8dSeNehKPJhkKg%40mail.gmail.com
Discussion: https://postgr.es/m/86bfd241-a58c-479a-9a72-2c67a02becf8%40postgrespro.ru
Discussion: https://postgr.es/m/CAHewXNkGMPU50QG7V6Q60JGFORfo8LfYO1_GCkCa0VWbmB-fEw%40mail.gmail.com
Author: Richard Guo, Dmitry Koval, Tender Wang
2024-04-15 16:00:02 +03:00
Alvaro Herrera c3709100be Fix propagating attnotnull in multiple inheritance
In one of the many strange corner cases of multiple inheritance being
used, commit b0e96f3119 missed a CommandCounterIncrement() call after
updating the attnotnull flag during ALTER TABLE ADD COLUMN, which caused
an update of a catalog tuple to be attempted twice in the same command, giving
rise to a "tuple already updated by self" error.  Add the missing call
to solve that, and a test case that reproduces the scenario.

As a (perhaps surprising) secondary effect, this CCI addition triggers
another behavior change: when a primary key is added to a parent
partitioned table and the column in an existing partition does not have
a not-null constraint, we no longer error out.  This will probably be a
welcome change by some users, and I think it's unlikely that anybody
will miss the old behavior.
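
For illustration, one possible shape of such a multiple-inheritance scenario
(hypothetical; the committed test case may differ), where the grandchild
receives the new not-null column through two inheritance paths:

    CREATE TABLE gp (a int);
    CREATE TABLE p1 () INHERITS (gp);
    CREATE TABLE p2 () INHERITS (gp);
    CREATE TABLE c () INHERITS (p1, p2);
    ALTER TABLE gp ADD COLUMN b int NOT NULL DEFAULT 0;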

Reported-by: Alexander Lakhin <exclusion@gmail.com>
Discussion: http://postgr.es/m/045dec3f-9b3d-aa44-0c99-85f6992306c7@gmail.com
2024-04-15 12:20:56 +02:00
Peter Eisentraut 6ff21c0530 psql: Make output of \dD more stable
\dD showed domain check constraints in arbitrary order, which can
cause regression test failures, as was exposed by commit
9895b35cb8.  To fix, order the constraints by conname, which matches
what psql does in other queries listing constraints.
2024-04-15 09:28:48 +02:00
Peter Eisentraut 9895b35cb8 Fix ALTER DOMAIN NOT NULL syntax
This addresses a few problems with commit e5da0fe3c2 ("Catalog domain
not-null constraints").

In CREATE DOMAIN, a NOT NULL constraint looks like

    CREATE DOMAIN d1 AS int [ CONSTRAINT conname ] NOT NULL

(Before e5da0fe3c2, the constraint name was accepted but ignored.)

But in ALTER DOMAIN, a NOT NULL constraint looks like

    ALTER DOMAIN d1 ADD [ CONSTRAINT conname ] NOT NULL VALUE

where VALUE appears where, for a table constraint, the column name would be.
(This works as of e5da0fe3c2.  Before e5da0fe3c2, this syntax
resulted in an internal error.)

But for domains, this latter syntax is confusing and needlessly
inconsistent between CREATE and ALTER.  So this changes it to just

    ALTER DOMAIN d1 ADD [ CONSTRAINT conname ] NOT NULL

(None of these syntaxes are per SQL standard; we are just living with
the bits of inconsistency that have built up over time.)

In passing, this also changes the psql \dD output to not show not-null
constraints in the column "Check", since it's already shown in the
column "Nullable".  This has also been off since e5da0fe3c2.

Reviewed-by: jian he <jian.universality@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/9ec24d7b-633d-463a-84c6-7acff769c9e8%40eisentraut.org
2024-04-15 08:34:45 +02:00
Heikki Linnakangas d21d61b96f Put back initialization of 'sslmode', to silence Coverity
Coverity pointed out that the function checks for conn->sslmode !=
NULL, which implies that it might be NULL, but later we access it
without a NULL-check anyway. It doesn't know that it is in fact always
initialized earlier, in conninfo_add_defaults(), and hence the
NULL-check is not necessary. However, there is a lot of distance
between conninfo_add_defaults() and pqConnectOptions2(), so it's not
surprising that it doesn't see that. Put back the initialization code,
as it existed before commit 05fd30c0e7, to silence the warning.

In the long run, I'd like to refactor the libpq options handling and
initialization code. It seems silly to strdup() and copy strings for
things like sslmode that have a limited set of possible values; it
should be an enum. But that's for another day.
2024-04-14 23:02:43 +03:00
Tomas Vondra cd4b6af620 Fix unnecessary padding in incremental backups
Commit 10e3226ba1 added padding to incremental backups to ensure the
block data is properly aligned. The code in sendFile() however failed to
consider that the header may be a multiple of BLCKSZ and thus already
aligned, adding a full BLCKSZ of unnecessary padding.

Not only did this make the incremental file a bit larger, but the other
places calculating the amount of padding correctly concluded that none was
needed and did not include it in the formula. This resulted in pg_basebackup
getting confused while parsing the data stream, trying to access files
with invalid filenames (e.g. containing binary data) and failing.
2024-04-14 20:37:49 +02:00
Tomas Vondra bb616ed3e6 Use the correct PG_DETOAST_DATUM macro in BRIN
Commit 6bcda4a721 replaced PG_DETOAST_DATUM with PG_DETOAST_DATUM_PACKED
in two BRIN output functions, for minmax-multi and bloom opclasses. But
this is incorrect - the code is accessing the data through structs that
already include a 4B header, so the detoast needs to match that. But the
PACKED macro may keep the 1B header, which means the struct fields will
point to incorrect data.

Backpatch-through: 16
Discussion: https://postgr.es/m/1df00a66-db5a-4e66-809a-99b386a06d86%40enterprisedb.com
2024-04-14 18:19:58 +02:00
Tomas Vondra 2f20ced1eb Update nbits_set in brin_bloom_union
Properly update the number of bits set in the bitmap after merging the
filters in brin_bloom_union.

This is mostly harmless, as the counter is used only in the output
function, which means pageinspect may show incorrect information about
the BRIN summary. The counter does not affect correctness.

Discovered while adding a regression test comparing indexes built with
and without parallelism. The parallel index builds exercise the union
procedure when merging results from workers, which is otherwise very
hard to do in a test; this is why the issue went unnoticed until now.
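
For illustration, a hedged example (index name and block number are
hypothetical) of inspecting a BRIN summary with pageinspect, the consumer of
the counter mentioned above:

    CREATE EXTENSION IF NOT EXISTS pageinspect;
    SELECT itemoffset, value
    FROM brin_page_items(get_raw_page('idx_bloom', 2), 'idx_bloom');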

Backpatch through 14, where the BRIN bloom opclasses were introduced.

Backpatch-through: 14
Discussion: https://postgr.es/m/1df00a66-db5a-4e66-809a-99b386a06d86%40enterprisedb.com
2024-04-14 18:07:15 +02:00
Noah Misch 9358297431 freespace: Don't return blocks past the end of the main fork.
GetPageWithFreeSpace() callers assume the returned block exists in the
main fork, failing with "could not read block" errors if that doesn't
hold.  Make that assumption reliable now.  It hadn't been guaranteed,
due to the weak WAL and data ordering of participating components.  Most
operations on the fsm fork are not WAL-logged.  Relation extension is
not WAL-logged.  Hence, an fsm-fork block on disk can reference a
main-fork block that no WAL record has initialized.  That could happen
after an OS crash, a replica promote, or a PITR restore.  wal_log_hints
makes the trouble easier to hit; a replica promote or PITR ending just
after a relevant fsm-fork FPI_FOR_HINT may yield this broken state.  The
v16 RelationAddBlocks() mechanism also makes the trouble easier to hit,
since it bulk-extends even without extension lock waiters.  Commit
917dc7d239 stopped trouble around
truncation, but vectors involving PageIsNew() pages remained.

This implementation adds a RelationGetNumberOfBlocks() call when the
cached relation size doesn't confirm a block exists.  We've been unable
to identify a benchmark that slows materially, but this may show up as
additional time in lseek().  An alternative without that overhead would
be a new ReadBufferMode such that ReadBufferExtended() returns NULL
after a 0-byte read, with all other errors handled normally.  However,
each GetFreeIndexPage() caller would then need code for the return-NULL
case.  Back-patch to v14, due to earlier versions not caching relation
size and the absence of a pre-v16 problem report.

Ronan Dunklau.  Reported by Ronan Dunklau.

Discussion: https://postgr.es/m/1878547.tdWV9SEqCh%40aivenlaptop
2024-04-13 08:34:20 -07:00
Heikki Linnakangas 4cc1c76fe9 Document PG_TEST_EXTRA=libpq_encryption and also check 'kerberos'
In the libpq encryption negotiation tests, don't run the GSSAPI tests
unless PG_TEST_EXTRA='kerberos' is also set. That makes it possible to
still run most of the tests when GSSAPI support is compiled in but
there is no MIT Kerberos installation.
2024-04-12 19:52:39 +03:00
Heikki Linnakangas 65dfe9d167 Move libpq encryption negotiation tests
The test targets libpq's options, so 'src/interfaces/libpq/t' is
a more natural place for it.

While doing this, I noticed that I had missed adding the
libpq_encryption subdir to the Makefile. That's why this commit only
needs to remove it from the meson.build file.

Per Peter Eisentraut's suggestion.

Discussion: https://www.postgresql.org/message-id/09d4bf5d-d0fa-4c66-a1d7-5ec757609646@eisentraut.org
2024-04-12 19:52:37 +03:00
Heikki Linnakangas 084cae5513 Fix compilation with --with-gssapi --without-openssl
The #define is spelled ENABLE_GSS, not USE_GSS. Introduced in commit
05fd30c0e7, reported by Thomas Munro.

Discussion: https://www.postgresql.org/message-id/CA%2BhUKG%2BHRTtB%2Bx%2BKKKj_cfX6sNhbeGuqmGxjGMwdVPG7YGFP8w@mail.gmail.com
2024-04-12 19:52:34 +03:00
Heikki Linnakangas 0a5f229189 Fix libpq_encryption tests when compiled without SSL support
The test suite correctly skipped tests involving SSL in the server when SSL
support was not compiled in, but even when SSL is not enabled in the
server and the connection is established without SSL, libpq behaves
differently in many of the test scenarios when libpq is compiled
without SSL support. For example, with sslmode=prefer, if libpq is
compiled with SSL support it will attempt to use SSL, but without SSL
support it will try authenticating in plaintext mode directly. The
expected test output didn't take that into account.

Discussion: https://www.postgresql.org/message-id/CA%2BhUKG%2BHRTtB%2Bx%2BKKKj_cfX6sNhbeGuqmGxjGMwdVPG7YGFP8w@mail.gmail.com
2024-04-12 19:52:28 +03:00
Andrew Dunstan 929c05774b Don't allocate large buffer on the stack in pg_verifybackup
Per complaint from Andres Freund. Follow his suggestion to allocate the
buffer once in the calling routine instead.

Also make a tiny indentation improvement.

Discussion: https://postgr.es/m/20240411190147.a3yries632olfcgg@awork3.anarazel.de
2024-04-12 10:52:25 -04:00
Andrew Dunstan 42fa4b6601 Assorted minor cleanups in the test_json_parser module
Per gripes from Michael Paquier

Discussion: https://postgr.es/m/ZhTQ6_w1vwOhqTQI@paquier.xyz

Along the way, also clean up a handful of typos in 3311ea86ed and
ea7b4e9a2a, found by Alexander Lakhin, and a couple of stylistic
snafus noted by Daniel Westermann and Daniel Gustafsson.
2024-04-12 10:32:30 -04:00
Andrew Dunstan b8a7bfa333 Shrink test file for test_json_parser module
Also delete live URLs

Jacob Champion

Discussion: https://postgr.es/m/CAOYmi+mtH=V1wZKAOauCd5QqQWr61hnXMJbJ9h-CZXAa1JXd3w@mail.gmail.com
2024-04-12 10:32:30 -04:00
Andrew Dunstan daf554dbea Add a TAP test for test_json_parser_perf
This just makes sure the test can run with a single iteration. A real
performance test would test with many more.
2024-04-12 10:32:30 -04:00
Andrew Dunstan 661ab4e185 Fix some memory leaks associated with parsing json and manifests
Coverity complained about not freeing some memory associated with
incrementally parsing backup manifests. To fix that, provide and use a new
shutdown function for the JsonManifestParseIncrementalState object, in
line with a suggestion from Tom Lane.

While analysing the problem, I noticed a buglet in freeing memory for
incremental json lexers. To fix that, remove a bogus condition on
freeing the memory allocated for them.
2024-04-12 10:32:30 -04:00
David Rowley b9ecefecc7 Fix recently introduced typo in code comment
Reported-by: Richard Guo
Discussion: https://postgr.es/m/CAMbWs49kAsZUsj7-0SBLvE9+uKz0RCqMEmM3NVytc1YvS8sTrQ@mail.gmail.com
2024-04-12 23:15:52 +12:00
Amit Kapila 3741f2a09d Fix the review comments and a bug in the slot sync code.
Ensure that when updating the catalog_xmin of the synced slots, it is
first written to disk before changing the in-memory value
(effective_catalog_xmin). This is to prevent a scenario where the
in-memory value change triggers a vacuum to remove catalog tuples before
the catalog_xmin is written to disk. In the event of a crash before the
catalog_xmin is persisted, we would not know that some required catalog
tuples have been removed and the synced slot would be invalidated.

Change the sanity check to ensure that remote_slot's confirmed_flush LSN
can't precede the local/synced slot during slot sync. Note that the
restart_lsn of the synced/local slot can be ahead of remote_slot. This can
happen when slot advancing machinery finds a running xacts record after
reaching the consistent state at a later point than the primary where it
serializes the snapshot and updates the restart_lsn.

Make the check to sync slots robust by allowing a sync only when the
confirmed_lsn, restart_lsn, or catalog_xmin of the remote slot is ahead of
the synced/local slot.

Reported-by: Amit Kapila and Shveta Malik
Author: Hou Zhijie, Shveta Malik
Reviewed-by: Amit Kapila, Bertrand Drouvot
Discussion: https://postgr.es/m/OS0PR01MB57162B67D3CB01B2756FBA6D94062@OS0PR01MB5716.jpnprd01.prod.outlook.com
Discussion: https://postgr.es/m/CAJpy0uCSS5zmdyUXhvw41HSdTbRqX1hbYqkOfHNj7qQ+2zn0AQ@mail.gmail.com
2024-04-12 15:10:41 +05:30
David Rowley 3af7040985 Fix IS [NOT] NULL qual optimization for inheritance tables
b262ad440 added code to have the planner remove redundant IS NOT NULL
quals and eliminate needless scans for IS NULL quals on tables where the
qual's column has a NOT NULL constraint.

That commit failed to consider that an inheritance parent table could
have differing NOT NULL constraints between the parent and the child.
This caused issues: if we eliminated a qual on the parent, then when
applying the quals to child tables in apply_child_basequals(), the qual
might not have been added to the parent's baserestrictinfo.
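
For illustration, a hedged sketch (table names are hypothetical) of a
traditional-inheritance setup where parent and child disagree on NOT NULL;
the IS NOT NULL qual is redundant for c but not for p, so it must not be
dropped at the parent level:

    CREATE TABLE p (a int);
    CREATE TABLE c (a int NOT NULL);
    ALTER TABLE c INHERIT p;

    SELECT * FROM p WHERE a IS NOT NULL;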

Here we fix this by not applying the optimization to remove redundant
quals to RelOptInfos belonging to inheritance parents and applying the
optimization again in apply_child_basequals().  Effectively, this means
that the parent and child are considered independently as the parent has
both an inh=true and inh=false RTE and we still apply the optimization
to the RelOptInfo corresponding to the inh=false RTE.

We're able to still apply the optimization in add_base_clause_to_rel()
for partitioned tables as the NULLability of partitions must match that
of their parent.  And, if we ever expand restriction_is_always_false()
and restriction_is_always_true() to handle partition constraints then we
can apply the same logic as, even in multi-level partitioned tables,
there's no way to route values to a partition when the qual does not
match the partition qual of the partitioned table's parent partition.
The same is true for CHECK constraints as those must also match between
parent partitioned tables and their partitions.

Author: Richard Guo, David Rowley
Discussion: https://postgr.es/m/CAMbWs4930gQSZmjR7aANzEapdy61gCg6z8dT-kAEYD0sYWKPdQ@mail.gmail.com
2024-04-12 20:07:53 +12:00
Alexander Korotkov 772faafca1 Revert: Implement pg_wal_replay_wait() stored procedure
This commit reverts 06c418e163, e37662f221, bf1e650806, 25f42429e2,
ee79928441, and 74eaf66f98 per review by Heikki Linnakangas.

Discussion: https://postgr.es/m/b155606b-e744-4218-bda5-29379779da1a%40iki.fi
2024-04-11 17:28:15 +03:00
Alexander Korotkov 922c4c461d Revert: Allow table AM to store complex data structures in rd_amcache
This commit reverts 02eb07ea89 per review by Andres Freund.

Discussion: https://postgr.es/m/20240410165236.rwyrny7ihi4ddxw4%40awork3.anarazel.de
2024-04-11 16:02:49 +03:00