Make heap TID a tiebreaker nbtree index column.

Make nbtree treat all index tuples as having a heap TID attribute.
Index searches can distinguish duplicates by heap TID, since heap TID is
always guaranteed to be unique.  This general approach has numerous
benefits for performance, and is prerequisite to teaching VACUUM to
perform "retail index tuple deletion".

Naively adding a new attribute to every pivot tuple has unacceptable
overhead (it bloats internal pages), so suffix truncation of pivot
tuples is added.  This will usually truncate away the "extra" heap TID
attribute from pivot tuples during a leaf page split, and may also
truncate away additional user attributes.  This can increase fan-out,
especially in a multi-column index.  Truncation can only occur at the
attribute granularity, which isn't particularly effective, but works
well enough for now.  A future patch may add support for truncating
"within" text attributes by generating truncated key values using new
opclass infrastructure.
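
To make the attribute-granularity rule concrete, here is a small standalone
C sketch (illustrative only; the ToyTuple type and keep_natts() helper are
invented for this example and are not nbtree code).  It shows how a new
high key needs only enough leading key attributes to separate the last
tuple on the left half of a split from the first tuple on the right half,
and needs the heap TID only when every key attribute is equal:

/*
 * Illustrative sketch only (not nbtree code): the new high key for a leaf
 * page split needs just enough leading key attributes to distinguish the
 * last tuple on the left half ("lastleft") from the first tuple on the
 * right half ("firstright").  Only when every key attribute is equal must
 * the heap TID tiebreaker be retained as well.
 */
#include <stdbool.h>
#include <stdio.h>

#define NKEYATTS 3

typedef struct ToyTuple
{
    int     keys[NKEYATTS];     /* key attribute values */
    int     heap_tid;           /* stand-in for the heap TID tiebreaker */
} ToyTuple;

/*
 * Return how many leading key attributes the new high key must keep: the
 * attributes up to and including the first one where the enclosing tuples
 * differ.  Sets *need_tid when the heap TID itself has to break the tie.
 */
static int
keep_natts(const ToyTuple *lastleft, const ToyTuple *firstright, bool *need_tid)
{
    for (int attnum = 0; attnum < NKEYATTS; attnum++)
    {
        if (lastleft->keys[attnum] != firstright->keys[attnum])
        {
            *need_tid = false;
            return attnum + 1;  /* this prefix already separates the halves */
        }
    }

    /* All key attributes are equal: heap TID must be kept in the pivot */
    *need_tid = true;
    return NKEYATTS;
}

int
main(void)
{
    ToyTuple    lastleft = {{10, 20, 30}, 7};
    ToyTuple    firstright = {{10, 25, 5}, 2};
    bool        need_tid;
    int         natts = keep_natts(&lastleft, &firstright, &need_tid);

    /* Prints: new pivot keeps 2 of 3 attribute(s), heap TID kept: no */
    printf("new pivot keeps %d of %d attribute(s), heap TID kept: %s\n",
           natts, NKEYATTS, need_tid ? "yes" : "no");
    return 0;
}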

Only new indexes (BTREE_VERSION 4 indexes) will have insertions that
treat heap TID as a tiebreaker attribute, or will have pivot tuples
undergo suffix truncation during a leaf page split (on-disk
compatibility with versions 2 and 3 is preserved).  Upgrades to version
4 cannot be performed on-the-fly, unlike upgrades from version 2 to
version 3.  contrib/amcheck continues to work with version 2 and 3
indexes, while also enforcing stricter invariants when verifying version
4 indexes.  These stricter invariants are the same invariants described
by "3.1.12 Sequencing" from the Lehman and Yao paper.

A later patch will enhance the logic used by nbtree to pick a split
point.  This patch is likely to negatively impact performance without
smarter choices around the precise point to split leaf pages at.  Making
these two mostly-distinct sets of enhancements into distinct commits
seems like it might clarify their design, even though neither commit is
particularly useful on its own.

The maximum allowed size of new tuples is reduced by an amount equal to
the space required to store an extra MAXALIGN()'d TID in a new high key
during leaf page splits.  The user-facing definition of the "1/3 of a
page" restriction is already imprecise, and so does not need to be
revised.  However, there should be a compatibility note in the v12
release notes.
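
For a rough sense of the numbers involved, the following standalone sketch
redoes the size-limit arithmetic with constants that approximate a 64-bit
build using the default 8192-byte block size.  The constants, and the
resulting 2712-byte versus 2704-byte figures, are illustrative assumptions
rather than values quoted from the patch:

/*
 * Back-of-the-envelope sketch of the reduced limit.  The constants below
 * approximate a 64-bit build with 8192-byte pages and are illustrative
 * assumptions, not values taken from the patch.
 */
#include <stdio.h>

#define ALIGNOF             8
#define MAXALIGN(len)       (((len) + (ALIGNOF - 1)) & ~(ALIGNOF - 1))
#define MAXALIGN_DOWN(len)  ((len) & ~(ALIGNOF - 1))

int
main(void)
{
    int     blcksz = 8192;      /* default block size */
    int     page_header = 24;   /* page header, 64-bit build */
    int     item_id = 4;        /* per-item line pointer */
    int     heap_tid = 6;       /* an ItemPointerData-sized heap TID */
    int     special = 16;       /* MAXALIGN'd nbtree special space */

    /* Version 3 rule: three items plus line pointers must fit on a page */
    int     v3_limit = MAXALIGN_DOWN((blcksz -
                                      MAXALIGN(page_header + 3 * item_id) -
                                      special) / 3);

    /*
     * Version 4 rule: additionally set aside room for a MAXALIGN()'d heap
     * TID per item, so that suffix truncation can always append an explicit
     * heap TID to a new high key during a leaf page split.
     */
    int     v4_limit = MAXALIGN_DOWN((blcksz -
                                      MAXALIGN(page_header + 3 * item_id +
                                               3 * heap_tid) -
                                      special) / 3);

    /* With these constants: v3 limit 2712, v4 limit 2704, delta 8 */
    printf("v3 limit %d, v4 limit %d, delta %d\n",
           v3_limit, v4_limit, v3_limit - v4_limit);
    return 0;
}

The delta with these assumed constants is the MAXALIGN()'d TID mentioned
above.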

Author: Peter Geoghegan
Reviewed-By: Heikki Linnakangas, Alexander Korotkov
Discussion: https://postgr.es/m/CAH2-WzkVb0Kom=R+88fDFb=JSxZMFvbHVC6Mn9LJ2n=X=kS-Uw@mail.gmail.com
Peter Geoghegan 2019-03-20 10:04:01 -07:00
parent e5adcb789d
commit dd299df818
29 changed files with 1619 additions and 559 deletions

View File

@ -130,9 +130,12 @@ SELECT bt_index_parent_check('bttest_multi_idx', true);
--
INSERT INTO delete_test_table SELECT i, 1, 2, 3 FROM generate_series(1,80000) i;
ALTER TABLE delete_test_table ADD PRIMARY KEY (a,b,c,d);
-- Delete many entries, and vacuum. This causes page deletions.
DELETE FROM delete_test_table WHERE a > 40000;
VACUUM delete_test_table;
DELETE FROM delete_test_table WHERE a > 10;
-- Delete most entries, and vacuum, deleting internal pages and creating "fast
-- root"
DELETE FROM delete_test_table WHERE a < 79990;
VACUUM delete_test_table;
SELECT bt_index_parent_check('delete_test_table_pkey', true);
bt_index_parent_check

View File

@ -82,9 +82,12 @@ SELECT bt_index_parent_check('bttest_multi_idx', true);
--
INSERT INTO delete_test_table SELECT i, 1, 2, 3 FROM generate_series(1,80000) i;
ALTER TABLE delete_test_table ADD PRIMARY KEY (a,b,c,d);
-- Delete many entries, and vacuum. This causes page deletions.
DELETE FROM delete_test_table WHERE a > 40000;
VACUUM delete_test_table;
DELETE FROM delete_test_table WHERE a > 10;
-- Delete most entries, and vacuum, deleting internal pages and creating "fast
-- root"
DELETE FROM delete_test_table WHERE a < 79990;
VACUUM delete_test_table;
SELECT bt_index_parent_check('delete_test_table_pkey', true);

View File

@ -46,6 +46,8 @@ PG_MODULE_MAGIC;
* block per level, which is bound by the range of BlockNumber:
*/
#define InvalidBtreeLevel ((uint32) InvalidBlockNumber)
#define BTreeTupleGetNKeyAtts(itup, rel) \
Min(IndexRelationGetNumberOfKeyAttributes(rel), BTreeTupleGetNAtts(itup, rel))
/*
* State associated with verifying a B-Tree index
@ -67,6 +69,8 @@ typedef struct BtreeCheckState
/* B-Tree Index Relation and associated heap relation */
Relation rel;
Relation heaprel;
/* rel is heapkeyspace index? */
bool heapkeyspace;
/* ShareLock held on heap/index, rather than AccessShareLock? */
bool readonly;
/* Also verifying heap has no unindexed tuples? */
@ -123,7 +127,7 @@ static void bt_index_check_internal(Oid indrelid, bool parentcheck,
bool heapallindexed);
static inline void btree_index_checkable(Relation rel);
static void bt_check_every_level(Relation rel, Relation heaprel,
bool readonly, bool heapallindexed);
bool heapkeyspace, bool readonly, bool heapallindexed);
static BtreeLevel bt_check_level_from_leftmost(BtreeCheckState *state,
BtreeLevel level);
static void bt_target_page_check(BtreeCheckState *state);
@ -138,17 +142,22 @@ static IndexTuple bt_normalize_tuple(BtreeCheckState *state,
IndexTuple itup);
static inline bool offset_is_negative_infinity(BTPageOpaque opaque,
OffsetNumber offset);
static inline bool invariant_l_offset(BtreeCheckState *state, BTScanInsert key,
OffsetNumber upperbound);
static inline bool invariant_leq_offset(BtreeCheckState *state,
BTScanInsert key,
OffsetNumber upperbound);
static inline bool invariant_geq_offset(BtreeCheckState *state,
BTScanInsert key,
OffsetNumber lowerbound);
static inline bool invariant_leq_nontarget_offset(BtreeCheckState *state,
BTScanInsert key,
Page nontarget,
OffsetNumber upperbound);
static inline bool invariant_g_offset(BtreeCheckState *state, BTScanInsert key,
OffsetNumber lowerbound);
static inline bool invariant_l_nontarget_offset(BtreeCheckState *state,
BTScanInsert key,
Page nontarget,
OffsetNumber upperbound);
static Page palloc_btree_page(BtreeCheckState *state, BlockNumber blocknum);
static inline BTScanInsert bt_mkscankey_pivotsearch(Relation rel,
IndexTuple itup);
static inline ItemPointer BTreeTupleGetHeapTIDCareful(BtreeCheckState *state,
IndexTuple itup, bool nonpivot);
/*
* bt_index_check(index regclass, heapallindexed boolean)
@ -205,6 +214,7 @@ bt_index_check_internal(Oid indrelid, bool parentcheck, bool heapallindexed)
Oid heapid;
Relation indrel;
Relation heaprel;
bool heapkeyspace;
LOCKMODE lockmode;
if (parentcheck)
@ -255,7 +265,9 @@ bt_index_check_internal(Oid indrelid, bool parentcheck, bool heapallindexed)
btree_index_checkable(indrel);
/* Check index, possibly against table it is an index on */
bt_check_every_level(indrel, heaprel, parentcheck, heapallindexed);
heapkeyspace = _bt_heapkeyspace(indrel);
bt_check_every_level(indrel, heaprel, heapkeyspace, parentcheck,
heapallindexed);
/*
* Release locks early. That's ok here because nothing in the called
@ -325,8 +337,8 @@ btree_index_checkable(Relation rel)
* parent/child check cannot be affected.)
*/
static void
bt_check_every_level(Relation rel, Relation heaprel, bool readonly,
bool heapallindexed)
bt_check_every_level(Relation rel, Relation heaprel, bool heapkeyspace,
bool readonly, bool heapallindexed)
{
BtreeCheckState *state;
Page metapage;
@ -347,6 +359,7 @@ bt_check_every_level(Relation rel, Relation heaprel, bool readonly,
state = palloc0(sizeof(BtreeCheckState));
state->rel = rel;
state->heaprel = heaprel;
state->heapkeyspace = heapkeyspace;
state->readonly = readonly;
state->heapallindexed = heapallindexed;
@ -807,7 +820,8 @@ bt_target_page_check(BtreeCheckState *state)
* doesn't contain a high key, so nothing to check
*/
if (!P_RIGHTMOST(topaque) &&
!_bt_check_natts(state->rel, state->target, P_HIKEY))
!_bt_check_natts(state->rel, state->heapkeyspace, state->target,
P_HIKEY))
{
ItemId itemid;
IndexTuple itup;
@ -840,6 +854,7 @@ bt_target_page_check(BtreeCheckState *state)
IndexTuple itup;
size_t tupsize;
BTScanInsert skey;
bool lowersizelimit;
CHECK_FOR_INTERRUPTS();
@ -866,7 +881,8 @@ bt_target_page_check(BtreeCheckState *state)
errhint("This could be a torn page problem.")));
/* Check the number of index tuple attributes */
if (!_bt_check_natts(state->rel, state->target, offset))
if (!_bt_check_natts(state->rel, state->heapkeyspace, state->target,
offset))
{
char *itid,
*htid;
@ -907,7 +923,56 @@ bt_target_page_check(BtreeCheckState *state)
continue;
/* Build insertion scankey for current page offset */
skey = _bt_mkscankey(state->rel, itup);
skey = bt_mkscankey_pivotsearch(state->rel, itup);
/*
* Make sure tuple size does not exceed the relevant BTREE_VERSION
* specific limit.
*
* BTREE_VERSION 4 (which introduced heapkeyspace rules) requisitioned
* a small amount of space from BTMaxItemSize() in order to ensure
* that suffix truncation always has enough space to add an explicit
* heap TID back to a tuple -- we pessimistically assume that every
* newly inserted tuple will eventually need to have a heap TID
* appended during a future leaf page split, when the tuple becomes
* the basis of the new high key (pivot tuple) for the leaf page.
*
* Since the reclaimed space is reserved for that purpose, we must not
* enforce the slightly lower limit when the extra space has been used
* as intended. In other words, there is only a cross-version
* difference in the limit on tuple size within leaf pages.
*
* Still, we're particular about the details within BTREE_VERSION 4
* internal pages. Pivot tuples may only use the extra space for its
* designated purpose. Enforce the lower limit for pivot tuples when
* an explicit heap TID isn't actually present. (In all other cases
* suffix truncation is guaranteed to generate a pivot tuple that's no
* larger than the first right tuple provided to it by its caller.)
*/
lowersizelimit = skey->heapkeyspace &&
(P_ISLEAF(topaque) || BTreeTupleGetHeapTID(itup) == NULL);
if (tupsize > (lowersizelimit ? BTMaxItemSize(state->target) :
BTMaxItemSizeNoHeapTid(state->target)))
{
char *itid,
*htid;
itid = psprintf("(%u,%u)", state->targetblock, offset);
htid = psprintf("(%u,%u)",
ItemPointerGetBlockNumberNoCheck(&(itup->t_tid)),
ItemPointerGetOffsetNumberNoCheck(&(itup->t_tid)));
ereport(ERROR,
(errcode(ERRCODE_INDEX_CORRUPTED),
errmsg("index row size %zu exceeds maximum for index \"%s\"",
tupsize, RelationGetRelationName(state->rel)),
errdetail_internal("Index tid=%s points to %s tid=%s page lsn=%X/%X.",
itid,
P_ISLEAF(topaque) ? "heap" : "index",
htid,
(uint32) (state->targetlsn >> 32),
(uint32) state->targetlsn)));
}
/* Fingerprint leaf page tuples (those that point to the heap) */
if (state->heapallindexed && P_ISLEAF(topaque) && !ItemIdIsDead(itemid))
@ -941,9 +1006,35 @@ bt_target_page_check(BtreeCheckState *state)
* grandparents (as well as great-grandparents, and so on). We don't
* go to those lengths because that would be prohibitively expensive,
* and probably not markedly more effective in practice.
*
* On the leaf level, we check that the key is <= the highkey.
* However, on non-leaf levels we check that the key is < the highkey,
* because the high key is "just another separator" rather than a copy
* of some existing key item; we expect it to be unique among all keys
* on the same level. (Suffix truncation will sometimes produce a
* leaf highkey that is an untruncated copy of the lastleft item, but
* never any other item, which necessitates weakening the leaf level
* check to <=.)
*
* Full explanation for why a highkey is never truly a copy of another
* item from the same level on internal levels:
*
* While the new left page's high key is copied from the first offset
* on the right page during an internal page split, that's not the
* full story. In effect, internal pages are split in the middle of
* the firstright tuple, not between the would-be lastleft and
* firstright tuples: the firstright key ends up on the left side as
* left's new highkey, and the firstright downlink ends up on the
* right side as right's new "negative infinity" item. The negative
* infinity tuple is truncated to zero attributes, so we're only left
* with the downlink. In other words, the copying is just an
* implementation detail of splitting in the middle of a (pivot)
* tuple. (See also: "Notes About Data Representation" in the nbtree
* README.)
*/
if (!P_RIGHTMOST(topaque) &&
!invariant_leq_offset(state, skey, P_HIKEY))
!(P_ISLEAF(topaque) ? invariant_leq_offset(state, skey, P_HIKEY) :
invariant_l_offset(state, skey, P_HIKEY)))
{
char *itid,
*htid;
@ -969,11 +1060,10 @@ bt_target_page_check(BtreeCheckState *state)
* * Item order check *
*
* Check that items are stored on page in logical order, by checking
* current item is less than or equal to next item (if any).
* current item is strictly less than next item (if any).
*/
if (OffsetNumberNext(offset) <= max &&
!invariant_leq_offset(state, skey,
OffsetNumberNext(offset)))
!invariant_l_offset(state, skey, OffsetNumberNext(offset)))
{
char *itid,
*htid,
@ -1036,7 +1126,7 @@ bt_target_page_check(BtreeCheckState *state)
rightkey = bt_right_page_check_scankey(state);
if (rightkey &&
!invariant_geq_offset(state, rightkey, max))
!invariant_g_offset(state, rightkey, max))
{
/*
* As explained at length in bt_right_page_check_scankey(),
@ -1214,9 +1304,9 @@ bt_right_page_check_scankey(BtreeCheckState *state)
* continued existence of target block as non-ignorable (not half-dead or
* deleted) implies that target page was not merged into from the right by
* deletion; the key space at or after target never moved left. Target's
* parent either has the same downlink to target as before, or a <=
* parent either has the same downlink to target as before, or a <
* downlink due to deletion at the left of target. Target either has the
* same highkey as before, or a highkey <= before when there is a page
* same highkey as before, or a highkey < before when there is a page
* split. (The rightmost concurrently-split-from-target-page page will
* still have the same highkey as target was originally found to have,
* which for our purposes is equivalent to target's highkey itself never
@ -1305,7 +1395,7 @@ bt_right_page_check_scankey(BtreeCheckState *state)
* memory remaining allocated.
*/
firstitup = (IndexTuple) PageGetItem(rightpage, rightitem);
return _bt_mkscankey(state->rel, firstitup);
return bt_mkscankey_pivotsearch(state->rel, firstitup);
}
/*
@ -1368,7 +1458,8 @@ bt_downlink_check(BtreeCheckState *state, BTScanInsert targetkey,
/*
* Verify child page has the downlink key from target page (its parent) as
* a lower bound.
* a lower bound; downlink must be strictly less than all keys on the
* page.
*
* Check all items, rather than checking just the first and trusting that
* the operator class obeys the transitive law.
@ -1417,14 +1508,29 @@ bt_downlink_check(BtreeCheckState *state, BTScanInsert targetkey,
{
/*
* Skip comparison of target page key against "negative infinity"
* item, if any. Checking it would indicate that it's not an upper
* bound, but that's only because of the hard-coding within
* _bt_compare().
* item, if any. Checking it would indicate that it's not a strict
* lower bound, but that's only because of the hard-coding for
* negative infinity items within _bt_compare().
*
* If nbtree didn't truncate negative infinity tuples during internal
* page splits then we'd expect child's negative infinity key to be
* equal to the scankey/downlink from target/parent (it would be a
* "low key" in this hypothetical scenario, and so it would still need
* to be treated as a special case here).
*
* Negative infinity items can be thought of as a strict lower bound
* that works transitively, with the last non-negative-infinity pivot
* followed during a descent from the root as its "true" strict lower
* bound. Only a small number of negative infinity items are truly
* negative infinity; those that are the first items of leftmost
* internal pages. In more general terms, a negative infinity item is
* only negative infinity with respect to the subtree that the page is
* at the root of.
*/
if (offset_is_negative_infinity(copaque, offset))
continue;
if (!invariant_leq_nontarget_offset(state, targetkey, child, offset))
if (!invariant_l_nontarget_offset(state, targetkey, child, offset))
ereport(ERROR,
(errcode(ERRCODE_INDEX_CORRUPTED),
errmsg("down-link lower bound invariant violated for index \"%s\"",
@ -1856,6 +1962,64 @@ offset_is_negative_infinity(BTPageOpaque opaque, OffsetNumber offset)
return !P_ISLEAF(opaque) && offset == P_FIRSTDATAKEY(opaque);
}
/*
* Does the invariant hold that the key is strictly less than a given upper
* bound offset item?
*
* If this function returns false, convention is that caller throws error due
* to corruption.
*/
static inline bool
invariant_l_offset(BtreeCheckState *state, BTScanInsert key,
OffsetNumber upperbound)
{
int32 cmp;
Assert(key->pivotsearch);
/* pg_upgrade'd indexes may legally have equal sibling tuples */
if (!key->heapkeyspace)
return invariant_leq_offset(state, key, upperbound);
cmp = _bt_compare(state->rel, key, state->target, upperbound);
/*
* _bt_compare() is capable of determining that a scankey with a
* filled-out attribute is greater than pivot tuples where the comparison
* is resolved at a truncated attribute (value of attribute in pivot is
* minus infinity). However, it is not capable of determining that a
* scankey is _less than_ a tuple on the basis of a comparison resolved at
* _scankey_ minus infinity attribute. Complete an extra step to simulate
* having minus infinity values for omitted scankey attribute(s).
*/
if (cmp == 0)
{
BTPageOpaque topaque;
ItemId itemid;
IndexTuple ritup;
int uppnkeyatts;
ItemPointer rheaptid;
bool nonpivot;
itemid = PageGetItemId(state->target, upperbound);
ritup = (IndexTuple) PageGetItem(state->target, itemid);
topaque = (BTPageOpaque) PageGetSpecialPointer(state->target);
nonpivot = P_ISLEAF(topaque) && upperbound >= P_FIRSTDATAKEY(topaque);
/* Get number of keys + heap TID for item to the right */
uppnkeyatts = BTreeTupleGetNKeyAtts(ritup, state->rel);
rheaptid = BTreeTupleGetHeapTIDCareful(state, ritup, nonpivot);
/* Heap TID is tiebreaker key attribute */
if (key->keysz == uppnkeyatts)
return key->scantid == NULL && rheaptid != NULL;
return key->keysz < uppnkeyatts;
}
return cmp < 0;
}
/*
* Does the invariant hold that the key is less than or equal to a given upper
* bound offset item?
@ -1869,48 +2033,97 @@ invariant_leq_offset(BtreeCheckState *state, BTScanInsert key,
{
int32 cmp;
Assert(key->pivotsearch);
cmp = _bt_compare(state->rel, key, state->target, upperbound);
return cmp <= 0;
}
/*
* Does the invariant hold that the key is greater than or equal to a given
* lower bound offset item?
* Does the invariant hold that the key is strictly greater than a given lower
* bound offset item?
*
* If this function returns false, convention is that caller throws error due
* to corruption.
*/
static inline bool
invariant_geq_offset(BtreeCheckState *state, BTScanInsert key,
OffsetNumber lowerbound)
invariant_g_offset(BtreeCheckState *state, BTScanInsert key,
OffsetNumber lowerbound)
{
int32 cmp;
Assert(key->pivotsearch);
cmp = _bt_compare(state->rel, key, state->target, lowerbound);
return cmp >= 0;
/* pg_upgrade'd indexes may legally have equal sibling tuples */
if (!key->heapkeyspace)
return cmp >= 0;
/*
* No need to consider the possibility that scankey has attributes that we
* need to force to be interpreted as negative infinity. _bt_compare() is
* able to determine that scankey is greater than negative infinity. The
* distinction between "==" and "<" isn't interesting here, since
* corruption is indicated either way.
*/
return cmp > 0;
}
/*
* Does the invariant hold that the key is less than or equal to a given upper
* Does the invariant hold that the key is strictly less than a given upper
* bound offset item, with the offset relating to a caller-supplied page that
* is not the current target page? Caller's non-target page is typically a
* child page of the target, checked as part of checking a property of the
* target page (i.e. the key comes from the target).
* is not the current target page?
*
* Caller's non-target page is a child page of the target, checked as part of
* checking a property of the target page (i.e. the key comes from the
* target).
*
* If this function returns false, convention is that caller throws error due
* to corruption.
*/
static inline bool
invariant_leq_nontarget_offset(BtreeCheckState *state, BTScanInsert key,
Page nontarget, OffsetNumber upperbound)
invariant_l_nontarget_offset(BtreeCheckState *state, BTScanInsert key,
Page nontarget, OffsetNumber upperbound)
{
int32 cmp;
Assert(key->pivotsearch);
cmp = _bt_compare(state->rel, key, nontarget, upperbound);
return cmp <= 0;
/* pg_upgrade'd indexes may legally have equal sibling tuples */
if (!key->heapkeyspace)
return cmp <= 0;
/* See invariant_l_offset() for an explanation of this extra step */
if (cmp == 0)
{
ItemId itemid;
IndexTuple child;
int uppnkeyatts;
ItemPointer childheaptid;
BTPageOpaque copaque;
bool nonpivot;
itemid = PageGetItemId(nontarget, upperbound);
child = (IndexTuple) PageGetItem(nontarget, itemid);
copaque = (BTPageOpaque) PageGetSpecialPointer(nontarget);
nonpivot = P_ISLEAF(copaque) && upperbound >= P_FIRSTDATAKEY(copaque);
/* Get number of keys + heap TID for child/non-target item */
uppnkeyatts = BTreeTupleGetNKeyAtts(child, state->rel);
childheaptid = BTreeTupleGetHeapTIDCareful(state, child, nonpivot);
/* Heap TID is tiebreaker key attribute */
if (key->keysz == uppnkeyatts)
return key->scantid == NULL && childheaptid != NULL;
return key->keysz < uppnkeyatts;
}
return cmp < 0;
}
/*
@ -2066,3 +2279,53 @@ palloc_btree_page(BtreeCheckState *state, BlockNumber blocknum)
return page;
}
/*
* _bt_mkscankey() wrapper that automatically prevents insertion scankey from
* being considered greater than the pivot tuple that its values originated
* from (or some other identical pivot tuple) in the common case where there
* are truncated/minus infinity attributes. Without this extra step, there
* are forms of corruption that amcheck could theoretically fail to report.
*
* For example, invariant_g_offset() might miss a cross-page invariant failure
* on an internal level if the scankey built from the first item on the
* target's right sibling page happened to be equal to (not greater than) the
* last item on target page. The !pivotsearch tiebreaker in _bt_compare()
* might otherwise cause amcheck to assume (rather than actually verify) that
* the scankey is greater.
*/
static inline BTScanInsert
bt_mkscankey_pivotsearch(Relation rel, IndexTuple itup)
{
BTScanInsert skey;
skey = _bt_mkscankey(rel, itup);
skey->pivotsearch = true;
return skey;
}
/*
* BTreeTupleGetHeapTID() wrapper that lets caller enforce that a heap TID must
* be present in cases where that is mandatory.
*
* This doesn't add much as of BTREE_VERSION 4, since the INDEX_ALT_TID_MASK
* bit is effectively a proxy for whether or not the tuple is a pivot tuple.
* It may become more useful in the future, when non-pivot tuples support their
* own alternative INDEX_ALT_TID_MASK representation.
*/
static inline ItemPointer
BTreeTupleGetHeapTIDCareful(BtreeCheckState *state, IndexTuple itup,
bool nonpivot)
{
ItemPointer result = BTreeTupleGetHeapTID(itup);
BlockNumber targetblock = state->targetblock;
if (result == NULL && nonpivot)
ereport(ERROR,
(errcode(ERRCODE_INDEX_CORRUPTED),
errmsg("block %u or its right sibling block or child block in index \"%s\" contains non-pivot tuple that lacks a heap TID",
targetblock, RelationGetRelationName(state->rel))));
return result;
}

View File

@ -561,7 +561,7 @@ bt_metap(PG_FUNCTION_ARGS)
* Get values of extended metadata if available, use default values
* otherwise.
*/
if (metad->btm_version == BTREE_VERSION)
if (metad->btm_version >= BTREE_NOVAC_VERSION)
{
values[j++] = psprintf("%u", metad->btm_oldest_btpo_xact);
values[j++] = psprintf("%f", metad->btm_last_cleanup_num_heap_tuples);

View File

@ -5,7 +5,7 @@ CREATE INDEX test1_a_idx ON test1 USING btree (a);
SELECT * FROM bt_metap('test1_a_idx');
-[ RECORD 1 ]-----------+-------
magic | 340322
version | 3
version | 4
root | 1
level | 0
fastroot | 1

View File

@ -48,7 +48,7 @@ select version, tree_level,
from pgstatindex('test_pkey');
version | tree_level | index_size | root_block_no | internal_pages | leaf_pages | empty_pages | deleted_pages | avg_leaf_density | leaf_fragmentation
---------+------------+------------+---------------+----------------+------------+-------------+---------------+------------------+--------------------
3 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | NaN | NaN
4 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | NaN | NaN
(1 row)
select version, tree_level,
@ -58,7 +58,7 @@ select version, tree_level,
from pgstatindex('test_pkey'::text);
version | tree_level | index_size | root_block_no | internal_pages | leaf_pages | empty_pages | deleted_pages | avg_leaf_density | leaf_fragmentation
---------+------------+------------+---------------+----------------+------------+-------------+---------------+------------------+--------------------
3 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | NaN | NaN
4 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | NaN | NaN
(1 row)
select version, tree_level,
@ -68,7 +68,7 @@ select version, tree_level,
from pgstatindex('test_pkey'::name);
version | tree_level | index_size | root_block_no | internal_pages | leaf_pages | empty_pages | deleted_pages | avg_leaf_density | leaf_fragmentation
---------+------------+------------+---------------+----------------+------------+-------------+---------------+------------------+--------------------
3 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | NaN | NaN
4 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | NaN | NaN
(1 row)
select version, tree_level,
@ -78,7 +78,7 @@ select version, tree_level,
from pgstatindex('test_pkey'::regclass);
version | tree_level | index_size | root_block_no | internal_pages | leaf_pages | empty_pages | deleted_pages | avg_leaf_density | leaf_fragmentation
---------+------------+------------+---------------+----------------+------------+-------------+---------------+------------------+--------------------
3 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | NaN | NaN
4 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | NaN | NaN
(1 row)
select pg_relpages('test');
@ -232,7 +232,7 @@ create index test_partition_hash_idx on test_partition using hash (a);
select pgstatindex('test_partition_idx');
pgstatindex
------------------------------
(3,0,8192,0,0,0,0,0,NaN,NaN)
(4,0,8192,0,0,0,0,0,NaN,NaN)
(1 row)
select pgstathashindex('test_partition_hash_idx');

View File

@ -504,8 +504,9 @@ CREATE INDEX test2_mm_idx ON test2 (major, minor);
<para>
By default, B-tree indexes store their entries in ascending order
with nulls last. This means that a forward scan of an index on
column <literal>x</literal> produces output satisfying <literal>ORDER BY x</literal>
with nulls last (table TID is treated as a tiebreaker column among
otherwise equal entries). This means that a forward scan of an
index on column <literal>x</literal> produces output satisfying <literal>ORDER BY x</literal>
(or more verbosely, <literal>ORDER BY x ASC NULLS LAST</literal>). The
index can also be scanned backward, producing output satisfying
<literal>ORDER BY x DESC</literal>
@ -1162,10 +1163,21 @@ CREATE INDEX tab_x_y ON tab(x, y);
the extra columns are trailing columns; making them be leading columns is
unwise for the reasons explained in <xref linkend="indexes-multicolumn"/>.
However, this method doesn't support the case where you want the index to
enforce uniqueness on the key column(s). Also, explicitly marking
non-searchable columns as <literal>INCLUDE</literal> columns makes the
index slightly smaller, because such columns need not be stored in upper
tree levels.
enforce uniqueness on the key column(s).
</para>
<para>
<firstterm>Suffix truncation</firstterm> always removes non-key
columns from upper B-Tree levels. As payload columns, they are
never used to guide index scans. The truncation process also
removes one or more trailing key column(s) when the remaining
prefix of key column(s) happens to be sufficient to describe tuples
on the lowest B-Tree level. In practice, covering indexes without
an <literal>INCLUDE</literal> clause often avoid storing columns
that are effectively payload in the upper levels. However,
explicitly defining payload columns as non-key columns
<emphasis>reliably</emphasis> keeps the tuples in upper levels
small.
</para>
<para>

View File

@ -536,7 +536,11 @@ index_truncate_tuple(TupleDesc sourceDescriptor, IndexTuple source,
bool isnull[INDEX_MAX_KEYS];
IndexTuple truncated;
Assert(leavenatts < sourceDescriptor->natts);
Assert(leavenatts <= sourceDescriptor->natts);
/* Easy case: no truncation actually required */
if (leavenatts == sourceDescriptor->natts)
return CopyIndexTuple(source);
/* Create temporary descriptor to scribble on */
truncdesc = palloc(TupleDescSize(sourceDescriptor));

View File

@ -28,37 +28,38 @@ right-link to find the new page containing the key range you're looking
for. This might need to be repeated, if the page has been split more than
once.
Lehman and Yao talk about alternating "separator" keys and downlinks in
internal pages rather than tuples or records. We use the term "pivot"
tuple to refer to tuples which don't point to heap tuples and are used
only for tree navigation. All tuples on non-leaf pages and high keys on
leaf pages are pivot tuples. Since pivot tuples are only used to represent
which part of the key space belongs on each page, they can have attribute
values copied from non-pivot tuples that were deleted and killed by VACUUM
some time ago. A pivot tuple may contain a "separator" key and downlink,
just a separator key (i.e. the downlink value is implicitly undefined), or
just a downlink (i.e. all attributes are truncated away).
The requirement that all btree keys be unique is satisfied by treating heap
TID as a tiebreaker attribute. Logical duplicates are sorted in heap TID
order. This is necessary because Lehman and Yao also require that the key
range for a subtree S is described by Ki < v <= Ki+1 where Ki and Ki+1 are
the adjacent keys in the parent page (Ki must be _strictly_ less than v,
which is assured by having reliably unique keys). Keys are always unique
on their level, with the exception of a leaf page's high key, which can be
fully equal to the last item on the page.
The Postgres implementation of suffix truncation must make sure that the
Lehman and Yao invariants hold, and represents that absent/truncated
attributes in pivot tuples have the sentinel value "minus infinity". The
later section on suffix truncation will be helpful if it's unclear how the
Lehman & Yao invariants work with a real world example.
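
A minimal standalone sketch of the resulting ordering rules (illustrative
only; ToyKey and toy_compare() are invented for this example and are not
nbtree code): key attributes are compared first, an attribute that has been
truncated away behaves as minus infinity, and the heap TID breaks any
remaining tie.

/*
 * Illustrative sketch only (not nbtree code) of the ordering rules above:
 * key attributes compare first, an attribute truncated away from a pivot
 * tuple behaves as "minus infinity", and the heap TID breaks any tie that
 * remains.
 */
#include <stdio.h>

#define NKEYATTS 3

typedef struct ToyKey
{
    int     natts;              /* key attributes physically present */
    int     attrs[NKEYATTS];    /* key attribute values */
    int     heap_tid;           /* tiebreaker; -1 means "truncated away" */
} ToyKey;

static int
toy_compare(const ToyKey *a, const ToyKey *b)
{
    for (int i = 0; i < NKEYATTS; i++)
    {
        int     have_a = (i < a->natts);
        int     have_b = (i < b->natts);

        if (!have_a && !have_b)
            break;              /* both truncated from here on */
        if (have_a != have_b)
            return have_a ? 1 : -1;     /* minus infinity sorts lowest */
        if (a->attrs[i] != b->attrs[i])
            return (a->attrs[i] < b->attrs[i]) ? -1 : 1;
    }

    /* Key attributes equal so far: heap TID breaks the tie if present */
    if (a->heap_tid >= 0 && b->heap_tid >= 0 && a->heap_tid != b->heap_tid)
        return (a->heap_tid < b->heap_tid) ? -1 : 1;
    return 0;
}

int
main(void)
{
    ToyKey  pivot = {1, {42, 0, 0}, -1};    /* truncated pivot: (42, -inf, -inf) */
    ToyKey  leaf = {3, {42, 7, 7}, 100};    /* ordinary leaf tuple */

    /* Prints -1: the truncated pivot sorts before the leaf tuple */
    printf("%d\n", toy_compare(&pivot, &leaf));
    return 0;
}
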
Differences to the Lehman & Yao algorithm
-----------------------------------------
We have made the following changes in order to incorporate the L&Y algorithm
into Postgres:
The requirement that all btree keys be unique is too onerous,
but the algorithm won't work correctly without it. Fortunately, it is
only necessary that keys be unique on a single tree level, because L&Y
only use the assumption of key uniqueness when re-finding a key in a
parent page (to determine where to insert the key for a split page).
Therefore, we can use the link field to disambiguate multiple
occurrences of the same user key: only one entry in the parent level
will be pointing at the page we had split. (Indeed we need not look at
the real "key" at all, just at the link field.) We can distinguish
items at the leaf level in the same way, by examining their links to
heap tuples; we'd never have two items for the same heap tuple.
Lehman and Yao assume that the key range for a subtree S is described
by Ki < v <= Ki+1 where Ki and Ki+1 are the adjacent keys in the parent
page. This does not work for nonunique keys (for example, if we have
enough equal keys to spread across several leaf pages, there *must* be
some equal bounding keys in the first level up). Therefore we assume
Ki <= v <= Ki+1 instead. A search that finds exact equality to a
bounding key in an upper tree level must descend to the left of that
key to ensure it finds any equal keys in the preceding page. An
insertion that sees the high key of its target page is equal to the key
to be inserted has a choice whether or not to move right, since the new
key could go on either page. (Currently, we try to find a page where
there is room for the new key without a split.)
Lehman and Yao don't require read locks, but assume that in-memory
copies of tree pages are unshared. Postgres shares in-memory buffers
among backends. As a result, we do page-level read locking on btree
@ -194,9 +195,7 @@ be prepared for the possibility that the item it wants is to the left of
the recorded position (but it can't have moved left out of the recorded
page). Since we hold a lock on the lower page (per L&Y) until we have
re-found the parent item that links to it, we can be assured that the
parent item does still exist and can't have been deleted. Also, because
we are matching downlink page numbers and not data keys, we don't have any
problem with possibly misidentifying the parent item.
parent item does still exist and can't have been deleted.
Page Deletion
-------------
@ -615,22 +614,40 @@ scankey is consulted as each index entry is sequentially scanned to decide
whether to return the entry and whether the scan can stop (see
_bt_checkkeys()).
We use the term "pivot" index tuples to distinguish tuples which don't point
to heap tuples, but rather are used for tree navigation. Pivot tuples include
all tuples on non-leaf pages and high keys on leaf pages. Note that pivot
index tuples are only used to represent which part of the key space belongs
on each page, and can have attribute values copied from non-pivot tuples
that were deleted and killed by VACUUM some time ago. In principle, we could
truncate away attributes that are not needed for a page high key during a leaf
page split, provided that the remaining attributes distinguish the last index
tuple on the post-split left page as belonging on the left page, and the first
index tuple on the post-split right page as belonging on the right page. This
optimization is sometimes called suffix truncation, and may appear in a future
release. Since the high key is subsequently reused as the downlink in the
parent page for the new right page, suffix truncation can increase index
fan-out considerably by keeping pivot tuples short. INCLUDE indexes similarly
truncate away non-key attributes at the time of a leaf page split,
increasing fan-out.
Notes about suffix truncation
-----------------------------
We truncate away suffix key attributes that are not needed for a page high
key during a leaf page split. The remaining attributes must distinguish
the last index tuple on the post-split left page as belonging on the left
page, and the first index tuple on the post-split right page as belonging
on the right page. Tuples logically retain truncated key attributes,
though they implicitly have "negative infinity" as their value, and have no
storage overhead. Since the high key is subsequently reused as the
downlink in the parent page for the new right page, suffix truncation makes
pivot tuples short. INCLUDE indexes are guaranteed to have non-key
attributes truncated at the time of a leaf page split, but may also have
some key attributes truncated away, based on the usual criteria for key
attributes. They are not a special case, since non-key attributes are
merely payload to B-Tree searches.
The goal of suffix truncation of key attributes is to improve index
fan-out. The technique was first described by Bayer and Unterauer (R.Bayer
and K.Unterauer, Prefix B-Trees, ACM Transactions on Database Systems, Vol
2, No. 1, March 1977, pp 11-26). The Postgres implementation is loosely
based on their paper. Note that Postgres only implements what the paper
refers to as simple prefix B-Trees. Note also that the paper assumes that
the tree has keys that consist of single strings that maintain the "prefix
property", much like strings that are stored in a suffix tree (comparisons
of earlier bytes must always be more significant than comparisons of later
bytes, and, in general, the strings must compare in a way that doesn't
break transitive consistency as they're split into pieces). Suffix
truncation in Postgres currently only works at the whole-attribute
granularity, but it would be straightforward to invent opclass
infrastructure that manufactures a smaller attribute value in the case of
variable-length types, such as text. An opclass support function could
manufacture the shortest possible key value that still correctly separates
each half of a leaf page split.
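
To make that idea concrete, a hypothetical opclass support function for text
might compute something along the lines of the following standalone sketch
(shortest_separator_len() is invented here for illustration; nothing like it
is added by this patch):

/*
 * Hypothetical sketch of what such an opclass support function might do
 * for text under a simple byte-wise collation: find the shortest prefix of
 * firstright that still sorts strictly after lastleft.  That prefix is a
 * valid separator, since any prefix of firstright sorts at or before
 * firstright itself.  (Assumes lastleft < firstright, which holds for
 * distinct keys on a sorted leaf page.)
 */
#include <stdio.h>
#include <string.h>

static size_t
shortest_separator_len(const char *lastleft, const char *firstright)
{
    size_t  len = 0;

    while (firstright[len] != '\0')
    {
        len++;
        if (strncmp(lastleft, firstright, len) < 0)
            return len;         /* prefix already separates the two halves */
    }
    return len;                 /* no shorter separator exists */
}

int
main(void)
{
    const char *lastleft = "Washington";
    const char *firstright = "Wisconsin";
    size_t      len = shortest_separator_len(lastleft, firstright);

    /* Prints: separator "Wi" (2 bytes) instead of "Wisconsin" */
    printf("separator \"%.*s\" (%zu bytes) instead of \"%s\"\n",
           (int) len, firstright, len, firstright);
    return 0;
}

In this example the parent level would only ever need to store "Wi" rather
than the whole "Wisconsin" key.
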
Notes About Data Representation
-------------------------------
@ -643,20 +660,26 @@ don't need to renumber any existing pages when splitting the root.)
The Postgres disk block data format (an array of items) doesn't fit
Lehman and Yao's alternating-keys-and-pointers notion of a disk page,
so we have to play some games.
so we have to play some games. (The alternating-keys-and-pointers
notion is important for internal page splits, which conceptually split
at the middle of an existing pivot tuple -- the tuple's "separator" key
goes on the left side of the split as the left side's new high key,
while the tuple's pointer/downlink goes on the right side as the
first/minus infinity downlink.)
On a page that is not rightmost in its tree level, the "high key" is
kept in the page's first item, and real data items start at item 2.
The link portion of the "high key" item goes unused. A page that is
rightmost has no "high key", so data items start with the first item.
Putting the high key at the left, rather than the right, may seem odd,
but it avoids moving the high key as we add data items.
rightmost has no "high key" (it's implicitly positive infinity), so
data items start with the first item. Putting the high key at the
left, rather than the right, may seem odd, but it avoids moving the
high key as we add data items.
On a leaf page, the data items are simply links to (TIDs of) tuples
in the relation being indexed, with the associated key values.
On a non-leaf page, the data items are down-links to child pages with
bounding keys. The key in each data item is the *lower* bound for
bounding keys. The key in each data item is a strict lower bound for
keys on that child page, so logically the key is to the left of that
downlink. The high key (if present) is the upper bound for the last
downlink. The first data item on each such page has no lower bound
@ -664,4 +687,5 @@ downlink. The first data item on each such page has no lower bound
routines must treat it accordingly. The actual key stored in the
item is irrelevant, and need not be stored at all. This arrangement
corresponds to the fact that an L&Y non-leaf page has one more pointer
than key.
than key. Suffix truncation's negative infinity attributes behave in
the same way.
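
A tiny standalone sketch of the item-layout convention just described
(illustrative only; the helpers below are invented, though they mirror the
P_FIRSTDATAKEY() convention used in the source): data items start at offset
1 on a rightmost page, which has no high key, and at offset 2 everywhere
else, and on internal pages the item at the first data offset is the
keyless negative-infinity downlink.

/*
 * Illustrative sketch only, mirroring the P_FIRSTDATAKEY() convention
 * described above: a rightmost page has no high key, so its data items
 * start at offset 1; every other page stores its high key at offset 1 and
 * data from offset 2.  On internal pages the item at the first data offset
 * is the "negative infinity" downlink, whose key is never examined.
 */
#include <stdbool.h>
#include <stdio.h>

#define TOY_HIKEY       1
#define TOY_FIRSTKEY    2

static int
first_data_offset(bool rightmost)
{
    return rightmost ? TOY_HIKEY : TOY_FIRSTKEY;
}

static bool
is_negative_infinity(bool leaf, bool rightmost, int offset)
{
    /* Only internal pages have a keyless negative infinity first item */
    return !leaf && offset == first_data_offset(rightmost);
}

int
main(void)
{
    /* Non-rightmost internal page: high key at 1, neg-inf downlink at 2 */
    printf("data starts at offset %d; offset 2 is negative infinity: %s\n",
           first_data_offset(false),
           is_negative_infinity(false, false, 2) ? "yes" : "no");
    return 0;
}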

View File

@ -61,14 +61,16 @@ static OffsetNumber _bt_findinsertloc(Relation rel,
BTStack stack,
Relation heapRel);
static void _bt_stepright(Relation rel, BTInsertState insertstate, BTStack stack);
static void _bt_insertonpg(Relation rel, Buffer buf, Buffer cbuf,
static void _bt_insertonpg(Relation rel, BTScanInsert itup_key,
Buffer buf,
Buffer cbuf,
BTStack stack,
IndexTuple itup,
OffsetNumber newitemoff,
bool split_only_page);
static Buffer _bt_split(Relation rel, Buffer buf, Buffer cbuf,
OffsetNumber firstright, OffsetNumber newitemoff, Size newitemsz,
IndexTuple newitem, bool newitemonleft);
static Buffer _bt_split(Relation rel, BTScanInsert itup_key, Buffer buf,
Buffer cbuf, OffsetNumber firstright, OffsetNumber newitemoff,
Size newitemsz, IndexTuple newitem, bool newitemonleft);
static void _bt_insert_parent(Relation rel, Buffer buf, Buffer rbuf,
BTStack stack, bool is_root, bool is_only);
static OffsetNumber _bt_findsplitloc(Relation rel, Page page,
@ -116,6 +118,9 @@ _bt_doinsert(Relation rel, IndexTuple itup,
/* we need an insertion scan key to do our search, so build one */
itup_key = _bt_mkscankey(rel, itup);
/* No scantid until uniqueness established in checkingunique case */
if (checkingunique && itup_key->heapkeyspace)
itup_key->scantid = NULL;
/*
* Fill in the BTInsertState working area, to track the current page and
@ -231,12 +236,13 @@ top:
* NOTE: obviously, _bt_check_unique can only detect keys that are already
* in the index; so it cannot defend against concurrent insertions of the
* same key. We protect against that by means of holding a write lock on
* the target page. Any other would-be inserter of the same key must
* acquire a write lock on the same target page, so only one would-be
* inserter can be making the check at one time. Furthermore, once we are
* past the check we hold write locks continuously until we have performed
* our insertion, so no later inserter can fail to see our insertion.
* (This requires some care in _bt_findinsertloc.)
* the first page the value could be on, regardless of the value of its
* implicit heap TID tiebreaker attribute. Any other would-be inserter of
* the same key must acquire a write lock on the same page, so only one
* would-be inserter can be making the check at one time. Furthermore,
* once we are past the check we hold write locks continuously until we
* have performed our insertion, so no later inserter can fail to see our
* insertion. (This requires some care in _bt_findinsertloc.)
*
* If we must wait for another xact, we release the lock while waiting,
* and then must start over completely.
@ -274,6 +280,10 @@ top:
_bt_freestack(stack);
goto top;
}
/* Uniqueness is established -- restore heap tid as scantid */
if (itup_key->heapkeyspace)
itup_key->scantid = &itup->t_tid;
}
if (checkUnique != UNIQUE_CHECK_EXISTING)
@ -282,12 +292,11 @@ top:
/*
* The only conflict predicate locking cares about for indexes is when
* an index tuple insert conflicts with an existing lock. Since the
* actual location of the insert is hard to predict because of the
* random search used to prevent O(N^2) performance when there are
* many duplicate entries, we can just use the "first valid" page.
* This reasoning also applies to INCLUDE indexes, whose extra
* attributes are not considered part of the key space.
* an index tuple insert conflicts with an existing lock. We don't
* know the actual page we're going to insert to yet because scantid
* was not filled in initially, but it's okay to use the "first valid"
* page instead. This reasoning also applies to INCLUDE indexes,
* whose extra attributes are not considered part of the key space.
*/
CheckForSerializableConflictIn(rel, NULL, insertstate.buf);
@ -298,8 +307,8 @@ top:
*/
newitemoff = _bt_findinsertloc(rel, &insertstate, checkingunique,
stack, heapRel);
_bt_insertonpg(rel, insertstate.buf, InvalidBuffer, stack, itup,
newitemoff, false);
_bt_insertonpg(rel, itup_key, insertstate.buf, InvalidBuffer, stack,
itup, newitemoff, false);
}
else
{
@ -371,6 +380,7 @@ _bt_check_unique(Relation rel, BTInsertState insertstate, Relation heapRel,
* Scan over all equal tuples, looking for live conflicts.
*/
Assert(!insertstate->bounds_valid || insertstate->low == offset);
Assert(itup_key->scantid == NULL);
for (;;)
{
ItemId curitemid;
@ -642,18 +652,21 @@ _bt_check_unique(Relation rel, BTInsertState insertstate, Relation heapRel,
/*
* _bt_findinsertloc() -- Finds an insert location for a tuple
*
* On entry, insertstate buffer contains the first legal page the new
* tuple could be inserted to. It is exclusive-locked and pinned by the
* caller.
* On entry, insertstate buffer contains the page the new tuple belongs
* on. It is exclusive-locked and pinned by the caller.
*
* If the new key is equal to one or more existing keys, we can
* legitimately place it anywhere in the series of equal keys --- in fact,
* if the new key is equal to the page's "high key" we can place it on
* the next page. If it is equal to the high key, and there's not room
* to insert the new tuple on the current page without splitting, then
* we can move right hoping to find more free space and avoid a split.
* Furthermore, if there's not enough room on a page, we try to make
* room by removing any LP_DEAD tuples.
* If 'checkingunique' is true, the buffer on entry is the first page
* that contains duplicates of the new key. If there are duplicates on
* multiple pages, the correct insertion position might be some page to
* the right, rather than the first page. In that case, this function
* moves right to the correct target page.
*
* (In a !heapkeyspace index, there can be multiple pages with the same
* high key, on which the new tuple could legitimately be placed. In
* that case, the caller passes the first page containing duplicates,
* just like when checkingunique=true. If that page doesn't have enough
* room for the new tuple, this function moves right, trying to find a
* legal page that does.)
*
* On exit, insertstate buffer contains the chosen insertion page, and
* the offset within that page is returned. If _bt_findinsertloc needed
@ -663,6 +676,9 @@ _bt_check_unique(Relation rel, BTInsertState insertstate, Relation heapRel,
* If insertstate contains cached binary search bounds, we will take
* advantage of them. This avoids repeating comparisons that we made in
* _bt_check_unique() already.
*
* If there is not enough room on the page for the new tuple, we try to
* make room by removing any LP_DEAD tuples.
*/
static OffsetNumber
_bt_findinsertloc(Relation rel,
@ -677,87 +693,144 @@ _bt_findinsertloc(Relation rel,
lpageop = (BTPageOpaque) PageGetSpecialPointer(page);
/*
* Check whether the item can fit on a btree page at all. (Eventually, we
* ought to try to apply TOAST methods if not.) We actually need to be
* able to fit three items on every page, so restrict any one item to 1/3
* the per-page available space. Note that at this point, itemsz doesn't
* include the ItemId.
*
* NOTE: if you change this, see also the similar code in _bt_buildadd().
*/
if (insertstate->itemsz > BTMaxItemSize(page))
ereport(ERROR,
(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
errmsg("index row size %zu exceeds maximum %zu for index \"%s\"",
insertstate->itemsz, BTMaxItemSize(page),
RelationGetRelationName(rel)),
errhint("Values larger than 1/3 of a buffer page cannot be indexed.\n"
"Consider a function index of an MD5 hash of the value, "
"or use full text indexing."),
errtableconstraint(heapRel,
RelationGetRelationName(rel))));
/* Check 1/3 of a page restriction */
if (unlikely(insertstate->itemsz > BTMaxItemSize(page)))
_bt_check_third_page(rel, heapRel, itup_key->heapkeyspace, page,
insertstate->itup);
/*----------
* If we will need to split the page to put the item on this page,
* check whether we can put the tuple somewhere to the right,
* instead. Keep scanning right until we
* (a) find a page with enough free space,
* (b) reach the last page where the tuple can legally go, or
* (c) get tired of searching.
* (c) is not flippant; it is important because if there are many
* pages' worth of equal keys, it's better to split one of the early
* pages than to scan all the way to the end of the run of equal keys
* on every insert. We implement "get tired" as a random choice,
* since stopping after scanning a fixed number of pages wouldn't work
* well (we'd never reach the right-hand side of previously split
* pages). Currently the probability of moving right is set at 0.99,
* which may seem too high to change the behavior much, but it does an
* excellent job of preventing O(N^2) behavior with many equal keys.
*----------
*/
Assert(P_ISLEAF(lpageop) && !P_INCOMPLETE_SPLIT(lpageop));
Assert(!insertstate->bounds_valid || checkingunique);
Assert(!itup_key->heapkeyspace || itup_key->scantid != NULL);
Assert(itup_key->heapkeyspace || itup_key->scantid == NULL);
while (PageGetFreeSpace(page) < insertstate->itemsz)
if (itup_key->heapkeyspace)
{
/*
* before considering moving right, see if we can obtain enough space
* by erasing LP_DEAD items
* If we're inserting into a unique index, we may have to walk right
* through leaf pages to find the one leaf page that we must insert on
* to.
*
* This is needed for checkingunique callers because a scantid was not
* used when we called _bt_search(). scantid can only be set after
* _bt_check_unique() has checked for duplicates. The buffer
* initially stored in insertstate->buf has the page where the first
* duplicate key might be found, which isn't always the page that new
* tuple belongs on. The heap TID attribute for new tuple (scantid)
* could force us to insert on a sibling page, though that should be
* very rare in practice.
*/
if (P_HAS_GARBAGE(lpageop))
if (checkingunique)
{
_bt_vacuum_one_page(rel, insertstate->buf, heapRel);
insertstate->bounds_valid = false;
for (;;)
{
/*
* Does the new tuple belong on this page?
*
* The earlier _bt_check_unique() call may well have
* established a strict upper bound on the offset for the new
* item. If it's not the last item of the page (i.e. if there
* is at least one tuple on the page that goes after the tuple
* we're inserting) then we know that the tuple belongs on
* this page. We can skip the high key check.
*/
if (insertstate->bounds_valid &&
insertstate->low <= insertstate->stricthigh &&
insertstate->stricthigh <= PageGetMaxOffsetNumber(page))
break;
if (PageGetFreeSpace(page) >= insertstate->itemsz)
break; /* OK, now we have enough space */
/* Test '<=', not '!=', since scantid is set now */
if (P_RIGHTMOST(lpageop) ||
_bt_compare(rel, itup_key, page, P_HIKEY) <= 0)
break;
_bt_stepright(rel, insertstate, stack);
/* Update local state after stepping right */
page = BufferGetPage(insertstate->buf);
lpageop = (BTPageOpaque) PageGetSpecialPointer(page);
}
}
/*
* Nope, so check conditions (b) and (c) enumerated above
*
* The earlier _bt_check_unique() call may well have established a
* strict upper bound on the offset for the new item. If it's not the
* last item of the page (i.e. if there is at least one tuple on the
* page that's greater than the tuple we're inserting) then we know
* that the tuple belongs on this page. We can skip the high key
* check.
* If the target page is full, see if we can obtain enough space by
* erasing LP_DEAD items
*/
if (insertstate->bounds_valid &&
insertstate->low <= insertstate->stricthigh &&
insertstate->stricthigh <= PageGetMaxOffsetNumber(page))
break;
if (PageGetFreeSpace(page) < insertstate->itemsz &&
P_HAS_GARBAGE(lpageop))
{
_bt_vacuum_one_page(rel, insertstate->buf, heapRel);
insertstate->bounds_valid = false;
}
}
else
{
/*----------
* This is a !heapkeyspace (version 2 or 3) index. The current page
* is the first page that we could insert the new tuple to, but there
* may be other pages to the right that we could opt to use instead.
*
* If the new key is equal to one or more existing keys, we can
* legitimately place it anywhere in the series of equal keys. In
* fact, if the new key is equal to the page's "high key" we can place
* it on the next page. If it is equal to the high key, and there's
* not room to insert the new tuple on the current page without
* splitting, then we move right hoping to find more free space and
* avoid a split.
*
* Keep scanning right until we
* (a) find a page with enough free space,
* (b) reach the last page where the tuple can legally go, or
* (c) get tired of searching.
* (c) is not flippant; it is important because if there are many
* pages' worth of equal keys, it's better to split one of the early
* pages than to scan all the way to the end of the run of equal keys
* on every insert. We implement "get tired" as a random choice,
* since stopping after scanning a fixed number of pages wouldn't work
* well (we'd never reach the right-hand side of previously split
* pages). The probability of moving right is set at 0.99, which may
* seem too high to change the behavior much, but it does an excellent
* job of preventing O(N^2) behavior with many equal keys.
*----------
*/
while (PageGetFreeSpace(page) < insertstate->itemsz)
{
/*
* Before considering moving right, see if we can obtain enough
* space by erasing LP_DEAD items
*/
if (P_HAS_GARBAGE(lpageop))
{
_bt_vacuum_one_page(rel, insertstate->buf, heapRel);
insertstate->bounds_valid = false;
if (P_RIGHTMOST(lpageop) ||
_bt_compare(rel, itup_key, page, P_HIKEY) != 0 ||
random() <= (MAX_RANDOM_VALUE / 100))
break;
if (PageGetFreeSpace(page) >= insertstate->itemsz)
break; /* OK, now we have enough space */
}
_bt_stepright(rel, insertstate, stack);
/* Update local state after stepping right */
page = BufferGetPage(insertstate->buf);
lpageop = (BTPageOpaque) PageGetSpecialPointer(page);
/*
* Nope, so check conditions (b) and (c) enumerated above
*
* The earlier _bt_check_unique() call may well have established a
* strict upper bound on the offset for the new item. If it's not
* the last item of the page (i.e. if there is at least one tuple
* on the page that's greater than the tuple we're inserting)
* then we know that the tuple belongs on this page. We can skip
* the high key check.
*/
if (insertstate->bounds_valid &&
insertstate->low <= insertstate->stricthigh &&
insertstate->stricthigh <= PageGetMaxOffsetNumber(page))
break;
if (P_RIGHTMOST(lpageop) ||
_bt_compare(rel, itup_key, page, P_HIKEY) != 0 ||
random() <= (MAX_RANDOM_VALUE / 100))
break;
_bt_stepright(rel, insertstate, stack);
/* Update local state after stepping right */
page = BufferGetPage(insertstate->buf);
lpageop = (BTPageOpaque) PageGetSpecialPointer(page);
}
}
/*
@ -778,6 +851,9 @@ _bt_findinsertloc(Relation rel,
* else someone else's _bt_check_unique scan could fail to see our insertion.
* Write locks on intermediate dead pages won't do because we don't know when
* they will get de-linked from the tree.
*
* This is more aggressive than it needs to be for non-unique !heapkeyspace
* indexes.
*/
static void
_bt_stepright(Relation rel, BTInsertState insertstate, BTStack stack)
@ -830,8 +906,9 @@ _bt_stepright(Relation rel, BTInsertState insertstate, BTStack stack)
*
* This recursive procedure does the following things:
*
* + if necessary, splits the target page (making sure that the
* split is equitable as far as post-insert free space goes).
* + if necessary, splits the target page, using 'itup_key' for
* suffix truncation on leaf pages (caller passes NULL for
* non-leaf pages).
* + inserts the tuple.
* + if the page was split, pops the parent stack, and finds the
* right place to insert the new child pointer (by walking
@ -857,6 +934,7 @@ _bt_stepright(Relation rel, BTInsertState insertstate, BTStack stack)
*/
static void
_bt_insertonpg(Relation rel,
BTScanInsert itup_key,
Buffer buf,
Buffer cbuf,
BTStack stack,
@ -879,7 +957,7 @@ _bt_insertonpg(Relation rel,
BTreeTupleGetNAtts(itup, rel) ==
IndexRelationGetNumberOfAttributes(rel));
Assert(P_ISLEAF(lpageop) ||
BTreeTupleGetNAtts(itup, rel) ==
BTreeTupleGetNAtts(itup, rel) <=
IndexRelationGetNumberOfKeyAttributes(rel));
/* The caller should've finished any incomplete splits already. */
@ -929,8 +1007,8 @@ _bt_insertonpg(Relation rel,
&newitemonleft);
/* split the buffer into left and right halves */
rbuf = _bt_split(rel, buf, cbuf, firstright,
newitemoff, itemsz, itup, newitemonleft);
rbuf = _bt_split(rel, itup_key, buf, cbuf, firstright, newitemoff,
itemsz, itup, newitemonleft);
PredicateLockPageSplit(rel,
BufferGetBlockNumber(buf),
BufferGetBlockNumber(rbuf));
@ -1014,7 +1092,7 @@ _bt_insertonpg(Relation rel,
if (BufferIsValid(metabuf))
{
/* upgrade meta-page if needed */
if (metad->btm_version < BTREE_VERSION)
if (metad->btm_version < BTREE_NOVAC_VERSION)
_bt_upgrademetapage(metapg);
metad->btm_fastroot = itup_blkno;
metad->btm_fastlevel = lpageop->btpo.level;
@ -1069,6 +1147,8 @@ _bt_insertonpg(Relation rel,
if (BufferIsValid(metabuf))
{
Assert(metad->btm_version >= BTREE_NOVAC_VERSION);
xlmeta.version = metad->btm_version;
xlmeta.root = metad->btm_root;
xlmeta.level = metad->btm_level;
xlmeta.fastroot = metad->btm_fastroot;
@ -1136,17 +1216,19 @@ _bt_insertonpg(Relation rel,
* new right page. newitemoff etc. tell us about the new item that
* must be inserted along with the data from the old page.
*
* When splitting a non-leaf page, 'cbuf' is the left-sibling of the
* page we're inserting the downlink for. This function will clear the
* INCOMPLETE_SPLIT flag on it, and release the buffer.
* itup_key is used for suffix truncation on leaf pages (internal
* page callers pass NULL). When splitting a non-leaf page, 'cbuf'
* is the left-sibling of the page we're inserting the downlink for.
* This function will clear the INCOMPLETE_SPLIT flag on it, and
* release the buffer.
*
* Returns the new right sibling of buf, pinned and write-locked.
* The pin and lock on buf are maintained.
*/
static Buffer
_bt_split(Relation rel, Buffer buf, Buffer cbuf, OffsetNumber firstright,
OffsetNumber newitemoff, Size newitemsz, IndexTuple newitem,
bool newitemonleft)
_bt_split(Relation rel, BTScanInsert itup_key, Buffer buf, Buffer cbuf,
OffsetNumber firstright, OffsetNumber newitemoff, Size newitemsz,
IndexTuple newitem, bool newitemonleft)
{
Buffer rbuf;
Page origpage;
@ -1240,7 +1322,8 @@ _bt_split(Relation rel, Buffer buf, Buffer cbuf, OffsetNumber firstright,
itemid = PageGetItemId(origpage, P_HIKEY);
itemsz = ItemIdGetLength(itemid);
item = (IndexTuple) PageGetItem(origpage, itemid);
Assert(BTreeTupleGetNAtts(item, rel) == indnkeyatts);
Assert(BTreeTupleGetNAtts(item, rel) > 0);
Assert(BTreeTupleGetNAtts(item, rel) <= indnkeyatts);
if (PageAddItem(rightpage, (Item) item, itemsz, rightoff,
false, false) == InvalidOffsetNumber)
{
@ -1254,8 +1337,29 @@ _bt_split(Relation rel, Buffer buf, Buffer cbuf, OffsetNumber firstright,
/*
* The "high key" for the new left page will be the first key that's going
* to go into the new right page. This might be either the existing data
* item at position firstright, or the incoming tuple.
* to go into the new right page, or possibly a truncated version if this
* is a leaf page split. This might be either the existing data item at
* position firstright, or the incoming tuple.
*
* The high key for the left page is formed using the first item on the
* right page, which may seem to be contrary to Lehman & Yao's approach of
* using the left page's last item as its new high key when splitting on
* the leaf level. It isn't, though: suffix truncation will leave the
* left page's high key fully equal to the last item on the left page when
* two tuples with equal key values (excluding heap TID) enclose the split
* point. It isn't actually necessary for a new leaf high key to be equal
* to the last item on the left for the L&Y "subtree" invariant to hold.
* It's sufficient to make sure that the new leaf high key is strictly
* less than the first item on the right leaf page, and greater than or
* equal to (not necessarily equal to) the last item on the left leaf
* page.
*
* In other words, when suffix truncation isn't possible, L&Y's exact
* approach to leaf splits is taken. (Actually, even that is slightly
* inaccurate. A tuple with all the keys from firstright but the heap TID
* from lastleft will be used as the new high key, since the last left
* tuple could be physically larger despite being opclass-equal in respect
* of all attributes prior to the heap TID attribute.)
*/
leftoff = P_HIKEY;
if (!newitemonleft && newitemoff == firstright)
@ -1273,25 +1377,48 @@ _bt_split(Relation rel, Buffer buf, Buffer cbuf, OffsetNumber firstright,
}
/*
* Truncate non-key (INCLUDE) attributes of the high key item before
* inserting it on the left page. This only needs to happen at the leaf
* Truncate unneeded key and non-key attributes of the high key item
* before inserting it on the left page. This can only happen at the leaf
* level, since in general all pivot tuple values originate from leaf
* level high keys. This isn't just about avoiding unnecessary work,
* though; truncating unneeded key attributes (more aggressive suffix
* truncation) can only be performed at the leaf level anyway. This is
* because a pivot tuple in a grandparent page must guide a search not
* only to the correct parent page, but also to the correct leaf page.
* level high keys. A pivot tuple in a grandparent page must guide a
* search not only to the correct parent page, but also to the correct
* leaf page.
*/
if (indnatts != indnkeyatts && isleaf)
if (isleaf && (itup_key->heapkeyspace || indnatts != indnkeyatts))
{
lefthikey = _bt_nonkey_truncate(rel, item);
IndexTuple lastleft;
/*
* Determine which tuple will become the last on the left page. This
* is needed to decide how many attributes from the first item on the
* right page must remain in new high key for left page.
*/
if (newitemonleft && newitemoff == firstright)
{
/* incoming tuple will become last on left page */
lastleft = newitem;
}
else
{
OffsetNumber lastleftoff;
/* item just before firstright will become last on left page */
lastleftoff = OffsetNumberPrev(firstright);
Assert(lastleftoff >= P_FIRSTDATAKEY(oopaque));
itemid = PageGetItemId(origpage, lastleftoff);
lastleft = (IndexTuple) PageGetItem(origpage, itemid);
}
Assert(lastleft != item);
lefthikey = _bt_truncate(rel, lastleft, item, itup_key);
itemsz = IndexTupleSize(lefthikey);
itemsz = MAXALIGN(itemsz);
}
else
lefthikey = item;
Assert(BTreeTupleGetNAtts(lefthikey, rel) == indnkeyatts);
Assert(BTreeTupleGetNAtts(lefthikey, rel) > 0);
Assert(BTreeTupleGetNAtts(lefthikey, rel) <= indnkeyatts);
if (PageAddItem(leftpage, (Item) lefthikey, itemsz, leftoff,
false, false) == InvalidOffsetNumber)
{
@ -1484,7 +1611,6 @@ _bt_split(Relation rel, Buffer buf, Buffer cbuf, OffsetNumber firstright,
xl_btree_split xlrec;
uint8 xlinfo;
XLogRecPtr recptr;
bool loglhikey = false;
xlrec.level = ropaque->btpo.level;
xlrec.firstright = firstright;
@ -1513,22 +1639,10 @@ _bt_split(Relation rel, Buffer buf, Buffer cbuf, OffsetNumber firstright,
if (newitemonleft)
XLogRegisterBufData(0, (char *) newitem, MAXALIGN(newitemsz));
/* Log left page */
if (!isleaf || indnatts != indnkeyatts)
{
/*
* We must also log the left page's high key. There are two
* reasons for that: right page's leftmost key is suppressed on
* non-leaf levels and in covering indexes included columns are
* truncated from high keys. Show it as belonging to the left
* page buffer, so that it is not stored if XLogInsert decides it
* needs a full-page image of the left page.
*/
itemid = PageGetItemId(origpage, P_HIKEY);
item = (IndexTuple) PageGetItem(origpage, itemid);
XLogRegisterBufData(0, (char *) item, MAXALIGN(IndexTupleSize(item)));
loglhikey = true;
}
/* Log the left page's new high key */
itemid = PageGetItemId(origpage, P_HIKEY);
item = (IndexTuple) PageGetItem(origpage, itemid);
XLogRegisterBufData(0, (char *) item, MAXALIGN(IndexTupleSize(item)));
/*
* Log the contents of the right page in the format understood by
@ -1544,9 +1658,7 @@ _bt_split(Relation rel, Buffer buf, Buffer cbuf, OffsetNumber firstright,
(char *) rightpage + ((PageHeader) rightpage)->pd_upper,
((PageHeader) rightpage)->pd_special - ((PageHeader) rightpage)->pd_upper);
xlinfo = newitemonleft ?
(loglhikey ? XLOG_BTREE_SPLIT_L_HIGHKEY : XLOG_BTREE_SPLIT_L) :
(loglhikey ? XLOG_BTREE_SPLIT_R_HIGHKEY : XLOG_BTREE_SPLIT_R);
xlinfo = newitemonleft ? XLOG_BTREE_SPLIT_L : XLOG_BTREE_SPLIT_R;
recptr = XLogInsert(RM_BTREE_ID, xlinfo);
PageSetLSN(origpage, recptr);
@ -1909,7 +2021,7 @@ _bt_insert_parent(Relation rel,
_bt_relbuf(rel, pbuf);
}
/* get high key from left page == lower bound for new right page */
/* get high key from left, a strict lower bound for new right page */
ritem = (IndexTuple) PageGetItem(page,
PageGetItemId(page, P_HIKEY));
@ -1939,7 +2051,7 @@ _bt_insert_parent(Relation rel,
RelationGetRelationName(rel), bknum, rbknum);
/* Recursively update the parent */
_bt_insertonpg(rel, pbuf, buf, stack->bts_parent,
_bt_insertonpg(rel, NULL, pbuf, buf, stack->bts_parent,
new_item, stack->bts_offset + 1,
is_only);
@ -2200,7 +2312,7 @@ _bt_newroot(Relation rel, Buffer lbuf, Buffer rbuf)
START_CRIT_SECTION();
/* upgrade metapage if needed */
if (metad->btm_version < BTREE_VERSION)
if (metad->btm_version < BTREE_NOVAC_VERSION)
_bt_upgrademetapage(metapg);
/* set btree special data */
@ -2235,7 +2347,8 @@ _bt_newroot(Relation rel, Buffer lbuf, Buffer rbuf)
/*
* insert the right page pointer into the new root page.
*/
Assert(BTreeTupleGetNAtts(right_item, rel) ==
Assert(BTreeTupleGetNAtts(right_item, rel) > 0);
Assert(BTreeTupleGetNAtts(right_item, rel) <=
IndexRelationGetNumberOfKeyAttributes(rel));
if (PageAddItem(rootpage, (Item) right_item, right_item_sz, P_FIRSTKEY,
false, false) == InvalidOffsetNumber)
@ -2268,6 +2381,8 @@ _bt_newroot(Relation rel, Buffer lbuf, Buffer rbuf)
XLogRegisterBuffer(1, lbuf, REGBUF_STANDARD);
XLogRegisterBuffer(2, metabuf, REGBUF_WILL_INIT | REGBUF_STANDARD);
Assert(metad->btm_version >= BTREE_NOVAC_VERSION);
md.version = metad->btm_version;
md.root = rootblknum;
md.level = metad->btm_level;
md.fastroot = rootblknum;
@ -2332,6 +2447,7 @@ _bt_pgaddtup(Page page,
{
trunctuple = *itup;
trunctuple.t_info = sizeof(IndexTupleData);
/* Deliberately zero INDEX_ALT_TID_MASK bits */
BTreeTupleSetNAtts(&trunctuple, 0);
itup = &trunctuple;
itemsize = sizeof(IndexTupleData);
@ -2347,8 +2463,8 @@ _bt_pgaddtup(Page page,
/*
* _bt_isequal - used in _bt_doinsert in check for duplicates.
*
* This is very similar to _bt_compare, except for NULL handling.
* Rule is simple: NOT_NULL not equal NULL, NULL not equal NULL too.
* This is very similar to _bt_compare, except for NULL and negative infinity
* handling. The rule is simple: NOT_NULL is never equal to NULL, and NULL is
* never equal to NULL either.
*/
static bool
_bt_isequal(TupleDesc itupdesc, BTScanInsert itup_key, Page page,
@ -2361,6 +2477,7 @@ _bt_isequal(TupleDesc itupdesc, BTScanInsert itup_key, Page page,
/* Better be comparing to a non-pivot item */
Assert(P_ISLEAF((BTPageOpaque) PageGetSpecialPointer(page)));
Assert(offnum >= P_FIRSTDATAKEY((BTPageOpaque) PageGetSpecialPointer(page)));
Assert(itup_key->scantid == NULL);
scankey = itup_key->scankeys;
itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
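To make the tiebreaker rule used throughout this insertion code concrete, here is a minimal standalone sketch of the ordering that version 4 indexes impose: key attributes compare first, and heap TID (block number, then offset) breaks any remaining tie, so any two physical tuples are strictly ordered. The ToyTid and ToyTuple types are invented stand-ins for ItemPointerData and IndexTuple, not the real layouts.

#include <stdio.h>
#include <stdint.h>

/* Toy stand-ins for ItemPointerData and IndexTuple (not the real layouts) */
typedef struct ToyTid { uint32_t block; uint16_t offset; } ToyTid;
typedef struct ToyTuple { int key; ToyTid tid; } ToyTuple;

static int toy_tid_cmp(ToyTid a, ToyTid b)
{
    if (a.block != b.block)
        return a.block < b.block ? -1 : 1;
    if (a.offset != b.offset)
        return a.offset < b.offset ? -1 : 1;
    return 0;
}

/* Heapkeyspace ordering: key attributes first, heap TID as the tiebreaker */
static int toy_cmp(ToyTuple a, ToyTuple b)
{
    if (a.key != b.key)
        return a.key < b.key ? -1 : 1;
    return toy_tid_cmp(a.tid, b.tid);
}

int main(void)
{
    ToyTuple lastleft = {42, {10, 3}};
    ToyTuple firstright = {42, {17, 1}};

    /* Equal user keys are still strictly ordered by heap TID: prints -1 */
    printf("%d\n", toy_cmp(lastleft, firstright));
    return 0;
}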

View File

@ -33,7 +33,8 @@
#include "storage/predicate.h"
#include "utils/snapmgr.h"
static void _bt_cachemetadata(Relation rel, BTMetaPageData *metad);
static void _bt_cachemetadata(Relation rel, BTMetaPageData *input);
static BTMetaPageData *_bt_getmeta(Relation rel, Buffer metabuf);
static bool _bt_mark_page_halfdead(Relation rel, Buffer buf, BTStack stack);
static bool _bt_unlink_halfdead_page(Relation rel, Buffer leafbuf,
bool *rightsib_empty);
@ -77,7 +78,9 @@ _bt_initmetapage(Page page, BlockNumber rootbknum, uint32 level)
}
/*
* _bt_upgrademetapage() -- Upgrade a meta-page from an old format to the new.
* _bt_upgrademetapage() -- Upgrade a meta-page from an old format to version
* 3, the last version that can be updated without broadly affecting
* on-disk compatibility. (A REINDEX is required to upgrade to v4.)
*
* This routine does purely in-memory image upgrade. Caller is
* responsible for locking, WAL-logging etc.
@ -93,11 +96,11 @@ _bt_upgrademetapage(Page page)
/* It must be really a meta page of upgradable version */
Assert(metaopaque->btpo_flags & BTP_META);
Assert(metad->btm_version < BTREE_VERSION);
Assert(metad->btm_version < BTREE_NOVAC_VERSION);
Assert(metad->btm_version >= BTREE_MIN_VERSION);
/* Set version number and fill extra fields added into version 3 */
metad->btm_version = BTREE_VERSION;
metad->btm_version = BTREE_NOVAC_VERSION;
metad->btm_oldest_btpo_xact = InvalidTransactionId;
metad->btm_last_cleanup_num_heap_tuples = -1.0;
@ -107,43 +110,79 @@ _bt_upgrademetapage(Page page)
}
/*
* Cache metadata from meta page to rel->rd_amcache.
* Cache metadata from input meta page to rel->rd_amcache.
*/
static void
_bt_cachemetadata(Relation rel, BTMetaPageData *metad)
_bt_cachemetadata(Relation rel, BTMetaPageData *input)
{
BTMetaPageData *cached_metad;
/* We assume rel->rd_amcache was already freed by caller */
Assert(rel->rd_amcache == NULL);
rel->rd_amcache = MemoryContextAlloc(rel->rd_indexcxt,
sizeof(BTMetaPageData));
/*
* Meta page should be of supported version (should be already checked by
* caller).
*/
Assert(metad->btm_version >= BTREE_MIN_VERSION &&
metad->btm_version <= BTREE_VERSION);
/* Meta page should be of supported version */
Assert(input->btm_version >= BTREE_MIN_VERSION &&
input->btm_version <= BTREE_VERSION);
if (metad->btm_version == BTREE_VERSION)
cached_metad = (BTMetaPageData *) rel->rd_amcache;
if (input->btm_version >= BTREE_NOVAC_VERSION)
{
/* Last version of meta-data, no need to upgrade */
memcpy(rel->rd_amcache, metad, sizeof(BTMetaPageData));
/* Version with compatible meta-data, no need to upgrade */
memcpy(cached_metad, input, sizeof(BTMetaPageData));
}
else
{
BTMetaPageData *cached_metad = (BTMetaPageData *) rel->rd_amcache;
/*
* Upgrade meta-data: copy available information from meta-page and
* fill new fields with default values.
*
* Note that we cannot upgrade to version 4+ without a REINDEX, since
* extensive on-disk changes are required.
*/
memcpy(rel->rd_amcache, metad, offsetof(BTMetaPageData, btm_oldest_btpo_xact));
cached_metad->btm_version = BTREE_VERSION;
memcpy(cached_metad, input, offsetof(BTMetaPageData, btm_oldest_btpo_xact));
cached_metad->btm_version = BTREE_NOVAC_VERSION;
cached_metad->btm_oldest_btpo_xact = InvalidTransactionId;
cached_metad->btm_last_cleanup_num_heap_tuples = -1.0;
}
}
/*
* Get metadata from share-locked buffer containing metapage, while performing
* standard sanity checks. Sanity checks here must match _bt_getroot().
*/
static BTMetaPageData *
_bt_getmeta(Relation rel, Buffer metabuf)
{
Page metapg;
BTPageOpaque metaopaque;
BTMetaPageData *metad;
metapg = BufferGetPage(metabuf);
metaopaque = (BTPageOpaque) PageGetSpecialPointer(metapg);
metad = BTPageGetMeta(metapg);
/* sanity-check the metapage */
if (!P_ISMETA(metaopaque) ||
metad->btm_magic != BTREE_MAGIC)
ereport(ERROR,
(errcode(ERRCODE_INDEX_CORRUPTED),
errmsg("index \"%s\" is not a btree",
RelationGetRelationName(rel))));
if (metad->btm_version < BTREE_MIN_VERSION ||
metad->btm_version > BTREE_VERSION)
ereport(ERROR,
(errcode(ERRCODE_INDEX_CORRUPTED),
errmsg("version mismatch in index \"%s\": file version %d, "
"current version %d, minimal supported version %d",
RelationGetRelationName(rel),
metad->btm_version, BTREE_VERSION, BTREE_MIN_VERSION)));
return metad;
}
/*
* _bt_update_meta_cleanup_info() -- Update cleanup-related information in
* the metapage.
@ -167,7 +206,7 @@ _bt_update_meta_cleanup_info(Relation rel, TransactionId oldestBtpoXact,
metad = BTPageGetMeta(metapg);
/* outdated version of metapage always needs rewrite */
if (metad->btm_version < BTREE_VERSION)
if (metad->btm_version < BTREE_NOVAC_VERSION)
needsRewrite = true;
else if (metad->btm_oldest_btpo_xact != oldestBtpoXact ||
metad->btm_last_cleanup_num_heap_tuples != numHeapTuples)
@ -186,7 +225,7 @@ _bt_update_meta_cleanup_info(Relation rel, TransactionId oldestBtpoXact,
START_CRIT_SECTION();
/* upgrade meta-page if needed */
if (metad->btm_version < BTREE_VERSION)
if (metad->btm_version < BTREE_NOVAC_VERSION)
_bt_upgrademetapage(metapg);
/* update cleanup-related information */
@ -202,6 +241,8 @@ _bt_update_meta_cleanup_info(Relation rel, TransactionId oldestBtpoXact,
XLogBeginInsert();
XLogRegisterBuffer(0, metabuf, REGBUF_WILL_INIT | REGBUF_STANDARD);
Assert(metad->btm_version >= BTREE_NOVAC_VERSION);
md.version = metad->btm_version;
md.root = metad->btm_root;
md.level = metad->btm_level;
md.fastroot = metad->btm_fastroot;
@ -376,7 +417,7 @@ _bt_getroot(Relation rel, int access)
START_CRIT_SECTION();
/* upgrade metapage if needed */
if (metad->btm_version < BTREE_VERSION)
if (metad->btm_version < BTREE_NOVAC_VERSION)
_bt_upgrademetapage(metapg);
metad->btm_root = rootblkno;
@ -400,6 +441,8 @@ _bt_getroot(Relation rel, int access)
XLogRegisterBuffer(0, rootbuf, REGBUF_WILL_INIT);
XLogRegisterBuffer(2, metabuf, REGBUF_WILL_INIT | REGBUF_STANDARD);
Assert(metad->btm_version >= BTREE_NOVAC_VERSION);
md.version = metad->btm_version;
md.root = rootblkno;
md.level = 0;
md.fastroot = rootblkno;
@ -595,37 +638,12 @@ _bt_getrootheight(Relation rel)
{
BTMetaPageData *metad;
/*
* We can get what we need from the cached metapage data. If it's not
* cached yet, load it. Sanity checks here must match _bt_getroot().
*/
if (rel->rd_amcache == NULL)
{
Buffer metabuf;
Page metapg;
BTPageOpaque metaopaque;
metabuf = _bt_getbuf(rel, BTREE_METAPAGE, BT_READ);
metapg = BufferGetPage(metabuf);
metaopaque = (BTPageOpaque) PageGetSpecialPointer(metapg);
metad = BTPageGetMeta(metapg);
/* sanity-check the metapage */
if (!P_ISMETA(metaopaque) ||
metad->btm_magic != BTREE_MAGIC)
ereport(ERROR,
(errcode(ERRCODE_INDEX_CORRUPTED),
errmsg("index \"%s\" is not a btree",
RelationGetRelationName(rel))));
if (metad->btm_version < BTREE_MIN_VERSION ||
metad->btm_version > BTREE_VERSION)
ereport(ERROR,
(errcode(ERRCODE_INDEX_CORRUPTED),
errmsg("version mismatch in index \"%s\": file version %d, "
"current version %d, minimal supported version %d",
RelationGetRelationName(rel),
metad->btm_version, BTREE_VERSION, BTREE_MIN_VERSION)));
metad = _bt_getmeta(rel, metabuf);
/*
* If there's no root page yet, _bt_getroot() doesn't expect a cache
@ -642,19 +660,70 @@ _bt_getrootheight(Relation rel)
* Cache the metapage data for next time
*/
_bt_cachemetadata(rel, metad);
/* We shouldn't have cached it if any of these fail */
Assert(metad->btm_magic == BTREE_MAGIC);
Assert(metad->btm_version >= BTREE_NOVAC_VERSION);
Assert(metad->btm_fastroot != P_NONE);
_bt_relbuf(rel, metabuf);
}
/* Get cached page */
metad = (BTMetaPageData *) rel->rd_amcache;
/* We shouldn't have cached it if any of these fail */
Assert(metad->btm_magic == BTREE_MAGIC);
Assert(metad->btm_version == BTREE_VERSION);
Assert(metad->btm_fastroot != P_NONE);
return metad->btm_fastlevel;
}
/*
* _bt_heapkeyspace() -- is heap TID being treated as a key?
*
* This is used to determine the rules that must be used to descend a
* btree. Version 4 indexes treat heap TID as a tiebreaker attribute.
* pg_upgrade'd version 3 indexes need extra steps to preserve reasonable
* performance when inserting a new BTScanInsert-wise duplicate tuple
* among many leaf pages already full of such duplicates.
*/
bool
_bt_heapkeyspace(Relation rel)
{
BTMetaPageData *metad;
if (rel->rd_amcache == NULL)
{
Buffer metabuf;
metabuf = _bt_getbuf(rel, BTREE_METAPAGE, BT_READ);
metad = _bt_getmeta(rel, metabuf);
/*
* If there's no root page yet, _bt_getroot() doesn't expect a cache
* to be made, so just stop here. (XXX perhaps _bt_getroot() should
* be changed to allow this case.)
*/
if (metad->btm_root == P_NONE)
{
uint32 btm_version = metad->btm_version;
_bt_relbuf(rel, metabuf);
return btm_version > BTREE_NOVAC_VERSION;
}
/*
* Cache the metapage data for next time
*/
_bt_cachemetadata(rel, metad);
/* We shouldn't have cached it if any of these fail */
Assert(metad->btm_magic == BTREE_MAGIC);
Assert(metad->btm_version >= BTREE_NOVAC_VERSION);
Assert(metad->btm_fastroot != P_NONE);
_bt_relbuf(rel, metabuf);
}
/* Get cached page */
metad = (BTMetaPageData *) rel->rd_amcache;
return metad->btm_version > BTREE_NOVAC_VERSION;
}
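A compressed sketch of the version gate that _bt_heapkeyspace() and _bt_upgrademetapage() implement between them, using toy constants that mirror BTREE_MIN_VERSION, BTREE_NOVAC_VERSION and BTREE_VERSION (the buffer locking and rd_amcache handling are deliberately left out):

#include <stdbool.h>
#include <stdio.h>

#define TOY_BTREE_MIN_VERSION   2   /* oldest readable on-disk format */
#define TOY_BTREE_NOVAC_VERSION 3   /* has btm_oldest_btpo_xact etc. */
#define TOY_BTREE_VERSION       4   /* heap TID is a tiebreaker key */

/* true when heap TID must be treated as a key attribute */
static bool toy_heapkeyspace(int metapage_version)
{
    return metapage_version > TOY_BTREE_NOVAC_VERSION;
}

/* true when the metapage can be upgraded in place (no REINDEX needed) */
static bool toy_upgradable_in_place(int metapage_version)
{
    return metapage_version >= TOY_BTREE_MIN_VERSION &&
           metapage_version < TOY_BTREE_NOVAC_VERSION;
}

int main(void)
{
    for (int v = TOY_BTREE_MIN_VERSION; v <= TOY_BTREE_VERSION; v++)
        printf("v%d: heapkeyspace=%d in-place-upgrade=%d\n",
               v, toy_heapkeyspace(v), toy_upgradable_in_place(v));
    return 0;
}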
/*
* _bt_checkpage() -- Verify that a freshly-read page looks sane.
*/
@ -1123,11 +1192,12 @@ _bt_is_page_halfdead(Relation rel, BlockNumber blk)
* right sibling.
*
* "child" is the leaf page we wish to delete, and "stack" is a search stack
* leading to it (approximately). Note that we will update the stack
* entry(s) to reflect current downlink positions --- this is essentially the
* same as the corresponding step of splitting, and is not expected to affect
* caller. The caller should initialize *target and *rightsib to the leaf
* page and its right sibling.
* leading to it (it actually leads to the leftmost leaf page with a high key
* matching that of the page to be deleted in !heapkeyspace indexes). Note
* that we will update the stack entry(s) to reflect current downlink
* positions --- this is essentially the same as the corresponding step of
* splitting, and is not expected to affect caller. The caller should
* initialize *target and *rightsib to the leaf page and its right sibling.
*
* Note: it's OK to release page locks on any internal pages between the leaf
* and *topparent, because a safe deletion can't become unsafe due to
@ -1149,8 +1219,10 @@ _bt_lock_branch_parent(Relation rel, BlockNumber child, BTStack stack,
BlockNumber leftsib;
/*
* Locate the downlink of "child" in the parent (updating the stack entry
* if needed)
* Locate the downlink of "child" in the parent, updating the stack entry
* if needed. This is how !heapkeyspace indexes deal with having
* non-unique high keys in leaf level pages. Even heapkeyspace indexes
* can have a stale stack due to insertions into the parent.
*/
stack->bts_btentry = child;
pbuf = _bt_getstackbuf(rel, stack);
@ -1362,9 +1434,10 @@ _bt_pagedel(Relation rel, Buffer buf)
{
/*
* We need an approximate pointer to the page's parent page. We
* use the standard search mechanism to search for the page's high
* key; this will give us a link to either the current parent or
* someplace to its left (if there are multiple equal high keys).
* use a variant of the standard search mechanism to search for
* the page's high key; this will give us a link to either the
* current parent or someplace to its left (if there are multiple
* equal high keys, which is possible with !heapkeyspace indexes).
*
* Also check if this is the right-half of an incomplete split
* (see comment above).
@ -1422,7 +1495,8 @@ _bt_pagedel(Relation rel, Buffer buf)
/* we need an insertion scan key for the search, so build one */
itup_key = _bt_mkscankey(rel, targetkey);
/* get stack to leaf page by searching index */
/* find the leftmost leaf page with matching pivot/high key */
itup_key->pivotsearch = true;
stack = _bt_search(rel, itup_key, &lbuf, BT_READ, NULL);
/* don't need a lock or second pin on the page */
_bt_relbuf(rel, lbuf);
@ -1969,7 +2043,7 @@ _bt_unlink_halfdead_page(Relation rel, Buffer leafbuf, bool *rightsib_empty)
if (BufferIsValid(metabuf))
{
/* upgrade metapage if needed */
if (metad->btm_version < BTREE_VERSION)
if (metad->btm_version < BTREE_NOVAC_VERSION)
_bt_upgrademetapage(metapg);
metad->btm_fastroot = rightsib;
metad->btm_fastlevel = targetlevel;
@ -2017,6 +2091,8 @@ _bt_unlink_halfdead_page(Relation rel, Buffer leafbuf, bool *rightsib_empty)
{
XLogRegisterBuffer(4, metabuf, REGBUF_WILL_INIT | REGBUF_STANDARD);
Assert(metad->btm_version >= BTREE_NOVAC_VERSION);
xlmeta.version = metad->btm_version;
xlmeta.root = metad->btm_root;
xlmeta.level = metad->btm_level;
xlmeta.fastroot = metad->btm_fastroot;

View File

@ -794,7 +794,7 @@ _bt_vacuum_needs_cleanup(IndexVacuumInfo *info)
metapg = BufferGetPage(metabuf);
metad = BTPageGetMeta(metapg);
if (metad->btm_version < BTREE_VERSION)
if (metad->btm_version < BTREE_NOVAC_VERSION)
{
/*
* Do cleanup if metapage needs upgrade, because we don't have

View File

@ -152,8 +152,12 @@ _bt_search(Relation rel, BTScanInsert key, Buffer *bufP, int access,
* downlink (block) to uniquely identify the index entry, in case it
* moves right while we're working lower in the tree. See the paper
* by Lehman and Yao for how this is detected and handled. (We use the
* child link to disambiguate duplicate keys in the index -- Lehman
* and Yao disallow duplicate keys.)
* child link during the second half of a page split -- if caller ends
* up splitting the child it usually ends up inserting a new pivot
* tuple for child's new right sibling immediately after the original
* bts_offset offset recorded here. The downlink block will be needed
* to check if bts_offset remains the position of this same pivot
* tuple.)
*/
new_stack = (BTStack) palloc(sizeof(BTStackData));
new_stack->bts_blkno = par_blkno;
@ -251,11 +255,13 @@ _bt_moveright(Relation rel,
/*
* When nextkey = false (normal case): if the scan key that brought us to
* this page is > the high key stored on the page, then the page has split
* and we need to move right. (If the scan key is equal to the high key,
* we might or might not need to move right; have to scan the page first
* anyway.)
* and we need to move right. (pg_upgrade'd !heapkeyspace indexes could
* have some duplicates to the right as well as the left, but that's
* something that's only ever dealt with on the leaf level, after
* _bt_search has found an initial leaf page.)
*
* When nextkey = true: move right if the scan key is >= page's high key.
* (Note that key.scantid cannot be set in this case.)
*
* The page could even have split more than once, so scan as far as
* needed.
@ -347,6 +353,9 @@ _bt_binsrch(Relation rel,
int32 result,
cmpval;
/* Requesting nextkey semantics while using scantid seems nonsensical */
Assert(!key->nextkey || key->scantid == NULL);
page = BufferGetPage(buf);
opaque = (BTPageOpaque) PageGetSpecialPointer(page);
@ -554,10 +563,14 @@ _bt_compare(Relation rel,
TupleDesc itupdesc = RelationGetDescr(rel);
BTPageOpaque opaque = (BTPageOpaque) PageGetSpecialPointer(page);
IndexTuple itup;
ItemPointer heapTid;
ScanKey scankey;
int ncmpkey;
int ntupatts;
Assert(_bt_check_natts(rel, page, offnum));
Assert(_bt_check_natts(rel, key->heapkeyspace, page, offnum));
Assert(key->keysz <= IndexRelationGetNumberOfKeyAttributes(rel));
Assert(key->heapkeyspace || key->scantid == NULL);
/*
* Force result ">" if target item is first data item on an internal page
@ -567,6 +580,7 @@ _bt_compare(Relation rel,
return 1;
itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
ntupatts = BTreeTupleGetNAtts(itup, rel);
/*
* The scan key is set up with the attribute number associated with each
@ -580,8 +594,10 @@ _bt_compare(Relation rel,
* _bt_first).
*/
ncmpkey = Min(ntupatts, key->keysz);
Assert(key->heapkeyspace || ncmpkey == key->keysz);
scankey = key->scankeys;
for (int i = 1; i <= key->keysz; i++)
for (int i = 1; i <= ncmpkey; i++)
{
Datum datum;
bool isNull;
@ -632,8 +648,77 @@ _bt_compare(Relation rel,
scankey++;
}
/* if we get here, the keys are equal */
return 0;
/*
* All non-truncated attributes (other than heap TID) were found to be
* equal. Treat truncated attributes as minus infinity when scankey has a
* key attribute value that would otherwise be compared directly.
*
* Note: it doesn't matter if ntupatts includes non-key attributes;
* scankey won't, so explicitly excluding non-key attributes isn't
* necessary.
*/
if (key->keysz > ntupatts)
return 1;
/*
* Use the heap TID attribute and scantid to try to break the tie. The
* rules are the same as any other key attribute -- only the
* representation differs.
*/
heapTid = BTreeTupleGetHeapTID(itup);
if (key->scantid == NULL)
{
/*
* Most searches have a scankey that is considered greater than a
* truncated pivot tuple if and when the scankey has equal values for
* attributes up to and including the least significant untruncated
* attribute in tuple.
*
* For example, if an index has the minimum two attributes (single
* user key attribute, plus heap TID attribute), and a page's high key
* is ('foo', -inf), and scankey is ('foo', <omitted>), the search
* will not descend to the page to the left. The search will descend
* right instead. The truncated attribute in pivot tuple means that
* all non-pivot tuples on the page to the left are strictly < 'foo',
* so it isn't necessary to descend left. In other words, search
* doesn't have to descend left because it isn't interested in a match
* that has a heap TID value of -inf.
*
* However, some searches (pivotsearch searches) actually require that
* we descend left when this happens. -inf is treated as a possible
* match for omitted scankey attribute(s). This is needed by page
* deletion, which must re-find leaf pages that are targets for
* deletion using their high keys.
*
* Note: the heap TID part of the test ensures that scankey is being
* compared to a pivot tuple with one or more truncated key
* attributes.
*
* Note: pg_upgrade'd !heapkeyspace indexes must always descend to the
* left here, since they have no heap TID attribute (and cannot have
* any -inf key values in any case, since truncation can only remove
* non-key attributes). !heapkeyspace searches must always be
* prepared to deal with matches on both sides of the pivot once the
* leaf level is reached.
*/
if (key->heapkeyspace && !key->pivotsearch &&
key->keysz == ntupatts && heapTid == NULL)
return 1;
/* All provided scankey arguments found to be equal */
return 0;
}
/*
* Treat truncated heap TID as minus infinity, since scankey has a key
* attribute value (scantid) that would otherwise be compared directly
*/
Assert(key->keysz == IndexRelationGetNumberOfKeyAttributes(rel));
if (heapTid == NULL)
return 1;
Assert(ntupatts >= IndexRelationGetNumberOfKeyAttributes(rel));
return ItemPointerCompare(key->scantid, heapTid);
}
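The tail that this patch adds to _bt_compare() can be restated as a small decision procedure. The sketch below assumes a heapkeyspace index, models key attributes as plain ints, and uses an invented ToyTid in place of ItemPointerData; truncated attributes and a truncated heap TID act as minus infinity, except that an ordinary search with no scantid deliberately lands to the right of an all-equal truncated pivot, while a pivotsearch treats it as a match.

#include <stdbool.h>
#include <stdio.h>

typedef struct ToyTid { unsigned block; unsigned offset; } ToyTid;

static int toy_tid_cmp(const ToyTid *a, const ToyTid *b)
{
    if (a->block != b->block)
        return a->block < b->block ? -1 : 1;
    if (a->offset != b->offset)
        return a->offset < b->offset ? -1 : 1;
    return 0;
}

/*
 * Compare an insertion scankey (keys[0..keysz-1], optional scantid) against
 * a tuple whose first tupnatts key attributes are untruncated.  heaptid is
 * NULL when the tuple is a pivot whose heap TID was truncated away.
 */
static int
toy_compare(const int *keys, int keysz, const ToyTid *scantid, bool pivotsearch,
            const int *tupkeys, int tupnatts, const ToyTid *heaptid)
{
    int ncmpkey = keysz < tupnatts ? keysz : tupnatts;

    for (int i = 0; i < ncmpkey; i++)
        if (keys[i] != tupkeys[i])
            return keys[i] < tupkeys[i] ? -1 : 1;
    if (keysz > tupnatts)
        return 1;               /* truncated attribute acts as minus infinity */
    if (scantid == NULL)
    {
        /* ordinary searches don't want a match on a -inf heap TID */
        if (!pivotsearch && keysz == tupnatts && heaptid == NULL)
            return 1;
        return 0;
    }
    if (heaptid == NULL)
        return 1;               /* truncated heap TID acts as minus infinity */
    return toy_tid_cmp(scantid, heaptid);
}

int main(void)
{
    int key[] = {42};
    int pivot[] = {42};

    /* all key attributes equal, pivot's heap TID truncated: search goes right */
    printf("%d\n", toy_compare(key, 1, NULL, false, pivot, 1, NULL));
    /* a pivotsearch (as used by page deletion) treats the same pivot as equal */
    printf("%d\n", toy_compare(key, 1, NULL, true, pivot, 1, NULL));
    return 0;
}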
/*
@ -1148,7 +1233,10 @@ _bt_first(IndexScanDesc scan, ScanDirection dir)
}
/* Initialize remaining insertion scan key fields */
inskey.heapkeyspace = _bt_heapkeyspace(rel);
inskey.nextkey = nextkey;
inskey.pivotsearch = false;
inskey.scantid = NULL;
inskey.keysz = keysCount;
/*

View File

@ -755,6 +755,7 @@ _bt_sortaddtup(Page page,
{
trunctuple = *itup;
trunctuple.t_info = sizeof(IndexTupleData);
/* Deliberately zero INDEX_ALT_TID_MASK bits */
BTreeTupleSetNAtts(&trunctuple, 0);
itup = &trunctuple;
itemsize = sizeof(IndexTupleData);
@ -808,8 +809,6 @@ _bt_buildadd(BTWriteState *wstate, BTPageState *state, IndexTuple itup)
OffsetNumber last_off;
Size pgspc;
Size itupsz;
int indnatts = IndexRelationGetNumberOfAttributes(wstate->index);
int indnkeyatts = IndexRelationGetNumberOfKeyAttributes(wstate->index);
/*
* This is a handy place to check for cancel interrupts during the btree
@ -826,27 +825,21 @@ _bt_buildadd(BTWriteState *wstate, BTPageState *state, IndexTuple itup)
itupsz = MAXALIGN(itupsz);
/*
* Check whether the item can fit on a btree page at all. (Eventually, we
* ought to try to apply TOAST methods if not.) We actually need to be
* able to fit three items on every page, so restrict any one item to 1/3
* the per-page available space. Note that at this point, itupsz doesn't
* include the ItemId.
* Check whether the item can fit on a btree page at all.
*
* NOTE: similar code appears in _bt_insertonpg() to defend against
* oversize items being inserted into an already-existing index. But
* during creation of an index, we don't go through there.
* Every newly built index will treat heap TID as part of the keyspace,
* which imposes the requirement that new high keys must occasionally have
* a heap TID appended within _bt_truncate(). That may leave a new pivot
* tuple one or two MAXALIGN() quantums larger than the original first
* right tuple it's derived from. v4 deals with the problem by decreasing
* the limit on the size of tuples inserted on the leaf level by the same
* small amount. Enforce the new v4+ limit on the leaf level, and the old
* limit on internal levels, since pivot tuples may need to make use of
* the reserved space. This should never fail on internal pages.
*/
if (itupsz > BTMaxItemSize(npage))
ereport(ERROR,
(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
errmsg("index row size %zu exceeds maximum %zu for index \"%s\"",
itupsz, BTMaxItemSize(npage),
RelationGetRelationName(wstate->index)),
errhint("Values larger than 1/3 of a buffer page cannot be indexed.\n"
"Consider a function index of an MD5 hash of the value, "
"or use full text indexing."),
errtableconstraint(wstate->heap,
RelationGetRelationName(wstate->index))));
if (unlikely(itupsz > BTMaxItemSize(npage)))
_bt_check_third_page(wstate->index, wstate->heap,
state->btps_level == 0, npage, itup);
/*
* Check to see if page is "full". It's definitely full if the item won't
@ -892,24 +885,35 @@ _bt_buildadd(BTWriteState *wstate, BTPageState *state, IndexTuple itup)
ItemIdSetUnused(ii); /* redundant */
((PageHeader) opage)->pd_lower -= sizeof(ItemIdData);
if (indnkeyatts != indnatts && P_ISLEAF(opageop))
if (P_ISLEAF(opageop))
{
IndexTuple lastleft;
IndexTuple truncated;
Size truncsz;
/*
* Truncate any non-key attributes from high key on leaf level
* (i.e. truncate on leaf level if we're building an INCLUDE
* index). This is only done at the leaf level because downlinks
* Truncate away any unneeded attributes from high key on leaf
* level. This is only done at the leaf level because downlinks
* in internal pages are either negative infinity items, or get
* their contents from copying from one level down. See also:
* _bt_split().
*
* We don't try to bias our choice of split point to make it more
* likely that _bt_truncate() can truncate away more attributes,
* whereas the split point passed to _bt_split() is chosen much
* more delicately. Suffix truncation is mostly useful because it
* improves space utilization for workloads with random
* insertions. It doesn't seem worthwhile to add logic for
* choosing a split point here for a benefit that is bound to be
* much smaller.
*
* Since the truncated tuple is probably smaller than the
* original, it cannot just be copied in place (besides, we want
* to actually save space on the leaf page). We delete the
* original high key, and add our own truncated high key at the
* same offset.
* same offset. It's okay if the truncated tuple is slightly
* larger due to containing a heap TID value, since this case is
* known to _bt_check_third_page(), which reserves space.
*
* Note that the page layout won't be changed very much. oitup is
* already located at the physical beginning of tuple space, so we
@ -917,7 +921,11 @@ _bt_buildadd(BTWriteState *wstate, BTPageState *state, IndexTuple itup)
* the latter portion of the space occupied by the original tuple.
* This is fairly cheap.
*/
truncated = _bt_nonkey_truncate(wstate->index, oitup);
ii = PageGetItemId(opage, OffsetNumberPrev(last_off));
lastleft = (IndexTuple) PageGetItem(opage, ii);
truncated = _bt_truncate(wstate->index, lastleft, oitup,
wstate->inskey);
truncsz = IndexTupleSize(truncated);
PageIndexTupleDelete(opage, P_HIKEY);
_bt_sortaddtup(opage, truncsz, truncated, P_HIKEY);
@ -936,8 +944,9 @@ _bt_buildadd(BTWriteState *wstate, BTPageState *state, IndexTuple itup)
if (state->btps_next == NULL)
state->btps_next = _bt_pagestate(wstate, state->btps_level + 1);
Assert(BTreeTupleGetNAtts(state->btps_minkey, wstate->index) ==
IndexRelationGetNumberOfKeyAttributes(wstate->index) ||
Assert((BTreeTupleGetNAtts(state->btps_minkey, wstate->index) <=
IndexRelationGetNumberOfKeyAttributes(wstate->index) &&
BTreeTupleGetNAtts(state->btps_minkey, wstate->index) > 0) ||
P_LEFTMOST(opageop));
Assert(BTreeTupleGetNAtts(state->btps_minkey, wstate->index) == 0 ||
!P_LEFTMOST(opageop));
@ -982,7 +991,7 @@ _bt_buildadd(BTWriteState *wstate, BTPageState *state, IndexTuple itup)
* the first item for a page is copied from the prior page in the code
* above. Since the minimum key for an entire level is only used as a
* minus infinity downlink, and never as a high key, there is no need to
* truncate away non-key attributes at this point.
* truncate away suffix attributes at this point.
*/
if (last_off == P_HIKEY)
{
@ -1041,8 +1050,9 @@ _bt_uppershutdown(BTWriteState *wstate, BTPageState *state)
}
else
{
Assert(BTreeTupleGetNAtts(s->btps_minkey, wstate->index) ==
IndexRelationGetNumberOfKeyAttributes(wstate->index) ||
Assert((BTreeTupleGetNAtts(s->btps_minkey, wstate->index) <=
IndexRelationGetNumberOfKeyAttributes(wstate->index) &&
BTreeTupleGetNAtts(s->btps_minkey, wstate->index) > 0) ||
P_LEFTMOST(opaque));
Assert(BTreeTupleGetNAtts(s->btps_minkey, wstate->index) == 0 ||
!P_LEFTMOST(opaque));
@ -1135,6 +1145,8 @@ _bt_load(BTWriteState *wstate, BTSpool *btspool, BTSpool *btspool2)
}
else if (itup != NULL)
{
int32 compare = 0;
for (i = 1; i <= keysz; i++)
{
SortSupport entry;
@ -1142,7 +1154,6 @@ _bt_load(BTWriteState *wstate, BTSpool *btspool, BTSpool *btspool2)
attrDatum2;
bool isNull1,
isNull2;
int32 compare;
entry = sortKeys + i - 1;
attrDatum1 = index_getattr(itup, i, tupdes, &isNull1);
@ -1159,6 +1170,20 @@ _bt_load(BTWriteState *wstate, BTSpool *btspool, BTSpool *btspool2)
else if (compare < 0)
break;
}
/*
* If key values are equal, we sort on ItemPointer. This is
* required for btree indexes, since heap TID is treated as an
* implicit last key attribute in order to ensure that all
* keys in the index are physically unique.
*/
if (compare == 0)
{
compare = ItemPointerCompare(&itup->t_tid, &itup2->t_tid);
Assert(compare != 0);
if (compare > 0)
load1 = false;
}
}
else
load1 = false;
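The merge step above, and comparetup_index_btree() further down, apply the same rule whenever key values compare equal: fall back to the ItemPointer, which is guaranteed unique. A toy version of just that fallback (invented ToyTid in place of ItemPointerData):

#include <assert.h>
#include <stdio.h>

typedef struct ToyTid { unsigned block; unsigned offset; } ToyTid;

/* Break a key-level tie on the heap TID, which is always unique */
static int toy_tiebreak(int keycmp, ToyTid a, ToyTid b)
{
    if (keycmp != 0)
        return keycmp;
    if (a.block != b.block)
        return a.block < b.block ? -1 : 1;
    assert(a.offset != b.offset);   /* two index tuples never share a TID */
    return a.offset < b.offset ? -1 : 1;
}

int main(void)
{
    ToyTid t1 = {7, 2}, t2 = {7, 5};

    printf("%d\n", toy_tiebreak(0, t1, t2));    /* prints -1 */
    return 0;
}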

View File

@ -49,6 +49,8 @@ static void _bt_mark_scankey_required(ScanKey skey);
static bool _bt_check_rowcompare(ScanKey skey,
IndexTuple tuple, TupleDesc tupdesc,
ScanDirection dir, bool *continuescan);
static int _bt_keep_natts(Relation rel, IndexTuple lastleft,
IndexTuple firstright, BTScanInsert itup_key);
/*
@ -56,9 +58,26 @@ static bool _bt_check_rowcompare(ScanKey skey,
* Build an insertion scan key that contains comparison data from itup
* as well as comparator routines appropriate to the key datatypes.
*
* Result is intended for use with _bt_compare(). Callers that don't
* need to fill out the insertion scankey arguments (e.g. they use an
* ad-hoc comparison routine) can pass a NULL index tuple.
* When itup is a non-pivot tuple, the returned insertion scan key is
* suitable for finding a place for it to go on the leaf level. Pivot
* tuples can be used to re-find leaf page with matching high key, but
* then caller needs to set scan key's pivotsearch field to true. This
* allows caller to search for a leaf page with a matching high key,
* which is usually to the left of the first leaf page a non-pivot match
* might appear on.
*
* The result is intended for use with _bt_compare() and _bt_truncate().
* Callers that don't need to fill out the insertion scankey arguments
* (e.g. they use an ad-hoc comparison routine, or only need a scankey
* for _bt_truncate()) can pass a NULL index tuple. The scankey will
* be initialized as if an "all truncated" pivot tuple was passed
* instead.
*
* Note that we may occasionally have to share lock the metapage to
* determine whether or not the keys in the index are expected to be
* unique (i.e. if this is a "heapkeyspace" index). We assume a
* heapkeyspace index when caller passes a NULL tuple, allowing index
* build callers to avoid accessing the non-existent metapage.
*/
BTScanInsert
_bt_mkscankey(Relation rel, IndexTuple itup)
@ -79,13 +98,18 @@ _bt_mkscankey(Relation rel, IndexTuple itup)
Assert(tupnatts <= IndexRelationGetNumberOfAttributes(rel));
/*
* We'll execute search using scan key constructed on key columns. Non-key
* (INCLUDE index) columns are always omitted from scan keys.
* We'll execute search using scan key constructed on key columns.
* Truncated attributes and non-key attributes are omitted from the final
* scan key.
*/
key = palloc(offsetof(BTScanInsertData, scankeys) +
sizeof(ScanKeyData) * indnkeyatts);
key->heapkeyspace = itup == NULL || _bt_heapkeyspace(rel);
key->nextkey = false;
key->pivotsearch = false;
key->keysz = Min(indnkeyatts, tupnatts);
key->scantid = key->heapkeyspace && itup ?
BTreeTupleGetHeapTID(itup) : NULL;
skey = key->scankeys;
for (i = 0; i < indnkeyatts; i++)
{
@ -101,9 +125,9 @@ _bt_mkscankey(Relation rel, IndexTuple itup)
procinfo = index_getprocinfo(rel, i + 1, BTORDER_PROC);
/*
* Key arguments built when caller provides no tuple are
* defensively represented as NULL values. They should never be
* used.
* Key arguments built from truncated attributes (or when caller
* provides no tuple) are defensively represented as NULL values. They
* should never be used.
*/
if (i < tupnatts)
arg = index_getattr(itup, i + 1, itupdesc, &null);
@ -2041,38 +2065,234 @@ btproperty(Oid index_oid, int attno,
}
/*
* _bt_nonkey_truncate() -- create tuple without non-key suffix attributes.
* _bt_truncate() -- create tuple without unneeded suffix attributes.
*
* Returns truncated index tuple allocated in caller's memory context, with key
* attributes copied from caller's itup argument. Currently, suffix truncation
* is only performed to create pivot tuples in INCLUDE indexes, but some day it
* could be generalized to remove suffix attributes after the first
* distinguishing key attribute.
* Returns truncated pivot index tuple allocated in caller's memory context,
* with key attributes copied from caller's firstright argument. If rel is
* an INCLUDE index, non-key attributes will definitely be truncated away,
* since they're not part of the key space. More aggressive suffix
* truncation can take place when it's clear that the returned tuple does not
* need one or more suffix key attributes. We only need to keep firstright
* attributes up to and including the first non-lastleft-equal attribute.
* Caller's insertion scankey is used to compare the tuples; the scankey's
* argument values are not considered here.
*
* Truncated tuple is guaranteed to be no larger than the original, which is
* important for staying under the 1/3 of a page restriction on tuple size.
* Sometimes this routine will return a new pivot tuple that takes up more
* space than firstright, because a new heap TID attribute had to be added to
* distinguish lastleft from firstright. This should only happen when the
* caller is in the process of splitting a leaf page that has many logical
* duplicates, where it's unavoidable.
*
* Note that returned tuple's t_tid offset will hold the number of attributes
* present, so the original item pointer offset is not represented. Caller
* should only change truncated tuple's downlink.
* should only change truncated tuple's downlink. Note also that truncated
* key attributes are treated as containing "minus infinity" values by
* _bt_compare().
*
* In the worst case (when a heap TID is appended) the size of the returned
* tuple is the size of the first right tuple plus an additional MAXALIGN()'d
* item pointer. This guarantee is important, since callers need to stay
* under the 1/3 of a page restriction on tuple size. If this routine is ever
* taught to truncate within an attribute/datum, it will need to avoid
* returning an enlarged tuple to caller when truncation + TOAST compression
* ends up enlarging the final datum.
*/
IndexTuple
_bt_nonkey_truncate(Relation rel, IndexTuple itup)
_bt_truncate(Relation rel, IndexTuple lastleft, IndexTuple firstright,
BTScanInsert itup_key)
{
int nkeyattrs = IndexRelationGetNumberOfKeyAttributes(rel);
IndexTuple truncated;
TupleDesc itupdesc = RelationGetDescr(rel);
int16 natts = IndexRelationGetNumberOfAttributes(rel);
int16 nkeyatts = IndexRelationGetNumberOfKeyAttributes(rel);
int keepnatts;
IndexTuple pivot;
ItemPointer pivotheaptid;
Size newsize;
/*
* We should only ever truncate leaf index tuples, which must have both
* key and non-key attributes. It's never okay to truncate a second time.
* We should only ever truncate leaf index tuples. It's never okay to
* truncate a second time.
*/
Assert(BTreeTupleGetNAtts(itup, rel) ==
IndexRelationGetNumberOfAttributes(rel));
Assert(BTreeTupleGetNAtts(lastleft, rel) == natts);
Assert(BTreeTupleGetNAtts(firstright, rel) == natts);
truncated = index_truncate_tuple(RelationGetDescr(rel), itup, nkeyattrs);
BTreeTupleSetNAtts(truncated, nkeyattrs);
/* Determine how many attributes must be kept in truncated tuple */
keepnatts = _bt_keep_natts(rel, lastleft, firstright, itup_key);
return truncated;
#ifdef DEBUG_NO_TRUNCATE
/* Force truncation to be ineffective for testing purposes */
keepnatts = nkeyatts + 1;
#endif
if (keepnatts <= natts)
{
IndexTuple tidpivot;
pivot = index_truncate_tuple(itupdesc, firstright, keepnatts);
/*
* If there is a distinguishing key attribute within new pivot tuple,
* there is no need to add an explicit heap TID attribute
*/
if (keepnatts <= nkeyatts)
{
BTreeTupleSetNAtts(pivot, keepnatts);
return pivot;
}
/*
* Only truncation of non-key attributes was possible, since key
* attributes are all equal. It's necessary to add a heap TID
* attribute to the new pivot tuple.
*/
Assert(natts != nkeyatts);
newsize = IndexTupleSize(pivot) + MAXALIGN(sizeof(ItemPointerData));
tidpivot = palloc0(newsize);
memcpy(tidpivot, pivot, IndexTupleSize(pivot));
/* cannot leak memory here */
pfree(pivot);
pivot = tidpivot;
}
else
{
/*
* No truncation was possible, since key attributes are all equal.
* It's necessary to add a heap TID attribute to the new pivot tuple.
*/
Assert(natts == nkeyatts);
newsize = IndexTupleSize(firstright) + MAXALIGN(sizeof(ItemPointerData));
pivot = palloc0(newsize);
memcpy(pivot, firstright, IndexTupleSize(firstright));
}
/*
* We have to use heap TID as a unique-ifier in the new pivot tuple, since
* no non-TID key attribute in the right item readily distinguishes the
* right side of the split from the left side. Use enlarged space that
* holds a copy of first right tuple; place a heap TID value within the
* extra space that remains at the end.
*
* nbtree conceptualizes this case as an inability to truncate away any
* key attribute. We must use an alternative representation of heap TID
* within pivots because heap TID is only treated as an attribute within
* nbtree (e.g., there is no pg_attribute entry).
*/
Assert(itup_key->heapkeyspace);
pivot->t_info &= ~INDEX_SIZE_MASK;
pivot->t_info |= newsize;
/*
* Lehman & Yao use lastleft as the leaf high key in all cases, but don't
* consider suffix truncation. It seems like a good idea to follow that
* example in cases where no truncation takes place -- use lastleft's heap
* TID. (This is also the closest value to negative infinity that's
* legally usable.)
*/
pivotheaptid = (ItemPointer) ((char *) pivot + newsize -
sizeof(ItemPointerData));
ItemPointerCopy(&lastleft->t_tid, pivotheaptid);
/*
* Lehman and Yao require that the downlink to the right page, which is to
* be inserted into the parent page in the second phase of a page split, be
* a strict lower bound on items on the right page, and a non-strict upper
* bound for items on the left page. Assert that heap TIDs follow these
* invariants, since a heap TID value is apparently needed as a
* tiebreaker.
*/
#ifndef DEBUG_NO_TRUNCATE
Assert(ItemPointerCompare(&lastleft->t_tid, &firstright->t_tid) < 0);
Assert(ItemPointerCompare(pivotheaptid, &lastleft->t_tid) >= 0);
Assert(ItemPointerCompare(pivotheaptid, &firstright->t_tid) < 0);
#else
/*
* Those invariants aren't guaranteed to hold for lastleft + firstright
* heap TID attribute values when they're considered here only because
* DEBUG_NO_TRUNCATE is defined (a heap TID is probably not actually
* needed as a tiebreaker). DEBUG_NO_TRUNCATE must therefore use a heap
* TID value that always works as a strict lower bound for items to the
* right. In particular, it must avoid using firstright's leading key
* attribute values along with lastleft's heap TID value when lastleft's
* TID happens to be greater than firstright's TID.
*/
ItemPointerCopy(&firstright->t_tid, pivotheaptid);
/*
* Pivot heap TID should never be fully equal to firstright. Note that
* the pivot heap TID will still end up equal to lastleft's heap TID when
* that's the only usable value.
*/
ItemPointerSetOffsetNumber(pivotheaptid,
OffsetNumberPrev(ItemPointerGetOffsetNumber(pivotheaptid)));
Assert(ItemPointerCompare(pivotheaptid, &firstright->t_tid) < 0);
#endif
BTreeTupleSetNAtts(pivot, nkeyatts);
BTreeTupleSetAltHeapTID(pivot);
return pivot;
}
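The memory layout step at the end of _bt_truncate(), where a heap TID is appended in the reserved space, can be sketched on its own. The toy below uses an 8-byte MAXALIGN and an invented ToyTid; the real code similarly stores lastleft's t_tid at pivot + newsize - sizeof(ItemPointerData).

#include <stdlib.h>
#include <string.h>
#include <stdio.h>

#define TOY_MAXALIGN(x) (((x) + 7) & ~((size_t) 7))

typedef struct ToyTid { unsigned block; unsigned offset; } ToyTid;

/*
 * Return a copy of "pivot" enlarged by one MAXALIGN()'d TID, with
 * lastleft's TID stored in the extra space at the very end.
 */
static void *
toy_append_heap_tid(const void *pivot, size_t size, ToyTid lastleft_tid,
                    size_t *newsize)
{
    char   *enlarged;

    *newsize = size + TOY_MAXALIGN(sizeof(ToyTid));
    enlarged = calloc(1, *newsize);
    memcpy(enlarged, pivot, size);
    memcpy(enlarged + *newsize - sizeof(ToyTid), &lastleft_tid,
           sizeof(ToyTid));
    return enlarged;
}

int main(void)
{
    char    payload[24] = "duplicate key bytes";
    ToyTid  tid = {123, 4};
    size_t  newsize;
    void   *pivot = toy_append_heap_tid(payload, sizeof(payload), tid, &newsize);

    printf("old=%zu new=%zu\n", sizeof(payload), newsize);
    free(pivot);
    return 0;
}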
/*
* _bt_keep_natts - how many key attributes to keep when truncating.
*
* Caller provides two tuples that enclose a split point. Caller's insertion
* scankey is used to compare the tuples; the scankey's argument values are
* not considered here.
*
* This can return a number of attributes that is one greater than the
* number of key attributes for the index relation. This indicates that the
* caller must use a heap TID as a unique-ifier in new pivot tuple.
*/
static int
_bt_keep_natts(Relation rel, IndexTuple lastleft, IndexTuple firstright,
BTScanInsert itup_key)
{
int nkeyatts = IndexRelationGetNumberOfKeyAttributes(rel);
TupleDesc itupdesc = RelationGetDescr(rel);
int keepnatts;
ScanKey scankey;
/*
* Be consistent about the representation of BTREE_VERSION 2/3 tuples
* across Postgres versions; don't allow new pivot tuples to have
* truncated key attributes there. _bt_compare() treats truncated key
* attributes as having the value minus infinity, which would break
* searches within !heapkeyspace indexes.
*/
if (!itup_key->heapkeyspace)
{
Assert(nkeyatts != IndexRelationGetNumberOfAttributes(rel));
return nkeyatts;
}
scankey = itup_key->scankeys;
keepnatts = 1;
for (int attnum = 1; attnum <= nkeyatts; attnum++, scankey++)
{
Datum datum1,
datum2;
bool isNull1,
isNull2;
datum1 = index_getattr(lastleft, attnum, itupdesc, &isNull1);
datum2 = index_getattr(firstright, attnum, itupdesc, &isNull2);
if (isNull1 != isNull2)
break;
if (!isNull1 &&
DatumGetInt32(FunctionCall2Coll(&scankey->sk_func,
scankey->sk_collation,
datum1,
datum2)) != 0)
break;
keepnatts++;
}
return keepnatts;
}
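Restated over plain integer attributes, _bt_keep_natts() keeps every attribute up to and including the first one where lastleft and firstright differ, and reports nkeyatts + 1 when they never differ (meaning the caller must fall back to the heap TID). A minimal sketch under that reading, ignoring NULLs and collations:

#include <stdio.h>

/*
 * Keep attributes 1..k of firstright, where k is the position of the first
 * attribute that differs from lastleft.  Returns nkeyatts + 1 when the two
 * tuples are equal on every key attribute.
 */
static int
toy_keep_natts(const int *lastleft, const int *firstright, int nkeyatts)
{
    int keepnatts = 1;

    for (int attnum = 1; attnum <= nkeyatts; attnum++)
    {
        if (lastleft[attnum - 1] != firstright[attnum - 1])
            break;
        keepnatts++;
    }
    return keepnatts;
}

int main(void)
{
    int lastleft[] = {1, 7, 9};
    int firstright[] = {1, 8, 2};

    /* attributes (1, 8) are enough to separate the halves: prints 2 */
    printf("%d\n", toy_keep_natts(lastleft, firstright, 3));
    return 0;
}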
/*
@ -2086,15 +2306,17 @@ _bt_nonkey_truncate(Relation rel, IndexTuple itup)
* preferred to calling here. That's usually more convenient, and is always
* more explicit. Call here instead when offnum's tuple may be a negative
* infinity tuple that uses the pre-v11 on-disk representation, or when a low
* context check is appropriate.
* context check is appropriate. This routine is as strict as possible about
* what is expected on each version of btree.
*/
bool
_bt_check_natts(Relation rel, Page page, OffsetNumber offnum)
_bt_check_natts(Relation rel, bool heapkeyspace, Page page, OffsetNumber offnum)
{
int16 natts = IndexRelationGetNumberOfAttributes(rel);
int16 nkeyatts = IndexRelationGetNumberOfKeyAttributes(rel);
BTPageOpaque opaque = (BTPageOpaque) PageGetSpecialPointer(page);
IndexTuple itup;
int tupnatts;
/*
* We cannot reliably test a deleted or half-deleted page, since they have
@ -2114,16 +2336,26 @@ _bt_check_natts(Relation rel, Page page, OffsetNumber offnum)
"BT_N_KEYS_OFFSET_MASK can't fit INDEX_MAX_KEYS");
itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
tupnatts = BTreeTupleGetNAtts(itup, rel);
if (P_ISLEAF(opaque))
{
if (offnum >= P_FIRSTDATAKEY(opaque))
{
/*
* Leaf tuples that are not the page high key (non-pivot tuples)
* should never be truncated
* Non-pivot tuples currently never use alternative heap TID
* representation -- even those within heapkeyspace indexes
*/
return BTreeTupleGetNAtts(itup, rel) == natts;
if ((itup->t_info & INDEX_ALT_TID_MASK) != 0)
return false;
/*
* Leaf tuples that are not the page high key (non-pivot tuples)
* should never be truncated. (Note that tupnatts must have been
* inferred, rather than coming from an explicit on-disk
* representation.)
*/
return tupnatts == natts;
}
else
{
@ -2133,8 +2365,15 @@ _bt_check_natts(Relation rel, Page page, OffsetNumber offnum)
*/
Assert(!P_RIGHTMOST(opaque));
/* Page high key tuple contains only key attributes */
return BTreeTupleGetNAtts(itup, rel) == nkeyatts;
/*
* !heapkeyspace high key tuple contains only key attributes. Note
* that tupnatts will only have been explicitly represented in
* !heapkeyspace indexes that happen to have non-key attributes.
*/
if (!heapkeyspace)
return tupnatts == nkeyatts;
/* Use generic heapkeyspace pivot tuple handling */
}
}
else /* !P_ISLEAF(opaque) */
@ -2146,7 +2385,11 @@ _bt_check_natts(Relation rel, Page page, OffsetNumber offnum)
* its high key) is its negative infinity tuple. Negative
* infinity tuples are always truncated to zero attributes. They
* are a particular kind of pivot tuple.
*
*/
if (heapkeyspace)
return tupnatts == 0;
/*
* The number of attributes won't be explicitly represented if the
* negative infinity tuple was generated during a page split that
* occurred with a version of Postgres before v11. There must be
@ -2157,18 +2400,109 @@ _bt_check_natts(Relation rel, Page page, OffsetNumber offnum)
* Prior to v11, downlinks always had P_HIKEY as their offset. Use
* that to decide if the tuple is a pre-v11 tuple.
*/
return BTreeTupleGetNAtts(itup, rel) == 0 ||
return tupnatts == 0 ||
((itup->t_info & INDEX_ALT_TID_MASK) == 0 &&
ItemPointerGetOffsetNumber(&(itup->t_tid)) == P_HIKEY);
}
else
{
/*
* Tuple contains only key attributes despite on is it page high
* key or not
* !heapkeyspace downlink tuple with separator key contains only
* key attributes. Note that tupnatts will only have been
* explicitly represented in !heapkeyspace indexes that happen to
* have non-key attributes.
*/
return BTreeTupleGetNAtts(itup, rel) == nkeyatts;
if (!heapkeyspace)
return tupnatts == nkeyatts;
/* Use generic heapkeyspace pivot tuple handling */
}
}
/* Handle heapkeyspace pivot tuples (excluding minus infinity items) */
Assert(heapkeyspace);
/*
* Explicit representation of the number of attributes is mandatory with
* heapkeyspace index pivot tuples, regardless of whether or not there are
* non-key attributes.
*/
if ((itup->t_info & INDEX_ALT_TID_MASK) == 0)
return false;
/*
* Heap TID is a tiebreaker key attribute, so it cannot be untruncated
* when any other key attribute is truncated
*/
if (BTreeTupleGetHeapTID(itup) != NULL && tupnatts != nkeyatts)
return false;
/*
* Pivot tuple must have at least one untruncated key attribute (minus
* infinity pivot tuples are the only exception). Pivot tuples can never
* represent that there is a value present for a key attribute that
* exceeds pg_index.indnkeyatts for the index.
*/
return tupnatts > 0 && tupnatts <= nkeyatts;
}
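For heapkeyspace pivot tuples, the checks added above boil down to three conditions. A sketch of just that predicate (invented name; the leaf/internal and !heapkeyspace branches are omitted):

#include <stdbool.h>
#include <stdio.h>

/*
 * A heapkeyspace pivot tuple (other than a minus infinity item) is sane when:
 * its attribute count is explicitly represented, a heap TID is only present
 * if no key attribute was truncated away, and it keeps at least one but no
 * more than nkeyatts key attributes.
 */
static bool
toy_check_heapkeyspace_pivot(bool explicit_natts, bool has_heap_tid,
                             int tupnatts, int nkeyatts)
{
    if (!explicit_natts)
        return false;
    if (has_heap_tid && tupnatts != nkeyatts)
        return false;
    return tupnatts > 0 && tupnatts <= nkeyatts;
}

int main(void)
{
    /* a pivot keeping one of two key attributes, no heap TID: valid */
    printf("%d\n", toy_check_heapkeyspace_pivot(true, false, 1, 2));
    /* heap TID present alongside a truncated key attribute: invalid */
    printf("%d\n", toy_check_heapkeyspace_pivot(true, true, 1, 2));
    return 0;
}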
/*
 * _bt_check_third_page() -- check whether tuple fits on a btree page at all.
*
* We actually need to be able to fit three items on every page, so restrict
* any one item to 1/3 the per-page available space. Note that itemsz should
* not include the ItemId overhead.
*
* It might be useful to apply TOAST methods rather than throw an error here.
* Using out of line storage would break assumptions made by suffix truncation
* and by contrib/amcheck, though.
*/
void
_bt_check_third_page(Relation rel, Relation heap, bool needheaptidspace,
Page page, IndexTuple newtup)
{
Size itemsz;
BTPageOpaque opaque;
itemsz = MAXALIGN(IndexTupleSize(newtup));
/* Double check item size against limit */
if (itemsz <= BTMaxItemSize(page))
return;
/*
* Tuple is probably too large to fit on page, but it's possible that the
* index uses version 2 or version 3, or that page is an internal page, in
* which case a slightly higher limit applies.
*/
if (!needheaptidspace && itemsz <= BTMaxItemSizeNoHeapTid(page))
return;
/*
* Internal page insertions cannot fail here, because that would mean that
* an earlier leaf level insertion that should have failed didn't
*/
opaque = (BTPageOpaque) PageGetSpecialPointer(page);
if (!P_ISLEAF(opaque))
elog(ERROR, "cannot insert oversized tuple of size %zu on internal page of index \"%s\"",
itemsz, RelationGetRelationName(rel));
ereport(ERROR,
(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
errmsg("index row size %zu exceeds btree version %u maximum %zu for index \"%s\"",
itemsz,
needheaptidspace ? BTREE_VERSION : BTREE_NOVAC_VERSION,
needheaptidspace ? BTMaxItemSize(page) :
BTMaxItemSizeNoHeapTid(page),
RelationGetRelationName(rel)),
errdetail("Index row references tuple (%u,%u) in relation \"%s\".",
ItemPointerGetBlockNumber(&newtup->t_tid),
ItemPointerGetOffsetNumber(&newtup->t_tid),
RelationGetRelationName(heap)),
errhint("Values larger than 1/3 of a buffer page cannot be indexed.\n"
"Consider a function index of an MD5 hash of the value, "
"or use full text indexing."),
errtableconstraint(heap, RelationGetRelationName(rel))));
}
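The relationship between the two limits that _bt_check_third_page() enforces is a single subtraction: the version 4 leaf limit is the old "three items per page" limit minus one MAXALIGN()'d item pointer, so a new high key can always gain a heap TID without exceeding the old ceiling. A sketch of that arithmetic; the 64-byte page overhead and ToyTid are placeholders, not the real header sizes.

#include <stdio.h>
#include <stddef.h>

#define TOY_MAXALIGN(x)      (((x) + 7) & ~((size_t) 7))
#define TOY_MAXALIGN_DOWN(x) ((x) & ~((size_t) 7))

typedef struct ToyTid { unsigned block; unsigned short offset; } ToyTid;

/* Placeholder for the fixed page overhead (header, 3 ItemIds, special space) */
#define TOY_PAGE_OVERHEAD 64

/* Old-style limit: three items must fit in what's left of the page */
static size_t toy_max_item_no_heaptid(size_t pagesize)
{
    return TOY_MAXALIGN_DOWN((pagesize - TOY_PAGE_OVERHEAD) / 3);
}

/* v4 leaf limit: reserve room for one appended MAXALIGN()'d heap TID */
static size_t toy_max_item(size_t pagesize)
{
    return toy_max_item_no_heaptid(pagesize) - TOY_MAXALIGN(sizeof(ToyTid));
}

int main(void)
{
    printf("no-heaptid limit: %zu\n", toy_max_item_no_heaptid(8192));
    printf("v4 leaf limit:    %zu\n", toy_max_item(8192));
    return 0;
}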

View File

@ -103,7 +103,7 @@ _bt_restore_meta(XLogReaderState *record, uint8 block_id)
md = BTPageGetMeta(metapg);
md->btm_magic = BTREE_MAGIC;
md->btm_version = BTREE_VERSION;
md->btm_version = xlrec->version;
md->btm_root = xlrec->root;
md->btm_level = xlrec->level;
md->btm_fastroot = xlrec->fastroot;
@ -202,7 +202,7 @@ btree_xlog_insert(bool isleaf, bool ismeta, XLogReaderState *record)
}
static void
btree_xlog_split(bool onleft, bool lhighkey, XLogReaderState *record)
btree_xlog_split(bool onleft, XLogReaderState *record)
{
XLogRecPtr lsn = record->EndRecPtr;
xl_btree_split *xlrec = (xl_btree_split *) XLogRecGetData(record);
@ -213,8 +213,6 @@ btree_xlog_split(bool onleft, bool lhighkey, XLogReaderState *record)
BTPageOpaque ropaque;
char *datapos;
Size datalen;
IndexTuple left_hikey = NULL;
Size left_hikeysz = 0;
BlockNumber leftsib;
BlockNumber rightsib;
BlockNumber rnext;
@ -248,20 +246,6 @@ btree_xlog_split(bool onleft, bool lhighkey, XLogReaderState *record)
_bt_restore_page(rpage, datapos, datalen);
/*
* When the high key isn't present in the wal record, then we assume it to
* be equal to the first key on the right page. It must be from the leaf
* level.
*/
if (!lhighkey)
{
ItemId hiItemId = PageGetItemId(rpage, P_FIRSTDATAKEY(ropaque));
Assert(isleaf);
left_hikey = (IndexTuple) PageGetItem(rpage, hiItemId);
left_hikeysz = ItemIdGetLength(hiItemId);
}
PageSetLSN(rpage, lsn);
MarkBufferDirty(rbuf);
@ -282,8 +266,10 @@ btree_xlog_split(bool onleft, bool lhighkey, XLogReaderState *record)
Page lpage = (Page) BufferGetPage(lbuf);
BTPageOpaque lopaque = (BTPageOpaque) PageGetSpecialPointer(lpage);
OffsetNumber off;
IndexTuple newitem = NULL;
Size newitemsz = 0;
IndexTuple newitem,
left_hikey;
Size newitemsz,
left_hikeysz;
Page newlpage;
OffsetNumber leftoff;
@ -298,13 +284,10 @@ btree_xlog_split(bool onleft, bool lhighkey, XLogReaderState *record)
}
/* Extract left hikey and its size (assuming 16-bit alignment) */
if (lhighkey)
{
left_hikey = (IndexTuple) datapos;
left_hikeysz = MAXALIGN(IndexTupleSize(left_hikey));
datapos += left_hikeysz;
datalen -= left_hikeysz;
}
left_hikey = (IndexTuple) datapos;
left_hikeysz = MAXALIGN(IndexTupleSize(left_hikey));
datapos += left_hikeysz;
datalen -= left_hikeysz;
Assert(datalen == 0);
@ -1003,16 +986,10 @@ btree_redo(XLogReaderState *record)
btree_xlog_insert(false, true, record);
break;
case XLOG_BTREE_SPLIT_L:
btree_xlog_split(true, false, record);
break;
case XLOG_BTREE_SPLIT_L_HIGHKEY:
btree_xlog_split(true, true, record);
btree_xlog_split(true, record);
break;
case XLOG_BTREE_SPLIT_R:
btree_xlog_split(false, false, record);
break;
case XLOG_BTREE_SPLIT_R_HIGHKEY:
btree_xlog_split(false, true, record);
btree_xlog_split(false, record);
break;
case XLOG_BTREE_VACUUM:
btree_xlog_vacuum(record);


@ -35,8 +35,6 @@ btree_desc(StringInfo buf, XLogReaderState *record)
}
case XLOG_BTREE_SPLIT_L:
case XLOG_BTREE_SPLIT_R:
case XLOG_BTREE_SPLIT_L_HIGHKEY:
case XLOG_BTREE_SPLIT_R_HIGHKEY:
{
xl_btree_split *xlrec = (xl_btree_split *) rec;
@ -130,12 +128,6 @@ btree_identify(uint8 info)
case XLOG_BTREE_SPLIT_R:
id = "SPLIT_R";
break;
case XLOG_BTREE_SPLIT_L_HIGHKEY:
id = "SPLIT_L_HIGHKEY";
break;
case XLOG_BTREE_SPLIT_R_HIGHKEY:
id = "SPLIT_R_HIGHKEY";
break;
case XLOG_BTREE_VACUUM:
id = "VACUUM";
break;


@ -4057,9 +4057,10 @@ comparetup_index_btree(const SortTuple *a, const SortTuple *b,
}
/*
* If key values are equal, we sort on ItemPointer. This does not affect
* validity of the finished index, but it may be useful to have index
* scans in physical order.
* If key values are equal, we sort on ItemPointer. This is required for
* btree indexes, since heap TID is treated as an implicit last key
* attribute in order to ensure that all keys in the index are physically
* unique.
*/
{
BlockNumber blk1 = ItemPointerGetBlockNumber(&tuple1->t_tid);
@ -4076,6 +4077,9 @@ comparetup_index_btree(const SortTuple *a, const SortTuple *b,
return (pos1 < pos2) ? -1 : 1;
}
/* ItemPointer values should never be equal */
Assert(false);
return 0;
}
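To make the ordering described in the new comment concrete, here is a minimal standalone sketch (not part of this patch, and not the real SortTuple/IndexTuple machinery): user-visible key values are compared first, and only on a tie does the heap TID, as block number then offset number, decide the order. All demo_*/Demo* names are hypothetical stand-ins.

#include <stdint.h>
#include <stdio.h>

typedef struct DemoIndexEntry
{
	int64_t		key;			/* stand-in for the user-visible key columns */
	uint32_t	heap_block;		/* ItemPointer block number */
	uint16_t	heap_offset;	/* ItemPointer offset number */
} DemoIndexEntry;

static int
demo_compare(const DemoIndexEntry *a, const DemoIndexEntry *b)
{
	/* user-visible key columns are compared first, as usual */
	if (a->key != b->key)
		return (a->key < b->key) ? -1 : 1;

	/* equal keys: heap TID acts as an implicit trailing key attribute */
	if (a->heap_block != b->heap_block)
		return (a->heap_block < b->heap_block) ? -1 : 1;
	if (a->heap_offset != b->heap_offset)
		return (a->heap_offset < b->heap_offset) ? -1 : 1;

	/* two index entries should never reference the same heap TID */
	return 0;
}

int
main(void)
{
	DemoIndexEntry x = {42, 7, 3};
	DemoIndexEntry y = {42, 7, 5};

	printf("%d\n", demo_compare(&x, &y));	/* -1: same key, earlier TID */
	return 0;
}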
@ -4128,6 +4132,9 @@ comparetup_index_hash(const SortTuple *a, const SortTuple *b,
return (pos1 < pos2) ? -1 : 1;
}
/* ItemPointer values should never be equal */
Assert(false);
return 0;
}


@ -112,18 +112,45 @@ typedef struct BTMetaPageData
#define BTPageGetMeta(p) \
((BTMetaPageData *) PageGetContents(p))
/*
* The current Btree version is 4. That's what you'll get when you create
* a new index.
*
* Btree version 3 was used in PostgreSQL v11. It is mostly the same as
* version 4, but heap TIDs were not part of the keyspace. Index tuples
* with duplicate keys could be stored in any order. We continue to
* support reading and writing Btree versions 2 and 3, so that they don't
* need to be immediately re-indexed at pg_upgrade. In order to get the
* new heapkeyspace semantics, however, a REINDEX is needed.
*
* Btree version 2 is mostly the same as version 3. There are two new
* fields in the metapage that were introduced in version 3. A version 2
* metapage will be automatically upgraded to version 3 on the first
* insert to it. INCLUDE indexes cannot use version 2.
*/
#define BTREE_METAPAGE 0 /* first page is meta */
#define BTREE_MAGIC 0x053162 /* magic number of btree pages */
#define BTREE_VERSION 3 /* current version number */
#define BTREE_MAGIC 0x053162 /* magic number in metapage */
#define BTREE_VERSION 4 /* current version number */
#define BTREE_MIN_VERSION 2 /* minimal supported version number */
#define BTREE_NOVAC_VERSION 3 /* minimal version with all meta fields */
/*
* Maximum size of a btree index entry, including its tuple header.
*
* We actually need to be able to fit three items on every page,
* so restrict any one item to 1/3 the per-page available space.
*
* There are rare cases where _bt_truncate() will need to enlarge
* a heap index tuple to make space for a tiebreaker heap TID
* attribute, which we account for here.
*/
#define BTMaxItemSize(page) \
MAXALIGN_DOWN((PageGetPageSize(page) - \
MAXALIGN(SizeOfPageHeaderData + \
3*sizeof(ItemIdData) + \
3*sizeof(ItemPointerData)) - \
MAXALIGN(sizeof(BTPageOpaqueData))) / 3)
#define BTMaxItemSizeNoHeapTid(page) \
MAXALIGN_DOWN((PageGetPageSize(page) - \
MAXALIGN(SizeOfPageHeaderData + 3*sizeof(ItemIdData)) - \
MAXALIGN(sizeof(BTPageOpaqueData))) / 3)
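As an illustration (not part of the patch), the arithmetic behind these two limits can be reproduced with a standalone program. The figures below are assumptions that match common builds: 8192-byte pages, 8-byte MAXALIGN, a 24-byte page header, 4-byte ItemIdData, 6-byte ItemPointerData, and 16-byte BTPageOpaqueData; under those assumptions the limits come out to 2704 and 2712 bytes.

#include <stdio.h>
#include <stddef.h>

#define DEMO_MAXALIGN(x)       (((x) + 7) & ~((size_t) 7))
#define DEMO_MAXALIGN_DOWN(x)  ((x) & ~((size_t) 7))

int
main(void)
{
	size_t		pagesize = 8192;	/* assumed BLCKSZ */
	size_t		pagehdr = 24;		/* assumed SizeOfPageHeaderData */
	size_t		itemid = 4;			/* assumed sizeof(ItemIdData) */
	size_t		itemptr = 6;		/* assumed sizeof(ItemPointerData) */
	size_t		opaque = 16;		/* assumed sizeof(BTPageOpaqueData) */

	/* mirrors BTMaxItemSize: reserves room for a heap TID per item */
	size_t		with_tid =
		DEMO_MAXALIGN_DOWN((pagesize -
							DEMO_MAXALIGN(pagehdr + 3 * itemid + 3 * itemptr) -
							DEMO_MAXALIGN(opaque)) / 3);

	/* mirrors BTMaxItemSizeNoHeapTid: the version 2/3 limit */
	size_t		without_tid =
		DEMO_MAXALIGN_DOWN((pagesize -
							DEMO_MAXALIGN(pagehdr + 3 * itemid) -
							DEMO_MAXALIGN(opaque)) / 3);

	printf("BTMaxItemSize:          %zu\n", with_tid);	/* expect 2704 */
	printf("BTMaxItemSizeNoHeapTid: %zu\n", without_tid);	/* expect 2712 */
	return 0;
}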
@ -166,12 +193,13 @@ typedef struct BTMetaPageData
/*
* Lehman and Yao's algorithm requires a ``high key'' on every non-rightmost
* page. The high key is not a data key, but gives info about what range of
* keys is supposed to be on this page. The high key on a page is required
* to be greater than or equal to any data key that appears on the page.
* If we find ourselves trying to insert a key > high key, we know we need
* to move right (this should only happen if the page was split since we
* examined the parent page).
* page. The high key is not a tuple that is used to visit the heap. It is
* a pivot tuple (see "Notes on B-Tree tuple format" below for definition).
* The high key on a page is required to be greater than or equal to any
* other key that appears on the page. If we find ourselves trying to
* insert a key that is strictly > high key, we know we need to move right
* (this should only happen if the page was split since we examined the
* parent page).
*
* Our insertion algorithm guarantees that we can use the initial least key
* on our right sibling as the high key. Once a page is created, its high
@ -187,38 +215,84 @@ typedef struct BTMetaPageData
#define P_FIRSTDATAKEY(opaque) (P_RIGHTMOST(opaque) ? P_HIKEY : P_FIRSTKEY)
/*
*
* Notes on B-Tree tuple format, and key and non-key attributes:
*
* INCLUDE B-Tree indexes have non-key attributes. These are extra
* attributes that may be returned by index-only scans, but do not influence
* the order of items in the index (formally, non-key attributes are not
* considered to be part of the key space). Non-key attributes are only
* present in leaf index tuples whose item pointers actually point to heap
* tuples. All other types of index tuples (collectively, "pivot" tuples)
* only have key attributes, since pivot tuples only ever need to represent
* how the key space is separated. In general, any B-Tree index that has
* more than one level (i.e. any index that does not just consist of a
* metapage and a single leaf root page) must have some number of pivot
* tuples, since pivot tuples are used for traversing the tree.
* tuples (non-pivot tuples). _bt_check_natts() enforces the rules
* described here.
*
* We store the number of attributes present inside pivot tuples by abusing
* their item pointer offset field, since pivot tuples never need to store a
* real offset (downlinks only need to store a block number). The offset
* field only stores the number of attributes when the INDEX_ALT_TID_MASK
* bit is set (we never assume that pivot tuples must explicitly store the
* number of attributes, and currently do not bother storing the number of
* attributes unless indnkeyatts actually differs from indnatts).
* INDEX_ALT_TID_MASK is only used for pivot tuples at present, though it's
* possible that it will be used within non-pivot tuples in the future. Do
* not assume that a tuple with INDEX_ALT_TID_MASK set must be a pivot
* tuple.
* Non-pivot tuple format:
*
* The 12 least significant offset bits are used to represent the number of
* attributes in INDEX_ALT_TID_MASK tuples, leaving 4 bits that are reserved
* for future use (BT_RESERVED_OFFSET_MASK bits). BT_N_KEYS_OFFSET_MASK should
* be large enough to store any number <= INDEX_MAX_KEYS.
* t_tid | t_info | key values | INCLUDE columns, if any
*
* t_tid points to the heap TID, which is a tiebreaker key column as of
* BTREE_VERSION 4. Currently, the INDEX_ALT_TID_MASK status bit is never
* set for non-pivot tuples.
*
* All other types of index tuples ("pivot" tuples) only have key columns,
* since pivot tuples only exist to represent how the key space is
* separated. In general, any B-Tree index that has more than one level
* (i.e. any index that does not just consist of a metapage and a single
* leaf root page) must have some number of pivot tuples, since pivot
* tuples are used for traversing the tree. Suffix truncation can omit
* trailing key columns when a new pivot is formed, which makes minus
* infinity their logical value. Since BTREE_VERSION 4 indexes treat heap
* TID as a trailing key column that ensures that all index tuples are
* physically unique, it is necessary to represent heap TID as a trailing
* key column in pivot tuples, though very often this can be truncated
* away, just like any other key column. (Actually, the heap TID is
* omitted rather than truncated, since its representation is different to
* the non-pivot representation.)
*
* Pivot tuple format:
*
* t_tid | t_info | key values | [heap TID]
*
* We store the number of columns present inside pivot tuples by abusing
* their t_tid offset field, since pivot tuples never need to store a real
* offset (downlinks only need to store a block number in t_tid). The
* offset field only stores the number of columns/attributes when the
* INDEX_ALT_TID_MASK bit is set, which doesn't count the trailing heap
* TID column sometimes stored in pivot tuples -- that's represented by
* the presence of BT_HEAP_TID_ATTR. The INDEX_ALT_TID_MASK bit in t_info
* is always set on BTREE_VERSION 4. BT_HEAP_TID_ATTR can only be set on
* BTREE_VERSION 4.
*
* In version 3 indexes, the INDEX_ALT_TID_MASK flag might not be set in
* pivot tuples. In that case, the number of key columns is implicitly
* the same as the number of key columns in the index. It is not usually
* set on version 2 indexes, which predate the introduction of INCLUDE
* indexes. (Only explicitly truncated pivot tuples explicitly represent
* the number of key columns on versions 2 and 3, whereas all pivot tuples
* are formed using truncation on version 4. A version 2 index will have
* it set for an internal page negative infinity item iff internal page
* split occurred after upgrade to Postgres 11+.)
*
* The 12 least significant offset bits from t_tid are used to represent
* the number of columns in INDEX_ALT_TID_MASK tuples, leaving 4 status
* bits (BT_RESERVED_OFFSET_MASK bits), 3 of which are reserved for
* future use. BT_N_KEYS_OFFSET_MASK should be large enough to store any
* number of columns/attributes <= INDEX_MAX_KEYS.
*
* Note well: The macros that deal with the number of attributes in tuples
* assume that a tuple with INDEX_ALT_TID_MASK set must be a pivot tuple,
* and that a tuple without INDEX_ALT_TID_MASK set must be a non-pivot
* tuple (or must have the same number of attributes as the index has
* generally in the case of !heapkeyspace indexes). They will need to be
* updated if non-pivot tuples ever get taught to use INDEX_ALT_TID_MASK
* for something else.
*/
#define INDEX_ALT_TID_MASK INDEX_AM_RESERVED_BIT
/* Item pointer offset bits */
#define BT_RESERVED_OFFSET_MASK 0xF000
#define BT_N_KEYS_OFFSET_MASK 0x0FFF
#define BT_HEAP_TID_ATTR 0x1000
/* Get/set downlink block number */
#define BTreeInnerTupleGetDownLink(itup) \
@ -241,14 +315,16 @@ typedef struct BTMetaPageData
} while(0)
/*
* Get/set number of attributes within B-tree index tuple. Asserts should be
* removed once the BT_RESERVED_OFFSET_MASK bits are used.
* Get/set number of attributes within B-tree index tuple.
*
* Note that this does not include an implicit tiebreaker heap-TID
* attribute, if any. Note also that the number of key attributes must be
* explicitly represented in all heapkeyspace pivot tuples.
*/
#define BTreeTupleGetNAtts(itup, rel) \
( \
(itup)->t_info & INDEX_ALT_TID_MASK ? \
( \
AssertMacro((ItemPointerGetOffsetNumberNoCheck(&(itup)->t_tid) & BT_RESERVED_OFFSET_MASK) == 0), \
ItemPointerGetOffsetNumberNoCheck(&(itup)->t_tid) & BT_N_KEYS_OFFSET_MASK \
) \
: \
@ -257,10 +333,34 @@ typedef struct BTMetaPageData
#define BTreeTupleSetNAtts(itup, n) \
do { \
(itup)->t_info |= INDEX_ALT_TID_MASK; \
Assert(((n) & BT_RESERVED_OFFSET_MASK) == 0); \
ItemPointerSetOffsetNumber(&(itup)->t_tid, (n) & BT_N_KEYS_OFFSET_MASK); \
} while(0)
/*
* Get tiebreaker heap TID attribute, if any. Macro works with both pivot
* and non-pivot tuples, despite differences in how heap TID is represented.
*/
#define BTreeTupleGetHeapTID(itup) \
( \
(itup)->t_info & INDEX_ALT_TID_MASK && \
(ItemPointerGetOffsetNumberNoCheck(&(itup)->t_tid) & BT_HEAP_TID_ATTR) != 0 ? \
( \
(ItemPointer) (((char *) (itup) + IndexTupleSize(itup)) - \
sizeof(ItemPointerData)) \
) \
: (itup)->t_info & INDEX_ALT_TID_MASK ? NULL : (ItemPointer) &((itup)->t_tid) \
)
/*
* Set the heap TID attribute for a tuple that uses the INDEX_ALT_TID_MASK
* representation (currently limited to pivot tuples)
*/
#define BTreeTupleSetAltHeapTID(itup) \
do { \
Assert((itup)->t_info & INDEX_ALT_TID_MASK); \
ItemPointerSetOffsetNumber(&(itup)->t_tid, \
ItemPointerGetOffsetNumberNoCheck(&(itup)->t_tid) | BT_HEAP_TID_ATTR); \
} while(0)
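For readers unfamiliar with the encoding, here is a standalone sketch (not part of the patch, and not using the real IndexTuple headers) of the offset-field bit layout that the macros above manipulate: the low 12 bits hold the key column count, and the 0x1000 bit marks the presence of a trailing heap TID. The DEMO_-prefixed names are hypothetical.

#include <stdint.h>
#include <stdio.h>

#define DEMO_BT_RESERVED_OFFSET_MASK	0xF000
#define DEMO_BT_N_KEYS_OFFSET_MASK		0x0FFF
#define DEMO_BT_HEAP_TID_ATTR			0x1000

int
main(void)
{
	uint16_t	offset = 0;

	/* like BTreeTupleSetNAtts: record 2 untruncated key columns */
	offset = 2 & DEMO_BT_N_KEYS_OFFSET_MASK;

	/* like BTreeTupleSetAltHeapTID: mark that a heap TID follows the keys */
	offset |= DEMO_BT_HEAP_TID_ATTR;

	/* the count excludes the heap TID, which is flagged separately */
	printf("natts   = %u\n", offset & DEMO_BT_N_KEYS_OFFSET_MASK);		/* 2 */
	printf("has TID = %d\n", (offset & DEMO_BT_HEAP_TID_ATTR) != 0);	/* 1 */
	return 0;
}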
/*
* Operator strategy numbers for B-tree have been moved to access/stratnum.h,
* because many places need to use them in ScanKeyInit() calls.
@ -325,20 +425,42 @@ typedef BTStackData *BTStack;
* be confused with a search scankey). It's used to descend a B-Tree using
* _bt_search.
*
* heapkeyspace indicates if we expect all keys in the index to be physically
* unique because heap TID is used as a tiebreaker attribute, and if the index
* may have truncated key attributes in pivot tuples. This is actually a property
* of the index relation itself (not an indexscan). heapkeyspace indexes are
* indexes whose version is >= version 4. It's convenient to keep this close
* by, rather than accessing the metapage repeatedly.
*
* When nextkey is false (the usual case), _bt_search and _bt_binsrch will
* locate the first item >= scankey. When nextkey is true, they will locate
* the first item > scan key.
*
* scankeys is an array of scan key entries for attributes that are compared.
* keysz is the size of the array. During insertion, there must be a scan key
* for every attribute, but when starting a regular index scan some can be
* omitted. The array is used as a flexible array member, though it's sized
* in a way that makes it possible to use stack allocations. See
* nbtree/README for full details.
* pivotsearch is set to true by callers that want to re-find a leaf page
* using a scankey built from a leaf page's high key. Most callers set this
* to false.
*
* scantid is the heap TID that is used as a final tiebreaker attribute. It
* is set to NULL when an index scan doesn't need to find a position for a
* specific physical tuple. Must be set when inserting new tuples into
* heapkeyspace indexes, since every tuple in the tree unambiguously belongs
* in one exact position (it's never set with !heapkeyspace indexes, though).
* Despite the representational difference, nbtree search code considers
* scantid to be just another insertion scankey attribute.
*
* scankeys is an array of scan key entries for attributes that are compared
* before scantid (user-visible attributes). keysz is the size of the array.
* During insertion, there must be a scan key for every attribute, but when
* starting a regular index scan some can be omitted. The array is used as a
* flexible array member, though it's sized in a way that makes it possible to
* use stack allocations. See nbtree/README for full details.
*/
typedef struct BTScanInsertData
{
bool heapkeyspace;
bool nextkey;
bool pivotsearch;
ItemPointer scantid; /* tiebreaker for scankeys */
int keysz; /* Size of scankeys array */
ScanKeyData scankeys[INDEX_MAX_KEYS]; /* Must appear last */
} BTScanInsertData;
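A minimal sketch (not part of the patch) of the difference between the two cases described above, using a hypothetical stand-in for BTScanInsertData rather than the real struct or _bt_mkscankey: an insertion into a heapkeyspace index carries a scantid, while an ordinary index scan leaves it NULL.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct DemoTid
{
	uint32_t	block;
	uint16_t	offset;
} DemoTid;

typedef struct DemoScanInsert
{
	bool		heapkeyspace;	/* version >= 4 index? */
	bool		nextkey;
	bool		pivotsearch;
	DemoTid    *scantid;		/* tiebreaker heap TID, or NULL */
	int			keysz;			/* number of user-attribute scan keys */
} DemoScanInsert;

int
main(void)
{
	DemoTid		newtid = {7, 3};

	/* insertion into a heapkeyspace index: position is fully determined */
	DemoScanInsert insert_key = {true, false, false, &newtid, 2};

	/* ordinary index scan: no specific physical tuple to re-find */
	DemoScanInsert scan_key = {true, false, false, NULL, 1};

	printf("insert uses scantid: %d\n", insert_key.scantid != NULL);
	printf("scan   uses scantid: %d\n", scan_key.scantid != NULL);
	return 0;
}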
@ -599,6 +721,7 @@ extern void _bt_upgrademetapage(Page page);
extern Buffer _bt_getroot(Relation rel, int access);
extern Buffer _bt_gettrueroot(Relation rel);
extern int _bt_getrootheight(Relation rel);
extern bool _bt_heapkeyspace(Relation rel);
extern void _bt_checkpage(Relation rel, Buffer buf);
extern Buffer _bt_getbuf(Relation rel, BlockNumber blkno, int access);
extern Buffer _bt_relandgetbuf(Relation rel, Buffer obuf,
@ -652,8 +775,12 @@ extern bytea *btoptions(Datum reloptions, bool validate);
extern bool btproperty(Oid index_oid, int attno,
IndexAMProperty prop, const char *propname,
bool *res, bool *isnull);
extern IndexTuple _bt_nonkey_truncate(Relation rel, IndexTuple itup);
extern bool _bt_check_natts(Relation rel, Page page, OffsetNumber offnum);
extern IndexTuple _bt_truncate(Relation rel, IndexTuple lastleft,
IndexTuple firstright, BTScanInsert itup_key);
extern bool _bt_check_natts(Relation rel, bool heapkeyspace, Page page,
OffsetNumber offnum);
extern void _bt_check_third_page(Relation rel, Relation heap,
bool needheaptidspace, Page page, IndexTuple newtup);
/*
* prototypes for functions in nbtvalidate.c


@ -28,8 +28,7 @@
#define XLOG_BTREE_INSERT_META 0x20 /* same, plus update metapage */
#define XLOG_BTREE_SPLIT_L 0x30 /* add index tuple with split */
#define XLOG_BTREE_SPLIT_R 0x40 /* as above, new item on right */
#define XLOG_BTREE_SPLIT_L_HIGHKEY 0x50 /* as above, include truncated highkey */
#define XLOG_BTREE_SPLIT_R_HIGHKEY 0x60 /* as above, include truncated highkey */
/* 0x50 and 0x60 are unused */
#define XLOG_BTREE_DELETE 0x70 /* delete leaf index tuples for a page */
#define XLOG_BTREE_UNLINK_PAGE 0x80 /* delete a half-dead page */
#define XLOG_BTREE_UNLINK_PAGE_META 0x90 /* same, and update metapage */
@ -47,6 +46,7 @@
*/
typedef struct xl_btree_metadata
{
uint32 version;
BlockNumber root;
uint32 level;
BlockNumber fastroot;
@ -80,27 +80,30 @@ typedef struct xl_btree_insert
* whole page image. The left page, however, is handled in the normal
* incremental-update fashion.
*
* Note: the four XLOG_BTREE_SPLIT xl_info codes all use this data record.
* The _L and _R variants indicate whether the inserted tuple went into the
* left or right split page (and thus, whether newitemoff and the new item
* are stored or not). The _HIGHKEY variants indicate that we've explicitly
* logged the left page's high key value; otherwise redo should use the right
* page's leftmost key as the left page's high key. _HIGHKEY is specified for
* internal pages, where the right page's leftmost key is suppressed, and for
* leaf pages of covering indexes, where the high key has non-key attributes
* truncated.
* Note: XLOG_BTREE_SPLIT_L and XLOG_BTREE_SPLIT_R share this data record.
* There are two variants to indicate whether the inserted tuple went into the
* left or right split page (and thus, whether newitemoff and the new item are
* stored or not). We always log the left page high key because suffix
* truncation can generate a new leaf high key using user-defined code. This
* is also necessary on internal pages, since the first right item that the
* left page's high key was based on will have been truncated to zero
* attributes in the right page (the original is unavailable from the right
* page).
*
* Backup Blk 0: original page / new left page
*
* The left page's data portion contains the new item, if it's the _L variant.
* (In the _R variants, the new item is one of the right page's tuples.)
* If level > 0, an IndexTuple representing the HIKEY of the left page
* follows. We don't need this on leaf pages, because it's the same as the
* leftmost key in the new right page.
* An IndexTuple representing the high key of the left page must follow with
* either variant.
*
* Backup Blk 1: new right page
*
* The right page's data portion contains the right page's tuples in the
* form used by _bt_restore_page.
* The right page's data portion contains the right page's tuples in the form
* used by _bt_restore_page. This includes the new item, if it's the _R
* variant. The right page's tuples also include the right page's high key
* with either variant (moved from the left/original page during the split),
* unless the split happened to be of the rightmost page on its level, where
* there is no high key for the new right page.
*
* Backup Blk 2: next block (orig page's rightlink), if any
* Backup Blk 3: child's left sibling, if non-leaf split


@ -199,28 +199,22 @@ reset enable_seqscan;
reset enable_indexscan;
reset enable_bitmapscan;
--
-- Test B-tree page deletion. In particular, deleting a non-leaf page.
-- Test B-tree fast path (cache rightmost leaf page) optimization.
--
-- First create a tree that's at least four levels deep. The text inserted
-- is long and poorly compressible. That way only a few index tuples fit on
-- each page, allowing us to get a tall tree with fewer pages.
-- First create a tree that's at least three levels deep (i.e. has one level
-- between the root and leaf levels). The text inserted is long. It won't be
-- compressed because we use plain storage in the table. Only a few index
-- tuples fit on each internal page, allowing us to get a tall tree with few
-- pages. (A tall tree is required to trigger caching.)
--
-- The text column must be the leading column in the index, since suffix
-- truncation would otherwise truncate tuples on internal pages, leaving us
-- with a short tree.
create table btree_tall_tbl(id int4, t text);
create index btree_tall_idx on btree_tall_tbl (id, t) with (fillfactor = 10);
insert into btree_tall_tbl
select g, g::text || '_' ||
(select string_agg(md5(i::text), '_') from generate_series(1, 50) i)
from generate_series(1, 100) g;
-- Delete most entries, and vacuum. This causes page deletions.
delete from btree_tall_tbl where id < 950;
vacuum btree_tall_tbl;
--
-- Test B-tree insertion with a metapage update (XLOG_BTREE_INSERT_META
-- WAL record type). This happens when a "fast root" page is split.
--
-- The vacuum above should've turned the leaf page into a fast root. We just
-- need to insert some rows to cause the fast root page to split.
insert into btree_tall_tbl (id, t)
select g, repeat('x', 100) from generate_series(1, 500) g;
alter table btree_tall_tbl alter COLUMN t set storage plain;
create index btree_tall_idx on btree_tall_tbl (t, id) with (fillfactor = 10);
insert into btree_tall_tbl select g, repeat('x', 250)
from generate_series(1, 130) g;
--
-- Test vacuum_cleanup_index_scale_factor
--


@ -3225,11 +3225,22 @@ explain (costs off)
CREATE TABLE delete_test_table (a bigint, b bigint, c bigint, d bigint);
INSERT INTO delete_test_table SELECT i, 1, 2, 3 FROM generate_series(1,80000) i;
ALTER TABLE delete_test_table ADD PRIMARY KEY (a,b,c,d);
-- Delete many entries, and vacuum. This causes page deletions.
DELETE FROM delete_test_table WHERE a > 40000;
VACUUM delete_test_table;
DELETE FROM delete_test_table WHERE a > 10;
-- Delete most entries, and vacuum, deleting internal pages and creating "fast
-- root"
DELETE FROM delete_test_table WHERE a < 79990;
VACUUM delete_test_table;
--
-- Test B-tree insertion with a metapage update (XLOG_BTREE_INSERT_META
-- WAL record type). This happens when a "fast root" page is split. This
-- also creates coverage for nbtree FSM page recycling.
--
-- The vacuum above should've turned the leaf page into a fast root. We just
-- need to insert some rows to cause the fast root page to split.
INSERT INTO delete_test_table SELECT i, 1, 2, 3 FROM generate_series(1,1000) i;
--
-- REINDEX (VERBOSE)
--
CREATE TABLE reindex_verbose(id integer primary key);


@ -128,9 +128,9 @@ FROM pg_type JOIN pg_class c ON typrelid = c.oid WHERE typname = 'deptest_t';
-- doesn't work: grant still exists
DROP USER regress_dep_user1;
ERROR: role "regress_dep_user1" cannot be dropped because some objects depend on it
DETAIL: owner of default privileges on new relations belonging to role regress_dep_user1 in schema deptest
DETAIL: privileges for table deptest1
privileges for database regression
privileges for table deptest1
owner of default privileges on new relations belonging to role regress_dep_user1 in schema deptest
DROP OWNED BY regress_dep_user1;
DROP USER regress_dep_user1;
\set VERBOSITY terse


@ -187,9 +187,9 @@ ERROR: event trigger "regress_event_trigger" does not exist
-- should fail, regress_evt_user owns some objects
drop role regress_evt_user;
ERROR: role "regress_evt_user" cannot be dropped because some objects depend on it
DETAIL: owner of event trigger regress_event_trigger3
DETAIL: owner of user mapping for regress_evt_user on server useless_server
owner of default privileges on new relations belonging to role regress_evt_user
owner of user mapping for regress_evt_user on server useless_server
owner of event trigger regress_event_trigger3
-- cleanup before next test
-- these are all OK; the second one should emit a NOTICE
drop event trigger if exists regress_event_trigger2;


@ -441,8 +441,8 @@ ALTER SERVER s1 OWNER TO regress_test_indirect;
RESET ROLE;
DROP ROLE regress_test_indirect; -- ERROR
ERROR: role "regress_test_indirect" cannot be dropped because some objects depend on it
DETAIL: owner of server s1
privileges for foreign-data wrapper foo
DETAIL: privileges for foreign-data wrapper foo
owner of server s1
\des+
List of foreign servers
Name | Owner | Foreign-data wrapper | Access privileges | Type | Version | FDW options | Description
@ -1995,16 +1995,13 @@ ERROR: cannot attach a permanent relation as partition of temporary relation "t
DROP FOREIGN TABLE foreign_part;
DROP TABLE temp_parted;
-- Cleanup
\set VERBOSITY terse
DROP SCHEMA foreign_schema CASCADE;
DROP ROLE regress_test_role; -- ERROR
ERROR: role "regress_test_role" cannot be dropped because some objects depend on it
DETAIL: privileges for server s4
privileges for foreign-data wrapper foo
owner of user mapping for regress_test_role on server s6
DROP SERVER t1 CASCADE;
NOTICE: drop cascades to user mapping for public on server t1
DROP USER MAPPING FOR regress_test_role SERVER s6;
\set VERBOSITY terse
DROP FOREIGN DATA WRAPPER foo CASCADE;
NOTICE: drop cascades to 5 other objects
DROP SERVER s8 CASCADE;


@ -3503,8 +3503,8 @@ SELECT refclassid::regclass, deptype
SAVEPOINT q;
DROP ROLE regress_rls_eve; --fails due to dependency on POLICY p
ERROR: role "regress_rls_eve" cannot be dropped because some objects depend on it
DETAIL: target of policy p on table tbl1
privileges for table tbl1
DETAIL: privileges for table tbl1
target of policy p on table tbl1
ROLLBACK TO q;
ALTER POLICY p ON tbl1 TO regress_rls_frank USING (true);
SAVEPOINT q;


@ -84,32 +84,23 @@ reset enable_indexscan;
reset enable_bitmapscan;
--
-- Test B-tree page deletion. In particular, deleting a non-leaf page.
-- Test B-tree fast path (cache rightmost leaf page) optimization.
--
-- First create a tree that's at least four levels deep. The text inserted
-- is long and poorly compressible. That way only a few index tuples fit on
-- each page, allowing us to get a tall tree with fewer pages.
-- First create a tree that's at least three levels deep (i.e. has one level
-- between the root and leaf levels). The text inserted is long. It won't be
-- compressed because we use plain storage in the table. Only a few index
-- tuples fit on each internal page, allowing us to get a tall tree with few
-- pages. (A tall tree is required to trigger caching.)
--
-- The text column must be the leading column in the index, since suffix
-- truncation would otherwise truncate tuples on internal pages, leaving us
-- with a short tree.
create table btree_tall_tbl(id int4, t text);
create index btree_tall_idx on btree_tall_tbl (id, t) with (fillfactor = 10);
insert into btree_tall_tbl
select g, g::text || '_' ||
(select string_agg(md5(i::text), '_') from generate_series(1, 50) i)
from generate_series(1, 100) g;
-- Delete most entries, and vacuum. This causes page deletions.
delete from btree_tall_tbl where id < 950;
vacuum btree_tall_tbl;
--
-- Test B-tree insertion with a metapage update (XLOG_BTREE_INSERT_META
-- WAL record type). This happens when a "fast root" page is split.
--
-- The vacuum above should've turned the leaf page into a fast root. We just
-- need to insert some rows to cause the fast root page to split.
insert into btree_tall_tbl (id, t)
select g, repeat('x', 100) from generate_series(1, 500) g;
alter table btree_tall_tbl alter COLUMN t set storage plain;
create index btree_tall_idx on btree_tall_tbl (t, id) with (fillfactor = 10);
insert into btree_tall_tbl select g, repeat('x', 250)
from generate_series(1, 130) g;
--
-- Test vacuum_cleanup_index_scale_factor


@ -1146,11 +1146,23 @@ explain (costs off)
CREATE TABLE delete_test_table (a bigint, b bigint, c bigint, d bigint);
INSERT INTO delete_test_table SELECT i, 1, 2, 3 FROM generate_series(1,80000) i;
ALTER TABLE delete_test_table ADD PRIMARY KEY (a,b,c,d);
-- Delete many entries, and vacuum. This causes page deletions.
DELETE FROM delete_test_table WHERE a > 40000;
VACUUM delete_test_table;
DELETE FROM delete_test_table WHERE a > 10;
-- Delete most entries, and vacuum, deleting internal pages and creating "fast
-- root"
DELETE FROM delete_test_table WHERE a < 79990;
VACUUM delete_test_table;
--
-- Test B-tree insertion with a metapage update (XLOG_BTREE_INSERT_META
-- WAL record type). This happens when a "fast root" page is split. This
-- also creates coverage for nbtree FSM page recycling.
--
-- The vacuum above should've turned the leaf page into a fast root. We just
-- need to insert some rows to cause the fast root page to split.
INSERT INTO delete_test_table SELECT i, 1, 2, 3 FROM generate_series(1,1000) i;
--
-- REINDEX (VERBOSE)
--


@ -805,11 +805,11 @@ DROP FOREIGN TABLE foreign_part;
DROP TABLE temp_parted;
-- Cleanup
\set VERBOSITY terse
DROP SCHEMA foreign_schema CASCADE;
DROP ROLE regress_test_role; -- ERROR
DROP SERVER t1 CASCADE;
DROP USER MAPPING FOR regress_test_role SERVER s6;
\set VERBOSITY terse
DROP FOREIGN DATA WRAPPER foo CASCADE;
DROP SERVER s8 CASCADE;
\set VERBOSITY default