Fix incorrect calculation in BlockRefTableEntryGetBlocks.

The previous formula was incorrect in the case where the function's
nblocks argument was a multiple of BLOCKS_PER_CHUNK, which happens
whenever a relation segment file is exactly 512MB or exactly 1GB in
length. In such cases, the formula would calculate a stop_offset of
0 rather than 65536, resulting in modified blocks in the second half
of a 1GB file, or all the modified blocks in a 512MB file, being
omitted from the incremental backup.

Reported off-list by Tomas Vondra and Jakub Wartak.

Discussion: http://postgr.es/m/CA+TgmoYwy_KHp1-5GYNmVa=zdeJWhNH1T0SBmEuvqQNJEHj1Lw@mail.gmail.com
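
To see the boundary case concretely: with 8kB blocks, a 512MB segment is exactly 65536 blocks, so stop_blkno lands on a multiple of BLOCKS_PER_CHUNK and the old modulo wraps to 0 instead of covering the full chunk. Below is a minimal standalone sketch of that arithmetic (not the PostgreSQL function itself; only the variable names mirror the patch):

#include <assert.h>
#include <stdio.h>

#define BLOCKS_PER_CHUNK (1 << 16)	/* 65536, as in blkreftable.c */

int
main(void)
{
	/* A 512MB segment of 8kB blocks spans exactly one full chunk. */
	unsigned int stop_blkno = 65536;	/* one past the last block in range */
	unsigned int chunkno = 0;			/* the last chunk in the range */

	/* Old formula: wraps to 0 whenever stop_blkno is a chunk multiple. */
	unsigned int old_stop_offset = stop_blkno % BLOCKS_PER_CHUNK;

	/* Fixed formula: distance from the chunk's start, which may be the
	 * full chunk width. */
	unsigned int new_stop_offset = stop_blkno - (chunkno * BLOCKS_PER_CHUNK);

	assert(new_stop_offset <= BLOCKS_PER_CHUNK);
	printf("old formula: %u, fixed formula: %u\n",
		   old_stop_offset, new_stop_offset);	/* prints: old formula: 0, fixed formula: 65536 */
	return 0;
}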
Robert Haas 2024-04-05 13:39:29 -04:00
parent 079d94ab34
commit 55a5ee30cd
1 changed file with 5 additions and 1 deletion


@@ -410,7 +410,11 @@ BlockRefTableEntryGetBlocks(BlockRefTableEntry *entry,
 		if (chunkno == start_chunkno)
 			start_offset = start_blkno % BLOCKS_PER_CHUNK;
 		if (chunkno == stop_chunkno - 1)
-			stop_offset = stop_blkno % BLOCKS_PER_CHUNK;
+		{
+			Assert(stop_blkno > chunkno * BLOCKS_PER_CHUNK);
+			stop_offset = stop_blkno - (chunkno * BLOCKS_PER_CHUNK);
+			Assert(stop_offset <= BLOCKS_PER_CHUNK);
+		}
 
 		/*
 		 * Handling differs depending on whether this is an array of offsets
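
Note the design choice in the fix: for the final chunk, stop_offset may legitimately equal BLOCKS_PER_CHUNK, meaning the entire chunk is in range, and that is a value the modulo form can never produce. The two Asserts pin the result to the interval (0, BLOCKS_PER_CHUNK].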