parallel-checkout: send the new object_id algo field to the workers

An object_id storing a SHA-1 name has some unused bytes at the end of
the hash array. Since these bytes are not used, they are usually not
initialized to any value either. However, at
parallel-checkout.c:send_one_item(), the object_id of a cache entry is
copied into a buffer which is later sent to a checkout worker through a
pipe write(). This makes Valgrind complain about passing uninitialized
bytes to a syscall. The worker won't use these uninitialized bytes
either, but the warning could confuse someone trying to debug this code.
So, instead of using oidcpy(), send_one_item() uses hashcpy() to copy
only the used/initialized bytes of the object_id, leaving the remaining
part zeroed.
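
To make the Valgrind warning concrete, here is a small standalone
sketch of the same pattern, independent of Git's code: toy_oid, the
size constants, and main() are made-up stand-ins, not anything from
hash.h. A hash buffer sized for the largest algorithm is only partially
filled, copied in full (much as oidcpy() copies the whole hash array),
and then handed to write(); under valgrind, the syscall is flagged for
reading uninitialized bytes.

/*
 * Illustrative only: toy_oid mimics the shape of struct object_id's
 * hash array; none of these names exist in Git.
 */
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define TOY_MAX_RAWSZ 32	/* room for the largest hash (SHA-256) */
#define TOY_SHA1_RAWSZ 20	/* only 20 bytes are used for SHA-1 */

struct toy_oid {
	unsigned char hash[TOY_MAX_RAWSZ];
};

int main(void)
{
	int fds[2];
	struct toy_oid *src = malloc(sizeof(*src));	/* tail stays uninitialized */
	struct toy_oid dst;

	if (!src || pipe(fds))
		return 1;

	memset(src->hash, 0xaa, TOY_SHA1_RAWSZ);	/* pretend SHA-1 name */
	memcpy(&dst, src, sizeof(dst));			/* full copy, uninitialized tail included */

	/* Valgrind: "Syscall param write(buf) points to uninitialised byte(s)" */
	if (write(fds[1], &dst, sizeof(dst)) < 0)
		return 1;

	free(src);
	return 0;
}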

However, since cf0983213c ("hash: add an algo member to struct
object_id", 2021-04-26), using hashcpy() is no longer sufficient here as
it won't copy the new algo field from the object_id. Let's add and use a
new function that meets both of our requirements: it copies all the
important object_id data while still avoiding the uninitialized bytes,
by padding the end of the hash array in the destination object_id. With
this change, we also no longer need the destination buffer in
send_one_item() to be initialized with zeros, so let's switch from
xcalloc() to xmalloc() to make this clear.
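
As a quick sanity check of these semantics, the following
self-contained sketch models the padding copy; toy_oid, the size
constants, and the hard-coded algo value are illustrative stand-ins for
struct object_id, GIT_MAX_RAWSZ, and the hash_algos[] lookup, not the
actual hash.h code (which appears in the diff below):

#include <assert.h>
#include <string.h>

#define TOY_MAX_RAWSZ 32
#define TOY_SHA1_RAWSZ 20

struct toy_oid {
	unsigned char hash[TOY_MAX_RAWSZ];
	int algo;
};

/* Copy the used bytes, zero the rest, and carry the algo over. */
static void toy_oidcpy_with_padding(struct toy_oid *dst,
				    const struct toy_oid *src, size_t rawsz)
{
	memcpy(dst->hash, src->hash, rawsz);
	memset(dst->hash + rawsz, 0, TOY_MAX_RAWSZ - rawsz);
	dst->algo = src->algo;
}

int main(void)
{
	struct toy_oid src, dst;
	size_t i;

	memset(src.hash, 0xaa, TOY_SHA1_RAWSZ);	/* SHA-1 name; tail left alone */
	src.algo = 1;

	toy_oidcpy_with_padding(&dst, &src, TOY_SHA1_RAWSZ);

	assert(dst.algo == 1);			/* algo field is preserved */
	for (i = TOY_SHA1_RAWSZ; i < TOY_MAX_RAWSZ; i++)
		assert(dst.hash[i] == 0);	/* dst has no uninitialized bytes */
	return 0;
}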

Signed-off-by: Matheus Tavares <matheus.bernardino@usp.br>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
commit 3d20ed27b8 (parent bf949ade81)
Author:    Matheus Tavares <matheus.bernardino@usp.br>
Date:      2021-05-17 16:49:03 -03:00
Committer: Junio C Hamano
2 changed files with 22 additions and 7 deletions

hash.h

@@ -263,6 +263,22 @@ static inline void oidcpy(struct object_id *dst, const struct object_id *src)
 	dst->algo = src->algo;
 }
 
+/* Like oidcpy() but zero-pads the unused bytes in dst's hash array. */
+static inline void oidcpy_with_padding(struct object_id *dst,
+				       struct object_id *src)
+{
+	size_t hashsz;
+
+	if (!src->algo)
+		hashsz = the_hash_algo->rawsz;
+	else
+		hashsz = hash_algos[src->algo].rawsz;
+
+	memcpy(dst->hash, src->hash, hashsz);
+	memset(dst->hash + hashsz, 0, GIT_MAX_RAWSZ - hashsz);
+	dst->algo = src->algo;
+}
+
 static inline struct object_id *oiddup(const struct object_id *src)
 {
 	struct object_id *dst = xmalloc(sizeof(struct object_id));

parallel-checkout.c

@@ -411,7 +411,7 @@ static void send_one_item(int fd, struct parallel_checkout_item *pc_item)
 	len_data = sizeof(struct pc_item_fixed_portion) + name_len +
 		   working_tree_encoding_len;
 
-	data = xcalloc(1, len_data);
+	data = xmalloc(len_data);
 
 	fixed_portion = (struct pc_item_fixed_portion *)data;
 	fixed_portion->id = pc_item->id;
@@ -421,13 +421,12 @@ static void send_one_item(int fd, struct parallel_checkout_item *pc_item)
 	fixed_portion->name_len = name_len;
 	fixed_portion->working_tree_encoding_len = working_tree_encoding_len;
 	/*
-	 * We use hashcpy() instead of oidcpy() because the hash[] positions
-	 * after `the_hash_algo->rawsz` might not be initialized. And Valgrind
-	 * would complain about passing uninitialized bytes to a syscall
-	 * (write(2)). There is no real harm in this case, but the warning could
-	 * hinder the detection of actual errors.
+	 * We pad the unused bytes in the hash array because, otherwise,
+	 * Valgrind would complain about passing uninitialized bytes to a
+	 * write() syscall. The warning doesn't represent any real risk here,
+	 * but it could hinder the detection of actual errors.
 	 */
-	hashcpy(fixed_portion->oid.hash, pc_item->ce->oid.hash);
+	oidcpy_with_padding(&fixed_portion->oid, &pc_item->ce->oid);
 
 	variant = data + sizeof(*fixed_portion);
 	if (working_tree_encoding_len) {