Further twiddling of nodeHash.c hashtable sizing calculation. · yinrcode/postgres@d637a89

Commit d637a89

Further twiddling of nodeHash.c hashtable sizing calculation.
On reflection, the submitted patch didn't really work to prevent the request size from exceeding MaxAllocSize, because we'd still happily round nbuckets up to the next power of 2 after we'd limited it to max_pointers. The simplest way to enforce the limit correctly is to round max_pointers down to a power of 2 when it isn't one already. (Note that the constraint to INT_MAX / 2, if it were doing anything useful at all, is properly applied after that.)
1 parent 2647b24 · commit d637a89
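
To make the failure mode concrete, here is a minimal standalone sketch (not backend code): MAX_ALLOC_SIZE, PTR_SIZE, and ceil_log2() are local stand-ins for MaxAllocSize, sizeof(HashJoinTuple), and my_log2(). It only illustrates how clamping to a non-power-of-2 max_pointers and then rounding nbuckets up to the next power of 2 can overshoot the limit, and how rounding max_pointers itself down to a power of 2 first closes that gap.

	#include <stdio.h>

	#define MAX_ALLOC_SIZE	0x3fffffffL	/* stand-in for MaxAllocSize (1 GB - 1) */
	#define PTR_SIZE	8L		/* stand-in for sizeof(HashJoinTuple) */

	/* smallest i such that (1 << i) >= num, i.e. ceil(log2(num)); stand-in for my_log2() */
	static int
	ceil_log2(long num)
	{
		int		i = 0;

		while ((1L << i) < num)
			i++;
		return i;
	}

	int
	main(void)
	{
		long	max_pointers = MAX_ALLOC_SIZE / PTR_SIZE;	/* 2^27 - 1: not a power of 2 */
		long	nbuckets;
		long	mppow2;

		/* Old behavior: clamp nbuckets to max_pointers, then round up to a power of 2 */
		nbuckets = max_pointers;
		nbuckets = 1L << ceil_log2(nbuckets);
		printf("old: nbuckets * PTR_SIZE = %ld, limit = %ld\n",
			   nbuckets * PTR_SIZE, (long) MAX_ALLOC_SIZE);	/* exceeds the limit */

		/* New behavior: round max_pointers itself down to a power of 2 first */
		mppow2 = 1L << ceil_log2(max_pointers);
		if (max_pointers != mppow2)
			max_pointers = mppow2 / 2;
		nbuckets = 1L << ceil_log2(max_pointers);
		printf("new: nbuckets * PTR_SIZE = %ld, limit = %ld\n",
			   nbuckets * PTR_SIZE, (long) MAX_ALLOC_SIZE);	/* stays under the limit */
		return 0;
	}

With these stand-in values the "old" product lands just above the limit while the "new" one stays well below it, which is exactly the gap this commit closes.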

File tree

1 file changed: +7 −1 lines changed


src/backend/executor/nodeHash.c

Lines changed: 7 additions & 1 deletion
@@ -396,6 +396,7 @@ ExecChooseHashTableSize(double ntuples, int tupwidth, bool useskew,
 	long		hash_table_bytes;
 	long		skew_table_bytes;
 	long		max_pointers;
+	long		mppow2;
 	int			nbatch;
 	int			nbuckets;
 	int			i;
@@ -463,7 +464,12 @@ ExecChooseHashTableSize(double ntuples, int tupwidth, bool useskew,
 	 */
 	max_pointers = (work_mem * 1024L) / sizeof(HashJoinTuple);
 	max_pointers = Min(max_pointers, MaxAllocSize / sizeof(HashJoinTuple));
-	/* also ensure we avoid integer overflow in nbatch and nbuckets */
+	/* If max_pointers isn't a power of 2, must round it down to one */
+	mppow2 = 1L << my_log2(max_pointers);
+	if (max_pointers != mppow2)
+		max_pointers = mppow2 / 2;
+
+	/* Also ensure we avoid integer overflow in nbatch and nbuckets */
 	/* (this step is redundant given the current value of MaxAllocSize) */
 	max_pointers = Min(max_pointers, INT_MAX / 2);
 
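
As a reading aid for the new lines above, a hypothetical round_down_pow2() helper below isolates the rounding idiom; ceil_log2() is again a local stand-in for the backend's my_log2(), which returns ceil(log2(x)).

	/* Hypothetical helper isolating the patch's rounding step (not backend code) */
	#include <stdio.h>

	static int
	ceil_log2(long num)		/* stand-in for my_log2() */
	{
		int		i = 0;

		while ((1L << i) < num)
			i++;
		return i;
	}

	static long
	round_down_pow2(long max_pointers)
	{
		long	mppow2 = 1L << ceil_log2(max_pointers);	/* next power of 2 >= input */

		/* If max_pointers isn't a power of 2, round it down to one */
		if (max_pointers != mppow2)
			max_pointers = mppow2 / 2;
		return max_pointers;
	}

	int
	main(void)
	{
		/* 100 -> 64 (largest power of 2 below it); 128 -> 128 (already a power of 2) */
		printf("%ld %ld\n", round_down_pow2(100), round_down_pow2(128));
		return 0;
	}

Because the subsequent nbuckets computation is limited to this adjusted max_pointers before being rounded up to a power of 2, the rounded value can no longer exceed max_pointers, and so cannot push the request size past MaxAllocSize.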
