Further twiddling of nodeHash.c hashtable sizing calculation. · highb/postgres@d84cc40 · GitHub

Commit d84cc40
Further twiddling of nodeHash.c hashtable sizing calculation.
On reflection, the submitted patch didn't really work to prevent the request size from exceeding MaxAllocSize, because we'd happily round nbuckets up to the next power of 2 after we'd limited it to max_pointers. The simplest way to enforce the limit correctly is to round max_pointers down to a power of 2 when it isn't one already. (Note that the constraint to INT_MAX / 2, if it were doing anything useful at all, is properly applied after that.)
1 parent a8168fb commit d84cc40
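
To make the arithmetic concrete, here is a minimal standalone C sketch (not part of the commit; MAX_ALLOC_SIZE and PTR_SIZE are illustrative stand-ins for MaxAllocSize and sizeof(HashJoinTuple), and the helpers are not PostgreSQL code). It shows how rounding the bucket count up to the next power of 2 after clamping to max_pointers can push the request past the allocation limit, while rounding max_pointers down to a power of 2 first keeps it under.

    #include <stdio.h>

    /*
     * Illustrative sketch only.  MAX_ALLOC_SIZE stands in for MaxAllocSize
     * (0x3fffffff) and PTR_SIZE for sizeof(HashJoinTuple) on a 64-bit build;
     * both are assumptions chosen to keep the arithmetic easy to follow.
     */
    #define MAX_ALLOC_SIZE 0x3fffffffL
    #define PTR_SIZE 8L

    /* Round n up to the next power of 2 (what later happens to nbuckets). */
    static long
    round_up_pow2(long n)
    {
        long p = 1;

        while (p < n)
            p <<= 1;
        return p;
    }

    /* Round n down to a power of 2 (what the patch now does to max_pointers). */
    static long
    round_down_pow2(long n)
    {
        long p = 1;

        while (p * 2 <= n)
            p <<= 1;
        return p;
    }

    int
    main(void)
    {
        long max_pointers = MAX_ALLOC_SIZE / PTR_SIZE;  /* 134217727, not a power of 2 */
        long old_request = round_up_pow2(max_pointers) * PTR_SIZE;
        long new_request = round_up_pow2(round_down_pow2(max_pointers)) * PTR_SIZE;

        printf("limit        %ld bytes\n", (long) MAX_ALLOC_SIZE);
        printf("old request  %ld bytes (exceeds the limit)\n", old_request);
        printf("new request  %ld bytes (within the limit)\n", new_request);
        return 0;
    }

With these assumed numbers, the old behaviour requests 1073741824 bytes against a 1073741823-byte limit, while rounding max_pointers down first keeps the request at 536870912 bytes.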

File tree

1 file changed (+7, -1 lines)

src/backend/executor/nodeHash.c

Lines changed: 7 additions & 1 deletion
@@ -398,6 +398,7 @@ ExecChooseHashTableSize(double ntuples, int tupwidth, bool useskew,
 	long		hash_table_bytes;
 	long		skew_table_bytes;
 	long		max_pointers;
+	long		mppow2;
 	int			nbatch;
 	int			nbuckets;
 	int			i;
@@ -465,7 +466,12 @@ ExecChooseHashTableSize(double ntuples, int tupwidth, bool useskew,
 	 */
 	max_pointers = (work_mem * 1024L) / sizeof(HashJoinTuple);
 	max_pointers = Min(max_pointers, MaxAllocSize / sizeof(HashJoinTuple));
-	/* also ensure we avoid integer overflow in nbatch and nbuckets */
+	/* If max_pointers isn't a power of 2, must round it down to one */
+	mppow2 = 1L << my_log2(max_pointers);
+	if (max_pointers != mppow2)
+		max_pointers = mppow2 / 2;
+
+	/* Also ensure we avoid integer overflow in nbatch and nbuckets */
 	/* (this step is redundant given the current value of MaxAllocSize) */
 	max_pointers = Min(max_pointers, INT_MAX / 2);
 
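
For readers unfamiliar with my_log2(): it returns the ceiling of log2 of its argument, so 1L << my_log2(max_pointers) is the smallest power of 2 greater than or equal to max_pointers, and halving it when the two differ yields the largest power of 2 below max_pointers. Below is a minimal standalone sketch of that relationship; ceil_log2 is a stand-in written for illustration, not PostgreSQL's actual my_log2 implementation.

    #include <stdio.h>

    /* Stand-in for my_log2(): smallest i such that (1L << i) >= num, for num >= 1. */
    static int
    ceil_log2(long num)
    {
        int  i = 0;
        long limit = 1;

        while (limit < num)
        {
            i++;
            limit <<= 1;
        }
        return i;
    }

    int
    main(void)
    {
        long max_pointers = 100;                        /* arbitrary non-power-of-2 value */
        long mppow2 = 1L << ceil_log2(max_pointers);    /* 128 */

        /* Same rounding-down step the patch applies to max_pointers. */
        if (max_pointers != mppow2)
            max_pointers = mppow2 / 2;                  /* 64 */

        printf("rounded down to %ld\n", max_pointers);
        return 0;
    }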

0 commit comments