
Commit f7d0a31

Reduce initial size of RelfilenodeMapHash.
A test case provided by Mathieu Fenniak shows that hash_seq_search'ing this hashtable can consume a very significant amount of overhead during logical decoding, which triggers frequent cache invalidations.

Testing suggests that the actual population of the hashtable is often no more than a few dozen entries, so we can cut the overhead just by dropping the initial number of buckets down from 1024 --- I chose to cut it to 64. (In situations where we do have a significant number of entries, we shouldn't get any real penalty from doing this, as the dynahash.c code will resize the hashtable automatically.)

This gives a further factor-of-two savings in Mathieu's test case. That may be overly optimistic for real-world benefit, as real cases may have larger average table populations, but it's hard to see it turning into a net negative for any workload.

Back-patch to 9.4 where relfilenodemap.c was introduced.

Discussion: https://postgr.es/m/CAHoiPjzea6N0zuCi=+f9v_j94nfsy6y8SU7-=bp4=7qw6_i=Rg@mail.gmail.com
1 parent b7a98b1 commit f7d0a31

File tree

1 file changed: +1 −1 lines changed

src/backend/utils/cache/relfilenodemap.c

@@ -123,7 +123,7 @@ InitializeRelfilenodeMap(void)
 	 * error.
 	 */
 	RelfilenodeMapHash =
-		hash_create("RelfilenodeMap cache", 1024, &ctl,
+		hash_create("RelfilenodeMap cache", 64, &ctl,
 					HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
 
 	/* Watch for invalidation events. */
