
Commit 20cb18d

Make catalog cache hash tables resizeable.
If the hash table backing a catalog cache becomes too full (fillfactor > 2), enlarge it. A new buckets array, double the size of the old, is allocated, and all entries in the old hash are moved to the right bucket in the new hash.

This has two benefits. First, cache lookups don't get so expensive when there are lots of entries in a cache, like if you access hundreds of thousands of tables. Second, we can make the (initial) sizes of the caches much smaller, which saves memory.

This patch dials down the initial sizes of the catcaches. The new sizes are chosen so that a backend that only runs a few basic queries still won't need to enlarge any of them.
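The scheme the message describes can be sketched as a small standalone chained hash table that doubles its bucket array once the fill factor (entries per bucket) exceeds 2. This is an illustrative miniature, not PostgreSQL code: the names (`Cache`, `Entry`, `cache_*`) and the chain-per-bucket layout are assumptions made for the sketch.

```c
#include <assert.h>
#include <stdlib.h>

/* Illustrative miniature of the resize-on-fillfactor>2 scheme. */

typedef struct Entry
{
    unsigned    hash;           /* saved hash value, reused on rehash */
    int         value;
    struct Entry *next;         /* bucket chain link */
} Entry;

typedef struct Cache
{
    int         nbuckets;       /* always a power of two */
    int         ntup;           /* number of entries */
    Entry     **bucket;
} Cache;

/* Power-of-two sizing lets us mask instead of computing a modulo. */
#define HASH_INDEX(h, sz)   ((int) ((h) & ((unsigned) (sz) - 1)))

static Cache *
cache_create(int nbuckets)
{
    Cache      *cp = calloc(1, sizeof(Cache));

    cp->nbuckets = nbuckets;
    cp->bucket = calloc(nbuckets, sizeof(Entry *));
    return cp;
}

/* Double the bucket array and move every entry to its new bucket. */
static void
cache_rehash(Cache *cp)
{
    int         newnbuckets = cp->nbuckets * 2;
    Entry     **newbucket = calloc(newnbuckets, sizeof(Entry *));
    int         i;

    for (i = 0; i < cp->nbuckets; i++)
    {
        Entry      *e = cp->bucket[i];

        while (e != NULL)
        {
            Entry      *next = e->next;

            /* The saved hash value spares us recomputing it here. */
            int         idx = HASH_INDEX(e->hash, newnbuckets);

            e->next = newbucket[idx];
            newbucket[idx] = e;
            e = next;
        }
    }
    free(cp->bucket);
    cp->nbuckets = newnbuckets;
    cp->bucket = newbucket;
}

static void
cache_insert(Cache *cp, unsigned hash, int value)
{
    Entry      *e = malloc(sizeof(Entry));
    int         idx = HASH_INDEX(hash, cp->nbuckets);

    e->hash = hash;
    e->value = value;
    e->next = cp->bucket[idx];
    cp->bucket[idx] = e;
    cp->ntup++;

    /* Enlarge when fill factor exceeds 2, as the commit describes. */
    if (cp->ntup > cp->nbuckets * 2)
        cache_rehash(cp);
}

static Entry *
cache_lookup(Cache *cp, unsigned hash)
{
    Entry      *e;

    for (e = cp->bucket[HASH_INDEX(hash, cp->nbuckets)]; e != NULL; e = e->next)
        if (e->hash == hash)
            return e;
    return NULL;
}
```

Starting from 2 buckets, inserting 10 entries triggers two doublings (on the 5th and 9th insert), ending at 8 buckets; `cache_lookup` keeps chains short throughout, which is the first benefit the message names, while the tiny initial size is the second.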
1 parent b1892aa commit 20cb18d

File tree

3 files changed, +102 -58 lines changed


src/backend/utils/cache/catcache.c

Lines changed: 48 additions & 4 deletions

@@ -734,9 +734,8 @@ InitCatCache(int id,
 	int			i;
 
 	/*
-	 * nbuckets is the number of hash buckets to use in this catcache.
-	 * Currently we just use a hard-wired estimate of an appropriate size for
-	 * each cache; maybe later make them dynamically resizable?
+	 * nbuckets is the initial number of hash buckets to use in this catcache.
+	 * It will be enlarged later if it becomes too full.
 	 *
 	 * nbuckets must be a power of two.  We check this via Assert rather than
 	 * a full runtime check because the values will be coming from constant
@@ -775,7 +774,8 @@ InitCatCache(int id,
 	 *
 	 * Note: we rely on zeroing to initialize all the dlist headers correctly
 	 */
-	cp = (CatCache *) palloc0(sizeof(CatCache) + nbuckets * sizeof(dlist_head));
+	cp = (CatCache *) palloc0(sizeof(CatCache));
+	cp->cc_bucket = palloc0(nbuckets * sizeof(dlist_head));
 
 	/*
 	 * initialize the cache's relation information for the relation
@@ -813,6 +813,43 @@ InitCatCache(int id,
 	return cp;
 }
 
+/*
+ * Enlarge a catcache, doubling the number of buckets.
+ */
+static void
+RehashCatCache(CatCache *cp)
+{
+	dlist_head *newbucket;
+	int			newnbuckets;
+	int			i;
+
+	elog(DEBUG1, "rehashing catalog cache id %d for %s; %d tups, %d buckets",
+		 cp->id, cp->cc_relname, cp->cc_ntup, cp->cc_nbuckets);
+
+	/* Allocate a new, larger, hash table. */
+	newnbuckets = cp->cc_nbuckets * 2;
+	newbucket = (dlist_head *) MemoryContextAllocZero(CacheMemoryContext, newnbuckets * sizeof(dlist_head));
+
+	/* Move all entries from old hash table to new. */
+	for (i = 0; i < cp->cc_nbuckets; i++)
+	{
+		dlist_mutable_iter iter;
+		dlist_foreach_modify(iter, &cp->cc_bucket[i])
+		{
+			CatCTup    *ct = dlist_container(CatCTup, cache_elem, iter.cur);
+			int			hashIndex = HASH_INDEX(ct->hash_value, newnbuckets);
+
+			dlist_delete(iter.cur);
+			dlist_push_head(&newbucket[hashIndex], &ct->cache_elem);
+		}
+	}
+
+	/* Switch to the new array. */
+	pfree(cp->cc_bucket);
+	cp->cc_nbuckets = newnbuckets;
+	cp->cc_bucket = newbucket;
+}
+
 /*
  * CatalogCacheInitializeCache
  *
@@ -1684,6 +1721,13 @@ CatalogCacheCreateEntry(CatCache *cache, HeapTuple ntp,
 	cache->cc_ntup++;
 	CacheHdr->ch_ntup++;
 
+	/*
+	 * If the hash table has become too full, enlarge the buckets array.
+	 * Quite arbitrarily, we enlarge when fill factor > 2.
+	 */
+	if (cache->cc_ntup > cache->cc_nbuckets * 2)
+		RehashCatCache(cache);
+
 	return ct;
 }