Re: [PROPOSAL] Shared Ispell dictionaries
From: Arthur Zakirov
Subject: Re: [PROPOSAL] Shared Ispell dictionaries
Msg-id: 4c321cb7-899f-0548-ecf4-5e965dd71335@postgrespro.ru
In reply to: Re: [PROPOSAL] Shared Ispell dictionaries (Robert Haas <robertmhaas@gmail.com>)
Responses: Re: [PROPOSAL] Shared Ispell dictionaries
List: pgsql-hackers
On 21.02.2019 15:45, Robert Haas wrote:
> On Wed, Feb 20, 2019 at 9:33 AM Arthur Zakirov <a.zakirov@postgrespro.ru> wrote:
>> I'm working on the (b) approach. I thought about a priority queue
>> structure. There no such ready structure within PostgreSQL sources
>> except binaryheap.c, but it isn't for concurrent algorithms.
>
> I don't see why you need a priority queue or, really, any other fancy
> data structure. It seems like all you need to do is somehow set it up
> so that a backend which doesn't use a dictionary for a while will
> dsm_detach() the segment. Eventually an unused dictionary will have
> no remaining references and will go away.

Hm, I hadn't thought of it that way. I agree that introducing a new data
structure would be overengineering.

In the current patch all DSM segments are pinned (that is, dsm_pin_segment()
is called for each of them), so a dictionary stays in shared memory even if
nobody holds a reference to it. I thought about periodically scanning the
shared hash table and unpinning old, unused dictionaries. But that approach
needs a sequential scan facility for dshash. Happily, there is a patch from
Kyotaro-san that provides one (the v16-0001-sequential-scan-for-dshash.patch
part):
https://www.postgresql.org/message-id/20190221.160555.191280262.horiguchi.kyotaro@lab.ntt.co.jp

Your approach looks simpler. It is only necessary to periodically scan the
dictionaries' cache hash table and to not call dsm_pin_segment() when a DSM
segment is initialized. It also means that a dictionary stays loaded in DSM
only while there is at least one backend attached to the dictionary's DSM
segment.

-- 
Arthur Zakirov
Postgres Professional: http://www.postgrespro.com
Russian Postgres Company
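[Editorial illustration, not part of the original message or the patch.] To make the unpinned-segment idea concrete, here is a minimal, hypothetical C sketch under the assumptions above: the segment is never pinned with dsm_pin_segment(), each backend remembers when it last used a dictionary, and idle entries are detached so the segment is destroyed once the last backend lets go. The DictBackendEntry struct, dict_get_shared_data(), dict_sweep_idle() and the timeout value are made-up names for illustration; only dsm_attach(), dsm_pin_mapping(), dsm_detach() and the timestamp helpers are real PostgreSQL APIs.

/*
 * Hypothetical sketch: per-backend tracking of attached dictionary
 * segments.  Because dsm_pin_segment() is never called, the DSM segment
 * is destroyed automatically when the last backend detaches.
 */
#include "postgres.h"
#include "storage/dsm.h"
#include "utils/timestamp.h"

/* Assumed backend-local cache entry for one dictionary. */
typedef struct DictBackendEntry
{
	dsm_handle	handle;		/* handle published in the shared hash table */
	dsm_segment *seg;		/* NULL if this backend is not attached */
	TimestampTz	last_used;	/* last time this backend used the dictionary */
} DictBackendEntry;

#define DICT_IDLE_TIMEOUT_MS	(10 * 60 * 1000)	/* arbitrary: 10 minutes */

/* Attach to the dictionary's segment if necessary and return its data. */
static void *
dict_get_shared_data(DictBackendEntry *entry)
{
	if (entry->seg == NULL)
	{
		entry->seg = dsm_attach(entry->handle);
		if (entry->seg == NULL)
			elog(ERROR, "could not attach to dictionary DSM segment");
		/* keep the mapping across transactions, until we detach explicitly */
		dsm_pin_mapping(entry->seg);
	}
	entry->last_used = GetCurrentTimestamp();
	return dsm_segment_address(entry->seg);
}

/*
 * Called periodically from the dictionary cache code: detach segments this
 * backend has not touched for a while.  No unpinning is needed anywhere,
 * since the segments were never pinned.
 */
static void
dict_sweep_idle(DictBackendEntry *entries, int nentries)
{
	TimestampTz now = GetCurrentTimestamp();
	int			i;

	for (i = 0; i < nentries; i++)
	{
		DictBackendEntry *entry = &entries[i];

		if (entry->seg != NULL &&
			TimestampDifferenceExceeds(entry->last_used, now,
									   DICT_IDLE_TIMEOUT_MS))
		{
			dsm_detach(entry->seg);
			entry->seg = NULL;
		}
	}
}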