Re: Re: How to keep at-most N rows per group? periodic DELETEs or constraints or..?
| From | Scott Marlowe |
|---|---|
| Subject | Re: Re: How to keep at-most N rows per group? periodic DELETEs or constraints or..? |
| Date | |
| Msg-id | dcc563d10801091109kf80ce2dh97b8e4a548f5168f@mail.gmail.com |
| In reply to | Re: How to keep at-most N rows per group? periodic DELETEs or constraints or..? (Steve Midgley <public@misuse.org>) |
| Responses | Re: Re: How to keep at-most N rows per group? periodic DELETEs or constraints or..? |
| List | pgsql-sql |
On Jan 9, 2008 12:20 PM, Steve Midgley <public@misuse.org> wrote:
> This is kludgy but you would have some kind of random number test at
> the start of the trigger - if it evals true once per every ten calls to
> the trigger (say), you'd cut your delete statement execs by about 10x
> and still periodically truncate every set of user rows fairly often. On
> average you'd have ~55 rows per user, never less than 50 and a few
> outliers with 60 or 70 rows before they get trimmed back down to 50.
> Seems more reliable than a cron job, and solves your problem of an
> ever-growing table? You could adjust the random number test easily if
> you change your mind on the balance of table size vs. number of delete
> statements down the road.

And if you always throw a LIMIT 50 on the end of queries that retrieve data, you could let it grow quite a bit more than 60 or 70... say 200. Then you could have it so that the random chopper function only gets kicked off every 100th or so time.
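The probabilistic-trim trigger described above might look something like this sketch. The table `user_log(id, user_id, created_at)`, the function name, and the 10% firing rate are assumptions for illustration; none of them appear in the thread, so adapt the names and the `random()` threshold to taste.

```sql
-- Hypothetical table for illustration; not part of the original thread.
-- CREATE TABLE user_log (
--     id         serial PRIMARY KEY,
--     user_id    integer NOT NULL,
--     created_at timestamptz NOT NULL DEFAULT now()
-- );

CREATE OR REPLACE FUNCTION trim_user_log() RETURNS trigger AS $$
BEGIN
    -- Run the expensive DELETE only ~1 insert in 10, as suggested above.
    -- Lower the threshold (e.g. 0.01) if you let rows pile up to ~200
    -- and rely on LIMIT 50 in your read queries.
    IF random() < 0.1 THEN
        DELETE FROM user_log
        WHERE user_id = NEW.user_id
          AND id NOT IN (
              -- Keep only the 50 most recent rows for this user.
              SELECT id
              FROM user_log
              WHERE user_id = NEW.user_id
              ORDER BY created_at DESC
              LIMIT 50
          );
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trim_user_log_trg
    AFTER INSERT ON user_log
    FOR EACH ROW EXECUTE PROCEDURE trim_user_log();
```

Because the trim fires only probabilistically, a given user may temporarily hold more than 50 rows between trims; the trigger bounds the table's growth on average rather than enforcing a hard per-group limit on every insert.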