Re: Deleting millions of rows
From | Robert Haas
Subject | Re: Deleting millions of rows
Date |
Msg-id | 603c8f070902021038p1c9221aeo6c5cb921e24f85b0@mail.gmail.com
In reply to | Deleting millions of rows (Brian Cox <brian.cox@ca.com>)
List | pgsql-performance
On Mon, Feb 2, 2009 at 1:17 PM, Brian Cox <brian.cox@ca.com> wrote:
> I'm using 8.3.5. Table ts_defects has 48M rows. Through psql: delete from
> ts_defects;
> Result: out of memory/Can't allocate size: 32
> I then did 10 or so deletes to get rid of the rows. Afterwards, inserts into
> or queries on this table performed significantly slower. I tried a vacuum
> analyze, but this didn't help. To fix this, I dumped and restored the
> database.
>
> 1) why can't postgres delete all rows in a table if it has millions of rows?
> 2) is there any other way to restore performance other than restoring the
> database?

Does the table have triggers on it? Does it have indexes? What is the
result of pg_relation_size() on that table? How much memory do you have
in your machine? What is work_mem set to?

Did you try VACUUM FULL instead of just plain VACUUM to recover
performance? You might also need to REINDEX. Or you could TRUNCATE
the table.

...Robert
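For readers who land on this thread with the same problem, here is a hedged sketch of the alternatives Robert mentions, using the ts_defects table from Brian's report (the batch size of 100000 is an arbitrary illustration, not a recommendation):

```sql
-- Fastest way to empty the whole table: TRUNCATE reclaims the
-- underlying files immediately and does not queue per-row trigger
-- events. It takes an exclusive lock and cannot target a subset
-- of rows.
TRUNCATE TABLE ts_defects;

-- If only some rows must go, delete in modest batches so each
-- statement's trigger/FK event queue and transaction stay small
-- (DELETE ... LIMIT is not supported, hence the ctid workaround):
DELETE FROM ts_defects
WHERE ctid = ANY (ARRAY(SELECT ctid FROM ts_defects LIMIT 100000));

-- Afterwards, reclaim dead space and rebuild bloated indexes:
VACUUM FULL ts_defects;
REINDEX TABLE ts_defects;
```

The out-of-memory failure on a single huge DELETE typically comes from per-row AFTER trigger events (for example, foreign-key checks) accumulating in memory for the whole statement, which is why the questions above start with triggers and indexes.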