Re: Best COPY Performance
From: Worky Workerson
Subject: Re: Best COPY Performance
Date:
Msg-id: ce4072df0610240617s72f35540odd3d4e0903313b63@mail.gmail.com
In reply to: Re: Best COPY Performance ("Jim C. Nasby" <jim@nasby.net>)
List: pgsql-performance
> http://stats.distributed.net used to use a perl script to do some
> transformations before loading data into the database. IIRC, when we
> switched to using C we saw 100x improvement in speed, so I suspect that
> if you want performance perl isn't the way to go. I think you can
> compile perl into C, so maybe that would help some.

Like Craig mentioned, I have never seen those sorts of improvements going from perl->C, and developer efficiency is primo for me. I've profiled most of the stuff, and have used XS modules and Inline::C on the appropriate, often-used functions, but I still think that it comes down to my using CSV and Text::CSV_XS. Even though it's XS, CSV is still a pain in the ass.

> Ultimately, you might be best off using triggers instead of rules for the
> partitioning since then you could use copy. Or go to raw insert commands
> that are wrapped in a transaction.

Eh, I've put the partition loading logic in the loader, which seems to work out pretty well, especially since I keep things sorted, am the only one inserting into the DB, and do so with bulk loads. But I'll keep this in mind for later use.

Thanks!
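The loader-side partition routing described above can be sketched as follows (a minimal illustration in Python with hypothetical names; the original loader is Perl). The key observation is that because the input is kept sorted by partition key, each partition's rows arrive contiguously, so the loader can emit one bulk batch (e.g. one COPY) per partition without buffering the whole file:

```python
from itertools import groupby

def route_batches(rows, partition_key):
    """Group pre-sorted rows into per-partition batches.

    Assumes rows are already sorted by partition key, so each
    partition's rows are contiguous and can be flushed as a
    single bulk-load batch (e.g. one COPY per partition table).
    """
    for key, group in groupby(rows, key=partition_key):
        yield key, list(group)

# Example: route (date, value) rows into monthly partitions
rows = [
    ("2006-09-01", 10),
    ("2006-09-15", 20),
    ("2006-10-02", 30),
]
batches = dict(route_batches(rows, lambda r: r[0][:7]))
```

Each resulting batch maps directly onto one COPY into the matching partition table, avoiding per-row trigger or rule overhead on the server side.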