Sunday, September 28, 2008

Re: [PERFORM] Slow updates, poor IO

Thanks to everyone who responded. I've done some benchmarking.

checkpoint_segments=16 is fine; going to 64 made no improvement.
Using "update file set size=99" as a statement, but changing 99 on each
run..
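
For reference, a minimal sketch of the setup, with 8.x-era parameter names (the table name "file" is the one from the test):

    # postgresql.conf (shared_buffers needs a server restart to change)
    shared_buffers      = 32MB
    checkpoint_segments = 16

    -- the timed statement, run from psql with \timing on,
    -- changing the literal on each run:
    UPDATE file SET size = 99;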

With 32MB of shared memory, times in seconds, leaving the system idle long enough between runs for autovacuum to complete:

415
421
470
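
One way to confirm that autovacuum has caught up between runs is to watch the stats view (columns as of 8.3; "file" is the test table):

    SELECT relname, last_autovacuum, n_dead_tup
    FROM pg_stat_user_tables
    WHERE relname = 'file';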

Then I decided to drop the DB and restore from a dump:

1150
1500
1018
1071
1077
1140

Then I tried shared_buffers=256MB as suggested.

593
544

So that's made a big difference. vmstat showed a higher, more consistent I/O level.
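
For reference, that's a one-line change in postgresql.conf, and shared_buffers needs a server restart to take effect:

    shared_buffers = 256MB    # up from 32MB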

I wondered why it slowed down after a restore. I thought it would improve, with less fragmentation and all that. So I tried a reindex of all three indexes:

209
228

So that's it! Lots of RAM, and reindexing as part of standard operation.
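
For anyone following along, a reindex can be done per index or for a whole table in one go; a sketch with hypothetical index names:

    REINDEX INDEX file_pkey;    -- rebuild one index
    REINDEX TABLE file;         -- rebuild all of the table's indexes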

Interestingly, the reindexing took about 16s per index, and the update on the table with no indexes took about 48s. So the aggregate time for each step would be about 230s. I take that as an indicator that it is now maximally efficient.
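
Doing the steps separately would look something like this (index names and definitions hypothetical; the other two indexes would be handled the same way):

    DROP INDEX file_size_idx;                      -- repeat for each index
    UPDATE file SET size = 99;                     -- ~48s with no indexes
    CREATE INDEX file_size_idx ON file (size);     -- ~16s per index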


The option of having more spindles for improved I/O request processing isn't feasible in most cases. With the requirement for redundancy, we end up with a lot of them, needing an external enclosure. They would have to be expensive SCSI/SAS/FC drives too, since SATA just doesn't have the I/O processing capability.

It will be interesting to see what happens when good-performing SSDs appear.

Meanwhile RAM is cheaper than that drive array!

It would be nice if things like:
* the effect of updates on indexed tables
* fill factor (see the sketch below)
* reindexing after a restore
were mentioned in the 'performance' section of the manual, since that's the part someone will go to when looking for a solution.
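
On fill factor: a lower setting leaves free space in each page, so updated rows are more likely to find room on the same page (and, in 8.3, to qualify for HOT updates). A sketch with hypothetical names, using the 8.2+ storage-parameter syntax:

    ALTER TABLE file SET (fillfactor = 70);
    CREATE INDEX file_size_idx ON file (size) WITH (fillfactor = 70);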


Again, thanks to everyone,

--John


