I've done some benchmarking
checkpoint_segments=16 is fine; going to 64 made no improvement.
Using "update file set size=99" as the statement, but changing 99 on each
run. With 32MB shared memory, times are in seconds, leaving the system idle
long enough between runs for autovacuum to complete.
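To make the methodology concrete, the benchmark step might be sketched as below (a sketch based on the statement quoted above; the changing literal forces PostgreSQL to write a new version of every row on each run rather than skipping identical values):

```sql
-- One full-table UPDATE per run; vary the literal so every run
-- rewrites all row versions.
UPDATE file SET size = 99;   -- run 1
-- ...wait for autovacuum to finish, then:
UPDATE file SET size = 100;  -- run 2, and so on
```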
Then I decided to drop the DB and restore from a dump.
Then I tried shared_buffers=256MB as suggested.
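For reference, the two settings discussed so far would look like this in postgresql.conf (values from this thread; the comments are my reading of the results, and you'd adjust to your own hardware):

```
# postgresql.conf -- settings used in these runs
checkpoint_segments = 16    # raising to 64 made no measurable difference
shared_buffers = 256MB      # up from 32MB; this made the big difference
```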
So that made a big difference. vmstat showed a higher, more consistent
I/O rate.
I wondered why it slowed down after a restore. I thought it would
improve, with less fragmentation and all that. So I tried a REINDEX on
all three indexes.
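That step might look like the following (the index names here are hypothetical; REINDEX TABLE rebuilds all of a table's indexes in one go, or each can be named individually):

```sql
-- Rebuild every index on the table in one statement:
REINDEX TABLE file;

-- Or rebuild them one at a time (hypothetical index names):
-- REINDEX INDEX file_pkey;
-- REINDEX INDEX file_name_idx;
-- REINDEX INDEX file_size_idx;
```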
So that's it! Lots of RAM, and REINDEX as part of standard operation.
Interestingly, the reindexing took about 16s each. The update on the
table with no indexes took about 48s. So the aggregate time for each
step would be about 230s. I take that as an indicator that it is now
maximally efficient.
The option of adding more spindles for improved I/O request processing
isn't feasible in most cases. With the requirement for redundancy, we end
up with a lot of them, needing an external enclosure. They would have to
be expensive SCSI/SAS/FC drives too, since SATA just doesn't have the I/O
processing.
It will be interesting to see what happens when good-performing SSDs
become available. Meanwhile, RAM is cheaper than that drive array!
It would be nice if things like

* the effect of updates on indexed tables
* fill factor
* reindexing after a restore

were mentioned in the 'performance' section of the manual, since that's
the part someone will go to when looking for a solution.
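As an aside on the fill factor point above: it is set per table or per index as a storage parameter, e.g. (table name from this thread, index name hypothetical, and 70 is just an illustrative value):

```sql
-- Leave 30% free space in heap pages so updated rows can often
-- stay on the same page instead of fragmenting across the table.
ALTER TABLE file SET (fillfactor = 70);

-- Fill factor can also be set on an index (hypothetical name):
-- ALTER INDEX file_size_idx SET (fillfactor = 70);
```

Note that changing fillfactor only affects pages written afterwards; a rewrite (e.g. VACUUM FULL or CLUSTER) is needed to repack existing data.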
Again, thanks to everyone,
Sent via pgsql-performance mailing list (firstname.lastname@example.org)