> Thanks for your reply.
>
> You mean that RAID uses the fsync method to keep a copy of the data?
No, Merlin means PostgreSQL will issue a sync to force WAL to actual disk.
> So you're suggesting I deactivate fsync to increase performance?
He means you might have to if you can't afford new hardware. Is disk
activity the problem? Have you looked at the output of "vmstat" to check?
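If you're not sure, something like this (run it while the system is under
load) will show whether the machine is waiting on disk rather than CPU:

    # sample system activity every 2 seconds, ten times; a consistently
    # high "wa" (I/O wait) column points at the disks as the bottleneck
    vmstat 2 10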
> If I deactivate fsync, my second disk will not be up to date,
No - the RAID stuff is happening in the operating system.
> so if the machine crashes, I won't be able to get the server working
> quickly?
Not "quickly", perhaps not "at all".
> But if I use a second machine to replicate the database, I avoid this
> problem, don't I?
You reduce the chance of a single failure causing disaster.
> If I understand right, could you tell me how to deactivate fsync,
> please?
There's an "fsync = on" setting in your postgresql.conf, but don't
change it yet.
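For when you do understand the trade-off, the change itself is a
one-liner (a sketch - the data directory path is just a placeholder):

    # in postgresql.conf:
    #   fsync = off
    # then tell the server to re-read its configuration:
    pg_ctl reload -D /path/to/your/data/directory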
> I have a database of 38GB and take 6GB per week.
What do you mean by "take 6GB per week"? Do you update/delete that much
data, or is the database growing by that amount each week?
> I have a lot of updates and inserts, especially in 8 tables. 2 tables are
> used for temporary storage, so I write something like 15000 requests per
> 2 minutes and empty it in 10 min.
I'm not sure what "15000 requests per 2 minutes and empty it in 10 min"
means.
Do you have 7500 requests per minute?
Are these updates?
To the "temporary storage"?
What is this "temporary storage" - an ordinary table?
> I'm making some updates or selects on tables including more than 20
> million entries.
Again, I'm not sure what this means.
Oh - *important* - which version of PostgreSQL are you running?
Is an upgrade practical?
Looking at your postgresql.conf settings:
max_connections = 624
That's an odd number.
Do you usually have that many connections?
What are they doing? They can't all be active; the machine you've got
wouldn't cope.
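Something like this will tell you (substitute your own database name for
"yourdb" - that's just a placeholder):

    # how many backends are connected right now
    psql -d yourdb -c "SELECT count(*) FROM pg_stat_activity;"
    # and what each one is doing (the column is current_query on older
    # releases, query/state on newer ones)
    psql -d yourdb -c "SELECT * FROM pg_stat_activity;"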
shared_buffers = 250000
work_mem = 9000
temp_buffers = 500
These three are important. The shared_buffers are workspace shared
between all backends, and you've allocated about 2GB. You've also set
work_mem=9MB, which is how much each backend can use for a single sort.
That means it can use double or triple that in a complex query. If
you're using temporary tables, then you'll want to make sure the
temp_buffers setting is correct.
I can't say whether these figures are good or bad without knowing how
the database is being used.
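For reference, the arithmetic behind those figures, assuming the usual
interpretation of unitless values (8kB pages for the buffer settings,
kB for work_mem):

    shared_buffers = 250000 pages * 8kB = ~1.9GB shared by all backends
    work_mem       = 9000kB             = ~9MB per sort, per backend
    temp_buffers   = 500 pages * 8kB    = ~4MB per session, for temp tables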
effective_cache_size = 625000
That's around 5GB - is that roughly the amount of memory used for
caching (what does free -m say for buffers/cache)?
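That is, on the database server itself:

    # the "cached" figure is file data the kernel is holding in RAM;
    # effective_cache_size should roughly match that plus shared_buffers
    free -m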
max_prepared_transactions = 200
Do you use a lot of prepared transactions in two-phase commit?
I'm guessing that you don't.
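You can check - if your version has the pg_prepared_xacts view (8.1 or
later) and it's always empty, you can wind that setting right down:

    # lists transactions currently prepared for two-phase commit
    psql -d yourdb -c "SELECT * FROM pg_prepared_xacts;"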
> I'm sure it could be further optimised. I don't know anything about
> WAL, autovacuum, fsm, bgwriter, kernel process, geqo or planner cost
> settings.
If you run a "vacuum verbose" it will recommend fsm settings at the end
of its output. I think you probably need to make your autovacuum more
aggressive, but that's something you'll be able to tell by monitoring
your database.
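For example (it needs enough privileges to vacuum every table, and the
fsm hint appears at the very end of a database-wide run):

    # the last lines of output report free space map usage and suggest
    # values for max_fsm_pages / max_fsm_relations
    psql -d yourdb -c "VACUUM VERBOSE;"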
It's quite likely that Merlin's right, and you need better hardware to
cope with the number of updates you're making - that's something where
you need fast disks. However, he's having to guess, because you've not
given us enough information to tell where the problem really lies.
--
Richard Huxton
Archonet Ltd