> On Friday 13 June 2008 12:58:22 Josh Berkus wrote:
> > I can see how this would be useful, but I can also see that it could be a
> > huge performance burden when activated. So it couldn't be part of the
> > standard statistics collection.
>
> A lower overhead way to get at this type of information is to quantize dtrace
> results over a specific period of time. Much nicer than doing the whole
> logging/analyze piece.
DTrace is disabled by default in most installations, and it is not available
on some platforms (in particular, I want to use the feature on Linux). I think
DTrace is known as a tool for developers rather than for DBAs. However,
statement logging is required by DBAs who are used to using STATSPACK in Oracle.
I will try to measure the overhead of logging in several implementations:
1. Log statements and dump them into the server logs.
2. Log statements and filter them before they are written.
3. Store statements in shared memory (a rough sketch of this follows below).
I know 1 is slow, but I don't know which part of it is really slow.
If the cause is writing statements to disk, 2 would be a solution;
3 will be needed if sending statements to the logger is itself the source
of the overhead.
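
For illustration, here is a minimal sketch of what 3 might look like as a
loadable module, assuming the 8.4-era executor and shared-memory hooks
(shmem_startup_hook, ExecutorEnd_hook) and loading via
shared_preload_libraries. The StmtStore structure, the "stmt_store" segment
name, and STMT_BUF_SIZE are placeholder names for this example only, not part
of any existing patch:

/*
 * Hypothetical sketch only -- not the patch under discussion.  Assumes the
 * 8.4-era extension hooks (shmem_startup_hook, ExecutorEnd_hook) and that
 * the module is loaded via shared_preload_libraries.
 */
#include "postgres.h"

#include "executor/executor.h"
#include "fmgr.h"
#include "storage/ipc.h"
#include "storage/lwlock.h"
#include "storage/shmem.h"

PG_MODULE_MAGIC;

#define STMT_BUF_SIZE	(64 * 1024)		/* arbitrary example size */

typedef struct StmtStore
{
	LWLockId	lock;					/* protects the fields below */
	Size		used;					/* bytes of data[] in use */
	char		data[STMT_BUF_SIZE];	/* NUL-separated query texts */
} StmtStore;

static StmtStore *store = NULL;
static shmem_startup_hook_type prev_shmem_startup_hook = NULL;
static ExecutorEnd_hook_type prev_ExecutorEnd = NULL;

void		_PG_init(void);

/* Attach to (or initialize) our chunk of shared memory. */
static void
stmtstore_shmem_startup(void)
{
	bool		found;

	if (prev_shmem_startup_hook)
		prev_shmem_startup_hook();

	LWLockAcquire(AddinShmemInitLock, LW_EXCLUSIVE);
	store = ShmemInitStruct("stmt_store", sizeof(StmtStore), &found);
	if (!found)
	{
		store->lock = LWLockAssign();
		store->used = 0;
	}
	LWLockRelease(AddinShmemInitLock);
}

/* After each query, append its text to the shared buffer. */
static void
stmtstore_ExecutorEnd(QueryDesc *queryDesc)
{
	const char *query = queryDesc->sourceText;

	if (store && query)
	{
		Size		len = strlen(query) + 1;

		LWLockAcquire(store->lock, LW_EXCLUSIVE);
		if (store->used + len <= STMT_BUF_SIZE)
		{
			memcpy(store->data + store->used, query, len);
			store->used += len;
		}
		LWLockRelease(store->lock);
	}

	if (prev_ExecutorEnd)
		prev_ExecutorEnd(queryDesc);
	else
		standard_ExecutorEnd(queryDesc);
}

void
_PG_init(void)
{
	/* Reserve shared memory and one LWLock at postmaster startup. */
	RequestAddinShmemSpace(sizeof(StmtStore));
	RequestAddinLWLocks(1);

	prev_shmem_startup_hook = shmem_startup_hook;
	shmem_startup_hook = stmtstore_shmem_startup;

	prev_ExecutorEnd = ExecutorEnd_hook;
	ExecutorEnd_hook = stmtstore_ExecutorEnd;
}

A real measurement would also have to deal with buffer wraparound and collect
timings, but the point is only that option 3 keeps everything in shared memory
and sends nothing to the logger or to disk.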
Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center