Sunday, July 13, 2008

Re: [HACKERS] [PATCHES] VACUUM Improvements - WIP Patch

(taking the discussions to -hackers)

On Sat, Jul 12, 2008 at 11:02 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>
>
> (2) It achieves speedup of VACUUM by pushing work onto subsequent
> regular accesses of the page, which is exactly the wrong thing.
> Worse, once you count the disk writes those accesses will induce it's
> not even clear that there's any genuine savings.
>

Well, in the worst case that is true. But in most other cases the
second-pass work will be combined with other normal page activity and
the overhead amortized, or at least there is a chance of that. There
is also a chance of delaying the work until there is a real need for
it, e.g. an INSERT or UPDATE on the page that requires a free line
pointer.
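To make the idea concrete, here is a minimal self-contained C sketch (all
names and structures are made up for illustration, not the real PostgreSQL
ones): VACUUM's first pass leaves dead line pointers behind, and a later
INSERT reclaims them when it needs a slot, instead of VACUUM making a
dedicated second heap pass.

```c
/* Illustrative sketch only -- a toy "heap page" with a line pointer
 * array. VACUUM's first pass marks slots dead; a later INSERT reclaims
 * them once index cleanup is known to be complete, piggybacking the
 * second-pass work onto normal page access. */
#include <assert.h>

#define NUM_LP 8

typedef enum { SLOT_UNUSED, SLOT_NORMAL, SLOT_DEAD } SlotState;

typedef struct {
    SlotState lp[NUM_LP];
    int       indexes_cleaned;  /* set once index entries for dead tuples are gone */
} ToyHeapPage;

/* VACUUM first pass: prune dead tuples down to dead line pointers. */
static void vacuum_first_pass(ToyHeapPage *page, const int *dead, int ndead)
{
    for (int i = 0; i < ndead; i++)
        page->lp[dead[i]] = SLOT_DEAD;
}

/* INSERT path: find a usable line pointer, opportunistically reclaiming
 * dead slots -- the deferred "second pass" happening on demand. */
static int get_free_line_pointer(ToyHeapPage *page)
{
    for (int i = 0; i < NUM_LP; i++)
        if (page->lp[i] == SLOT_UNUSED)
            return i;
    if (page->indexes_cleaned)
        for (int i = 0; i < NUM_LP; i++)
            if (page->lp[i] == SLOT_DEAD) {
                page->lp[i] = SLOT_UNUSED;  /* deferred reclamation */
                return i;
            }
    return -1;  /* page is full */
}
```

The point of the sketch is the second loop: the work of turning a dead
slot into a reusable one is paid only when somebody actually needs a
free line pointer on that page, and it is guarded by the same condition
(index cleanup done) that would otherwise gate VACUUM's second pass.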


> (3) The fact that it doesn't work until concurrent transactions have
> gone away makes it of extremely dubious value in real-world scenarios,
> as already noted by Simon.
>

If there are indeed long-running concurrent transactions, we won't get
any benefit from this optimization. But the more common case is many
very short concurrent transactions. In that case, and for very large
tables, reducing the vacuum time is a significant win: the FSM is
written early, and a significant part of VACUUM's work finishes
quickly.

> It strikes me that what you are trying to do here is compensate for
> a bad decision in the HOT patch, which was to have VACUUM's first
> pass prune/defrag a page even when we know we are going to have to
> come back to that page later. What about trying to fix things so
> that if the page contains line pointers that need to be removed,
> the first pass doesn't dirty it at all, but leaves all the work
> to be done at the second visit? I think that since heap_page_prune
> has been refactored into a "scan" followed by an "apply", it'd be
> possible to decide before the "apply" step whether this is the case
> or not.
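For what it's worth, the suggestion above can be sketched in a few lines of
self-contained C (again, the names here are invented for illustration and do
not match the real heap_page_prune API): a read-only "scan" step counts what
pruning would change, and the "apply" step is skipped entirely when the page
would still need a second VACUUM visit to remove line pointers, so the first
pass never dirties such a page.

```c
/* Illustrative sketch only. Tuple states in this toy model:
 *   0 = live, 1 = HOT-prunable now, 2 = dead but its index entries
 *   still exist, so only the second pass can free its line pointer. */
#include <assert.h>
#include <stdbool.h>

typedef struct {
    int nprunable;            /* tuples the first pass could prune now */
    int ndead_line_pointers;  /* pointers only the second pass can free */
} PruneScanResult;

/* "Scan" step: examine the page without modifying anything. */
static PruneScanResult prune_scan(const int *tuple_state, int ntuples)
{
    PruneScanResult r = {0, 0};
    for (int i = 0; i < ntuples; i++) {
        if (tuple_state[i] == 1)
            r.nprunable++;
        else if (tuple_state[i] == 2)
            r.ndead_line_pointers++;
    }
    return r;
}

/* Decide before the "apply" step: only dirty the page when pruning
 * helps now AND no second visit will be needed anyway. */
static bool should_apply_prune(PruneScanResult r)
{
    return r.ndead_line_pointers == 0 && r.nprunable > 0;
}
```

Under this scheme a page containing any to-be-removed line pointers is
left untouched by the first pass, and all of its work lands on the
second visit, which is exactly the behavior Tom is proposing.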
>

I am not against this idea. It's just that it still requires a double
scan of the main table, and that's exactly what this patch is trying
to avoid.

Thanks,
Pavan

--
Pavan Deolasee
EnterpriseDB http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers