Wednesday, July 30, 2008

Re: [HACKERS] window function v03 against HEAD

On Thu, Jul 31, 2008 at 02:46:29PM +0900, Hitoshi Harada wrote:
> 2008/7/31 David Fetter <david@fetter.org>:
> > On Thu, Jul 31, 2008 at 02:12:26PM +0900, Hitoshi Harada wrote:
> >> 2008/7/29 David Fetter <david@fetter.org>:
> >> > On Mon, Jul 28, 2008 at 07:25:55PM +0900, H.Harada wrote:
> >> >> I happily announce that the first design of window function was
> >> >> finished and the patch against HEAD is released online. See
> >> >> http://umitanuki.net/pgsql/wfv03/design.html
> >> >
> >> > I've put up a git repository at
> >> > <http://git.postgresql.org/?p=~davidfetter/window_functions/.git;a=summary>
> >>
> >> Thanks a lot.
> >> I have tried to get clone from the URL but it didn't work.
> >>
> >> $ git-clone http://git.postgresql.org/git/~davidfetter/window_functions/.git
> >> Initialized empty Git repository in /home/forcia/repo/window_functions/.git/
> >> Getting alternates list for
> >> http://git.postgresql.org/git/~davidfetter/window_functions/.git
> >> Getting pack list for
> >> http://git.postgresql.org/git/~davidfetter/window_functions/.git
> >> Getting index for pack 0c6bd5d85a8e0c5e0dee1a4e91b6c2691cb53555
> >> Getting pack 0c6bd5d85a8e0c5e0dee1a4e91b6c2691cb53555
> >> which contains c13ba377a6e58a05b5d9e39e36674af42126d48d
> >>
> >> here it stops and tells nothing. This occurs on both Linux and
> >> Windows clients.
> >
> > How long does it hang for?
> >
>
> Sorry, finally I got it. It took about an hour...

Sorry about that. Apparently, at least the way things are set up,
there's a *lot* of history you can rewind. Further changes should
move pretty quickly :)

Cheers,
David.
--
David Fetter <david@fetter.org> http://fetter.org/
Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter
Skype: davidfetter XMPP: david.fetter@gmail.com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [GENERAL] archive_timeout, checkpoint_timeout

On Wed, 30 Jul 2008, Rob Adams wrote:

> Could someone please explain in layman's terms the implications of using a
> checkpoint_timeout of ~1min as well? Is it a bad idea?

Lowering checkpoint_timeout makes checkpoints more frequent, causing the
database to go through WAL segments (at 16MB each) more often. Since
those get reused as needed, the peak disk usage footprint of your server
shouldn't be any higher. However, churning through that extra disk space
and doing the checkpoint bookkeeping so often can cause your server
performance to suffer a bit during heavy activity. Make sure to watch what
the server looks like under peak load; you may discover that lowering these
timeouts so much causes it to have more trouble keeping up.
That's the usual trade-off here; the more often you want to ship useful
copies of things to another server, the more processing and particularly
disk overhead goes along with that.
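For reference, the settings being discussed live in postgresql.conf; a sketch with illustrative values, not recommendations:

```ini
# postgresql.conf -- shipping frequency vs. overhead trade-off
archive_timeout = 60         # force a WAL segment switch (and archive) every 60s
checkpoint_timeout = 60      # the ~1min value asked about; default is 300 (5min)
checkpoint_segments = 3      # 16MB WAL segments recycled between checkpoints
```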

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [PERFORM] what is less resource-intensive, WHERE id IN or INNER JOIN?

WHERE id IN will generally lead to faster query plans.  Often, much faster on large tables.

The inner join deals with duplicate values for id differently.  WHERE id IN ( subquery ) will be much more likely to choose a hash method for filtering the scan on table1.

I just ran into this today on a query of similar nature on a table with 2M rows (table1 below) joining on a table with 100k (table2).  The speedup was about 10x -- 20 seconds to 2 seconds in my case -- but I'm working with many GB of RAM and 800MB of work_mem.

I'm not an expert on the query planner guts -- the following is all knowledge based on experimenting with queries on my dataset and triggering various query plans.  Because the semantics of these two queries differ, the plan on larger tables will usually be two sorts plus a merge join for the INNER JOIN on most columns, though indexes (and especially unique indexes) will change this.  That holds true even if the number of distinct values in the join column is very low.

The WHERE id IN version will produce a much faster query plan most of the time, assuming you have enough work_mem configured.  The planner is conservative: if it estimates that work_mem would overflow even a little, it often shifts to sorts rather than hashes spilled to disk.

Since you are working with such a small amount of RAM, make sure some of it is doled out to work_mem, and tune the balance between work_mem, the OS, and shared_buffers carefully.  The conservative, sort-if-uncertain nature of the query planner may need some coercion in an unusual environment such as yours.  With that little memory you may even get faster results from a hash spilled to disk than from a sort spilled to disk, if the overflow percentage is small enough and the OS disk cache is large enough.  Also, virtual machines sometimes do odd things when caching non-synced disk writes, which can distort the usual random-versus-sequential disk cost for small I/O volumes (though my VM experience is with VMware, not Xen).
The query planner won't generally choose on-disk hashes on purpose, however, so you may need to get creative with manual statistics settings or with the optimizer cost settings in order to try various query plans and measure the unique aspects of your atypical environment and your data.
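To make the duplicate-handling difference concrete, a minimal sketch (table and column names are invented, not the poster's schema):

```sql
CREATE TABLE t1 (id int, foo text);
CREATE TABLE t2 (id int);
INSERT INTO t1 VALUES (1, 'a');
INSERT INTO t2 VALUES (1), (1);          -- duplicate join key in t2

-- IN is a semi-join: each t1 row is returned at most once
SELECT foo FROM t1 WHERE id IN (SELECT id FROM t2);   -- 1 row

-- INNER JOIN returns one output row per matching pair
SELECT foo FROM t1 INNER JOIN t2 USING (id);          -- 2 rows
```

This is why the planner can freely hash and de-duplicate the subquery for the IN form, while the join form must preserve multiplicity.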

On Wed, Jul 30, 2008 at 3:11 PM, Miernik <public@public.miernik.name> wrote:
AFAIK, provided bar is UNIQUE in table2 (e.g. is a PRIMARY KEY) the two
queries will give the same result:

SELECT foo, id FROM table1 WHERE id IN (SELECT id FROM table2);

SELECT foo, id FROM table1 INNER JOIN table2 USING (id);

Given table1 has about 100k rows, and table2 about 100 rows, which one
should be faster, less resource intensive, use less RAM, disk access, etc?
Are there any other even better ways to accomplish the same query?

Using 8.3.3 on a 48 MB RAM Xen.

--
Miernik
http://miernik.name/


--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

Re: [GENERAL] boolean short-circuiting in plpgsql

Kev <kevinjamesfield@gmail.com> writes:
> ...because the case should force it to only evaluate 'old' when TG_OP
> = 'UPDATE' and otherwise ('INSERT') skip through to 't'. But this
> causes the same error on insert. I suspect it's because the select
> query gets parameterized and at that point the 'old' is missing,
> before the case even gets to be parsed.

Got it in one.

> How do I get around this
> without having two 'perform' statements?

What you need is two nested IF statements. The PERFORM in your example
is not relevant to the problem.
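A sketch of the nested-IF shape Tom is describing (a trigger-body fragment; some_col and do_something() are invented placeholders):

```sql
IF TG_OP = 'UPDATE' THEN
    -- OLD is referenced only inside this branch, so it is never
    -- evaluated when the trigger fires for an INSERT
    IF old.some_col IS DISTINCT FROM new.some_col THEN
        PERFORM do_something();
    END IF;
ELSIF TG_OP = 'INSERT' THEN
    PERFORM do_something();
END IF;
```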

regards, tom lane


Re: [PERFORM] how to fix problem then when two queries run at the same time, it takes longer to complete then if run in sequence

Richard Huxton <dev@archonet.com> wrote:
> Firstly, congratulations on providing quite a large database on such a
> limited system. I think most people on such plans have tables with a
> few hundred to a thousand rows in them, not a million. Many of the
> people here are used to budgets a hundred or a thousand times of
> yours, so bear in mind you're as much an expert as them :-)

Well, I proved that it can work reasonably well, and I am fine-tuning the
system step by step so it can work better.

> If you're going to get the most out of this, you'll want to set up
> your own Xen virtual machine on a local system so you can test
> changes.

Good idea.

> If you know other small organisations locally in a similar position
> perhaps consider sharing a physical machine and managing Xen
> yourselves - that can be cheaper.

Well, maybe, but it's also a lot of hassle; I'm not sure it's worth it. I'm
just looking to get the most out of the existing system.

> First step is to make sure you're running version 8.3 - there are some
> useful improvements there that reduce the size of shorter text fields,
> as well as the synchronised scans Albert mentions below.

I am running 8.3.3.

> Second step is to make turn off any other processes you don't need.
> Tune down the number of consoles, apache processes, mail processes
> etc. Normally not worth the trouble, but getting another couple of MB
> is worthwhile in your case.

There is no apache, but lighttpd, right now:

root@polica:~# free
             total       used       free     shared    buffers     cached
Mem:         49344      47840       1504          0          4      23924
-/+ buffers/cache:      23912      25432
Swap:       257000       9028     247972
root@polica:~#

> Might be worth turning off autovacuum and running a manual vacuum full
> overnight if your database is mostly reads.

I run autovacuum, and the database has a lot of updates all the time,
including TRUNCATEs of tables followed by refills, usually one or two
INSERTs/UPDATEs per second.

> Finally, I think it's worth looking at pgpool or pgbouncer (as Alvaro
> said) and set them to allow only one connection in the pool. I know
> that pgbouncer offers per-transaction connection sharing which will
> make this more practical. Even so, it will help if your application
> can co-operate by closing the connection as soon as possible.

I just installed pgpool2 and whoaaa! Everything is about 3 times
faster! My application is a set of bash scripts using psql -c "UPDATE ...".
I plan to rewrite it in Python; I'm not sure that would improve
performance, but it will at least be a "cleaner" implementation.

In /etc/pgpool.conf I used:

# number of pre-forked child process
num_init_children = 1

# Number of connection pools allowed for a child process
max_pool = 1

I wanted to install pgbouncer, but it is currently broken in Debian. And
why is it in contrib and not in main (speaking of its Debian location)?
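For whenever the Debian package is fixed, a rough pgbouncer equivalent of the pgpool setup above (the database name and address are invented):

```ini
; pgbouncer.ini
[databases]
mydb = host=127.0.0.1 dbname=mydb

[pgbouncer]
pool_mode = transaction    ; per-transaction connection sharing
max_client_conn = 20       ; clients may queue up...
default_pool_size = 1      ; ...but share a single server connection
```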

--
Miernik
http://miernik.name/



Re: [HACKERS] window function v03 against HEAD

2008/7/31 David Fetter <david@fetter.org>:
> On Thu, Jul 31, 2008 at 02:12:26PM +0900, Hitoshi Harada wrote:
>> 2008/7/29 David Fetter <david@fetter.org>:
>> > On Mon, Jul 28, 2008 at 07:25:55PM +0900, H.Harada wrote:
>> >> I happily announce that the first design of window function was
>> >> finished and the patch against HEAD is released online. See
>> >> http://umitanuki.net/pgsql/wfv03/design.html
>> >
>> > I've put up a git repository at
>> > <http://git.postgresql.org/?p=~davidfetter/window_functions/.git;a=summary>
>>
>> Thanks a lot.
>> I have tried to get clone from the URL but it didn't work.
>>
>> $ git-clone http://git.postgresql.org/git/~davidfetter/window_functions/.git
>> Initialized empty Git repository in /home/forcia/repo/window_functions/.git/
>> Getting alternates list for
>> http://git.postgresql.org/git/~davidfetter/window_functions/.git
>> Getting pack list for
>> http://git.postgresql.org/git/~davidfetter/window_functions/.git
>> Getting index for pack 0c6bd5d85a8e0c5e0dee1a4e91b6c2691cb53555
>> Getting pack 0c6bd5d85a8e0c5e0dee1a4e91b6c2691cb53555
>> which contains c13ba377a6e58a05b5d9e39e36674af42126d48d
>>
>> here it stops and tells nothing. This occurs on both Linux and
>> Windows clients.
>
> How long does it hang for?
>

Sorry, finally I got it. It took about an hour...

--
Hitoshi Harada


Re: [GENERAL] Shared object "libpq.so.3" not found

marko <marko.online@gmail.com> writes:
> I'm running FreeBSD 6.2 and I have Postgresql 8.2.4 installed with
> Perl v5.8.8. I'm trying to test DBD-Pg-2.8.7 after compilation and I
> get this error after 'make test':
> # Error: Can't load '/usr/home/markuser/DBD-Pg-2.8.7/blib/arch/
> auto/DBD/Pg/Pg.so' for module DBD::Pg: Shared object "libpq.so.3" not
> found, required by "Pg.so" at /usr/local/lib/perl5/5.8.8/mach/
> DynaLoader.pm line 230.

libpq.so.3 corresponds to the libpq version that was shipped in
PG release series 7.3.x and 7.4.x. 8.2 provides libpq.so.5.

Your subsequent comments make it pretty clear that you've got
(at least portions of) both 7.x and 8.x PG installations on your
machine. I'd suggest flushing all traces of the older one and
then rebuilding DBD::Pg from a clean start. Somehow it's been
seizing on the older PG installation as the one to link to...

regards, tom lane


Re: [HACKERS] window function v03 against HEAD

On Thu, Jul 31, 2008 at 02:12:26PM +0900, Hitoshi Harada wrote:
> 2008/7/29 David Fetter <david@fetter.org>:
> > On Mon, Jul 28, 2008 at 07:25:55PM +0900, H.Harada wrote:
> >> I happily announce that the first design of window function was
> >> finished and the patch against HEAD is released online. See
> >> http://umitanuki.net/pgsql/wfv03/design.html
> >
> > I've put up a git repository at
> > <http://git.postgresql.org/?p=~davidfetter/window_functions/.git;a=summary>
>
> Thanks a lot.
> I have tried to get clone from the URL but it didn't work.
>
> $ git-clone http://git.postgresql.org/git/~davidfetter/window_functions/.git
> Initialized empty Git repository in /home/forcia/repo/window_functions/.git/
> Getting alternates list for
> http://git.postgresql.org/git/~davidfetter/window_functions/.git
> Getting pack list for
> http://git.postgresql.org/git/~davidfetter/window_functions/.git
> Getting index for pack 0c6bd5d85a8e0c5e0dee1a4e91b6c2691cb53555
> Getting pack 0c6bd5d85a8e0c5e0dee1a4e91b6c2691cb53555
> which contains c13ba377a6e58a05b5d9e39e36674af42126d48d
>
> here it stops and tells nothing. This occurs on both Linux and
> Windows clients.

How long does it hang for?

Cheers,
David.
--
David Fetter <david@fetter.org> http://fetter.org/
Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter
Skype: davidfetter XMPP: david.fetter@gmail.com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate


Re: [HACKERS] window function v03 against HEAD

2008/7/29 David Fetter <david@fetter.org>:
> On Mon, Jul 28, 2008 at 07:25:55PM +0900, H.Harada wrote:
>> I happily announce that the first design of window function was
>> finished and the patch against HEAD is released online. See
>> http://umitanuki.net/pgsql/wfv03/design.html
>
> I've put up a git repository at
> <http://git.postgresql.org/?p=~davidfetter/window_functions/.git;a=summary>
>

Thanks a lot.
I tried to clone from that URL, but it didn't work.

$ git-clone http://git.postgresql.org/git/~davidfetter/window_functions/.git
Initialized empty Git repository in /home/forcia/repo/window_functions/.git/
Getting alternates list for
http://git.postgresql.org/git/~davidfetter/window_functions/.git
Getting pack list for
http://git.postgresql.org/git/~davidfetter/window_functions/.git
Getting index for pack 0c6bd5d85a8e0c5e0dee1a4e91b6c2691cb53555
Getting pack 0c6bd5d85a8e0c5e0dee1a4e91b6c2691cb53555
which contains c13ba377a6e58a05b5d9e39e36674af42126d48d

Here it stops and reports nothing. This occurs on both Linux and Windows clients.
I succeeded in cloning git://git.postgresql.org/git/postgresql.git.

I am quite new to git, so if you know something, please point it out.

Regards,


--
Hitoshi Harada


[HACKERS] opportunity for time on large itanium system

Hi all,

There is an opportunity to get remote access to a 16 CPU Itanium (or
possibly bigger) system at HP. If anyone is interested Bob Gobeille
at HP (cc'ed) will do what he can to get remote access. Maybe some
scalability work or something? :) We don't have many details at the
moment, but Bob would be happy to answer questions.

Regards,
Mark


Re: [PERFORM] why "WHERE uid NOT IN" is so slow, and EXCEPT in the same situtation is fast?

Miernik <public@public.miernik.name> writes:
> On Wed, Jul 30, 2008 at 11:08:06PM -0400, Tom Lane wrote:
>> Hmm, what have you got work_mem set to? The first one would likely
>> have been a lot faster if it had hashed the subplan; which I'd have
>> thought would happen with only 80K rows in the subplan result,
>> except it didn't.

> work_mem = 1024kB

Try increasing that ... I don't recall the exact per-row overhead
but I'm quite sure it's more than 8 bytes. Ten times that would
likely get you to a hash subquery plan.
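A per-session way to test this suggestion without editing postgresql.conf (the table names are from the query quoted elsewhere in this thread):

```sql
SET work_mem = '10MB';  -- ten times the current 1MB setting, for this session only
EXPLAIN SELECT uid FROM locks
WHERE uid NOT IN (SELECT uid FROM locks INNER JOIN wys USING (uid, login));
-- if the bump was enough, the Filter line should now read
-- "(NOT (hashed subplan))" instead of showing a Materialize subplan
```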

> The machine has 48 MB total RAM and is a Xen host.

48MB is really not a sane amount of memory to run a modern database
in. Maybe you could make it go with sqlite or some other tiny-footprint
DBMS, but Postgres isn't focused on that case.

>> The queries are in fact not exactly equivalent, because EXCEPT
>> involves some duplicate-elimination behavior that won't happen
>> in the NOT IN formulation. So I don't apologize for your having
>> gotten different plans.

> But if use EXCEPT ALL?

Fraid not, EXCEPT ALL has yet other rules for how it deals with
duplicates.

>> Another issue is that the NOT IN will probably not do what you
>> expected if the subquery yields any NULLs.

> In this specific query I think it is not possible for the subquery to
> have NULLs,

Okay, just wanted to point out a common gotcha.
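For the archives, the gotcha in two lines; any SQL engine behaves this way, since comparing against a NULL yields unknown rather than false:

```sql
SELECT 1 AS got_row WHERE 10 NOT IN (SELECT 1);                    -- one row
SELECT 1 AS got_row WHERE 10 NOT IN (SELECT 1 UNION SELECT NULL);  -- zero rows
```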

regards, tom lane


Re: [pgsql-es-ayuda] New presentations on the wiki

--- On Wed, Jul 30, 2008, Jaime Casanova <jcasanov@systemguards.com.ec> wrote:

> From: Jaime Casanova <jcasanov@systemguards.com.ec>
> Subject: Re: [pgsql-es-ayuda] New presentations on the wiki
> To: "Julio Cesar Rodriguez Dominguez" <jurasec@gmail.com>
> Cc: "POSTGRES" <pgsql-es-ayuda@postgresql.org>
> Date: Wednesday, July 30, 2008, 1:27 am
> On Tue, Jul 29, 2008 at 3:31 PM, Julio Cesar Rodriguez
> Dominguez
> <jurasec@gmail.com> wrote:
> > Something is wrong with the PDFs, I couldn't open them.
> >
>
> I opened them (from the wiki, of course), on Ubuntu and on *cough*
> Windows *cough*
>
Very nice; on my Windows they look great, and on *cough* Fedora *cough* too.

I like them. Later I'll ask you some questions about the things I don't understand in the first one, but only once I've done the tuning tests and worked with it a fair amount; that way you'll at least know where someone new to tuning gets stuck.

Regards,
Gabriel Colina


--
TIP 7: don't forget to increase the "free space map" configuration

Re: [PERFORM] why "WHERE uid NOT IN" is so slow, and EXCEPT in the same situtation is fast?

On Wed, Jul 30, 2008 at 11:08:06PM -0400, Tom Lane wrote:
> Hmm, what have you got work_mem set to? The first one would likely
> have been a lot faster if it had hashed the subplan; which I'd have
> thought would happen with only 80K rows in the subplan result,
> except it didn't.

work_mem = 1024kB

The machine has 48 MB total RAM and is a Xen host.

> The queries are in fact not exactly equivalent, because EXCEPT
> involves some duplicate-elimination behavior that won't happen
> in the NOT IN formulation. So I don't apologize for your having
> gotten different plans.

But if use EXCEPT ALL?

> Another issue is that the NOT IN will probably not do what you
> expected if the subquery yields any NULLs.

In this specific query I think it is not possible for the subquery to
have NULLs, because it's an INNER JOIN USING (the_only_column_in_the_result,
some_other_column_also). If the "uid" column of any row were NULL, it
wouldn't appear in that INNER JOIN, no?

--
Miernik
http://miernik.name/


Re: [PERFORM] why "WHERE uid NOT IN" is so slow, and EXCEPT in the same situtation is fast?

Miernik <public@public.miernik.name> writes:
> Two queries which do the same thing, first one takes ages to complete
> (did wait several minutes and cancelled it), while the second one took
> 9 seconds? Don't they do the same thing?

Hmm, what have you got work_mem set to? The first one would likely
have been a lot faster if it had hashed the subplan; which I'd have
thought would happen with only 80K rows in the subplan result,
except it didn't.

The queries are in fact not exactly equivalent, because EXCEPT
involves some duplicate-elimination behavior that won't happen
in the NOT IN formulation. So I don't apologize for your having
gotten different plans. But you should have gotten a plan less
awful than that one for the NOT IN.

Another issue is that the NOT IN will probably not do what you
expected if the subquery yields any NULLs.

regards, tom lane


Re: [HACKERS] Plans for 8.4

"Henry B. Hotz" <hbhotz@oxy.edu> writes:
> What's the time frame for 8.4?

http://wiki.postgresql.org/wiki/PostgreSQL_8.4_Development_Plan

Executive summary: new-feature patches had better be submitted before
1 November.

> I'm making no promises, but what would people think of a hostgss hba
> option?

Not qualified to comment on the usefulness of this ...

regards, tom lane


[GENERAL] Shared object "libpq.so.3" not found

I'm running FreeBSD 6.2 and I have Postgresql 8.2.4 installed with
Perl v5.8.8. I'm trying to test DBD-Pg-2.8.7 after compilation and I
get this error after 'make test':

PGINITDB="/usr/local/pgsql/bin/initdb" PERL_DL_NONLAZY=1 /usr/bin/perl
  "-MExtUtils::Command::MM" "-e" "test_harness(0, 'blib/lib', 'blib/arch')" t/*.t
t/00-signature......skipped
        all skipped: Set the environment variable TEST_SIGNATURE to enable this test
t/00basic...........ok 1/3
#   Failed test 'use DBD::Pg;'
t/00basic...........NOK 2#   in t/00basic.t at line 14.
#   Tried to use 'DBD::Pg'.
#   Error:  Can't load '/usr/home/markuser/DBD-Pg-2.8.7/blib/arch/auto/DBD/Pg/Pg.so' for module DBD::Pg: Shared object "libpq.so.3" not found, required by "Pg.so" at /usr/local/lib/perl5/5.8.8/mach/DynaLoader.pm line 230.
#   at (eval 8) line 2
# Compilation failed in require at (eval 8) line 2.
# BEGIN failed--compilation aborted at t/00basic.t line 14.
FAILED--Further testing stopped: Cannot continue without DBD::Pg
*** Error code 2

Stop in /usr/home/markuser/DBD-Pg-2.8.7.
-------------------------------------------------------------------------
At first, during 'make test', I was getting an 'Undefined symbol
"PQserverVersion"' error, and I believe it complained about not being able
to find libpq.so.5. I fixed that by putting '/usr/local/pgsql/lib' in
ld.so.conf and running ldconfig so that it could find that file. Then I
also included '/usr/local/lib' in that file, which is the path to
libpq.so.3, but it's acting like it still can't find it, as you see above.
It seems that I can't prioritize between searching these two directories:
the errors vary between not finding libpq.so.5 and not finding libpq.so.3,
and I can't seem to provide both library files at the same time. Using the
LD_LIBRARY_PATH environment variable doesn't seem to help either; I've
included both library directories (separated by a colon, if that's
correct) and it's no help.

Now, of course, I've googled this and nothing is helping at this
point. Does anyone have any other pointers? Is this a FreeBSD thing
possibly? I'd *much* appreciate some help here!


[COMMITTERS] npgsql - Npgsql2: Updates to SQL generation.

Log Message:
-----------
Updates to SQL generation. It now seems to pass all tests that don't require a lateral join (CROSS APPLY in SQL Server).

Modified Files:
--------------
Npgsql2/src/Npgsql/SqlGenerators:
SqlSelectGenerator.cs (r1.6 -> r1.7)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/npgsql/Npgsql2/src/Npgsql/SqlGenerators/SqlSelectGenerator.cs.diff?r1=1.6&r2=1.7)
SqlUpdateGenerator.cs (r1.4 -> r1.5)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/npgsql/Npgsql2/src/Npgsql/SqlGenerators/SqlUpdateGenerator.cs.diff?r1=1.4&r2=1.5)
SqlDeleteGenerator.cs (r1.4 -> r1.5)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/npgsql/Npgsql2/src/Npgsql/SqlGenerators/SqlDeleteGenerator.cs.diff?r1=1.4&r2=1.5)
SqlInsertGenerator.cs (r1.6 -> r1.7)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/npgsql/Npgsql2/src/Npgsql/SqlGenerators/SqlInsertGenerator.cs.diff?r1=1.6&r2=1.7)
SqlBaseGenerator.cs (r1.11 -> r1.12)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/npgsql/Npgsql2/src/Npgsql/SqlGenerators/SqlBaseGenerator.cs.diff?r1=1.11&r2=1.12)
VisitedExpression.cs (r1.6 -> r1.7)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/npgsql/Npgsql2/src/Npgsql/SqlGenerators/VisitedExpression.cs.diff?r1=1.6&r2=1.7)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

[pgsql-es-ayuda] how to make the logs record the client machine's IP

Dear friends, how can I make the logs include, along with the date, time, and description, the machine that caused the problem?

With nothing more to add, best regards,

Cesar Erices Vergara

Ingeniero en Gestión Informática (Computer Management Engineer)

Analista de Sistemas (Systems Analyst)
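The usual knob for this is log_line_prefix in postgresql.conf; a sketch (values are illustrative):

```ini
# postgresql.conf
log_line_prefix = '%t %h '   # %t = timestamp, %h = remote host name or IP
log_connections = on         # also log each connecting host
```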

 

Re: [HACKERS] Plans for 8.4

* Henry B. Hotz (hbhotz@oxy.edu) wrote:
> I'm making no promises, but what would people think of a hostgss hba
> option?

As described, sounds like a win to me. It'd be very nice to be able to
just use GSSAPI encryption on the link. That, combined w/ Magnus' work
on username/princ mappings, would really bring PostgreSQL up to date wrt
GSSAPI support.

It'd really be great to have this support in the ODBC and JDBC drivers
too.. I think in JDBC it might 'just work', I'm less sure about ODBC.

As a practical question- would you really need a seperate explicit
pg_hba option for it? It'd be nice to be able to require it, if
desired, but that strikes me as more sensible as an option to the 'gss'
auth mechanism?

Thanks!

Stephen

[PERFORM] why "WHERE uid NOT IN" is so slow, and EXCEPT in the same situtation is fast?

Two queries which do the same thing, first one takes ages to complete
(did wait several minutes and cancelled it), while the second one took
9 seconds? Don't they do the same thing?

miernik=> EXPLAIN SELECT uid FROM locks WHERE uid NOT IN (SELECT uid FROM locks INNER JOIN wys USING (uid, login));
                                                              QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------
 Seq Scan on locks  (cost=38341.39..61365389.71 rows=48446 width=4)
   Filter: (NOT (subplan))
   SubPlan
     ->  Materialize  (cost=38341.39..39408.47 rows=79508 width=4)
           ->  Hash Join  (cost=3997.27..37989.89 rows=79508 width=4)
                 Hash Cond: (((wys.uid)::integer = (locks.uid)::integer) AND ((wys.login)::text = (locks.login)::text))
                 ->  Seq Scan on wys  (cost=0.00..13866.51 rows=633451 width=16)
                 ->  Hash  (cost=2069.91..2069.91 rows=96891 width=16)
                       ->  Seq Scan on locks  (cost=0.00..2069.91 rows=96891 width=16)
(9 rows)

Time: 231,634 ms
miernik=> EXPLAIN SELECT uid FROM locks EXCEPT (SELECT uid FROM locks INNER JOIN wys USING (uid, login));
                                                                   QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------------------
 SetOp Except  (cost=59306.12..60188.11 rows=17640 width=4)
   ->  Sort  (cost=59306.12..59747.12 rows=176399 width=4)
         Sort Key: "*SELECT* 1".uid
         ->  Append  (cost=0.00..41823.79 rows=176399 width=4)
               ->  Subquery Scan "*SELECT* 1"  (cost=0.00..3038.82 rows=96891 width=4)
                     ->  Seq Scan on locks  (cost=0.00..2069.91 rows=96891 width=4)
               ->  Subquery Scan "*SELECT* 2"  (cost=3997.27..38784.97 rows=79508 width=4)
                     ->  Hash Join  (cost=3997.27..37989.89 rows=79508 width=4)
                           Hash Cond: (((wys.uid)::integer = (locks.uid)::integer) AND ((wys.login)::text = (locks.login)::text))
                           ->  Seq Scan on wys  (cost=0.00..13866.51 rows=633451 width=16)
                           ->  Hash  (cost=2069.91..2069.91 rows=96891 width=16)
                                 ->  Seq Scan on locks  (cost=0.00..2069.91 rows=96891 width=16)
(12 rows)

Time: 1479,238 ms
miernik=>

--
Miernik
http://miernik.name/



Re: [GENERAL] archive_timeout, checkpoint_timeout

I was referring to this post:
http://archives.postgresql.org/pgsql-hackers/2007-10/msg01361.php

The test database was completely idle. WAL files were only being
archived at the interval specified in checkpoint_timeout (I was using
the default value) -- archive_timeout didn't make them happen any faster.

Upon retesting, archive_timeout is working properly when there are
regular updates happening to the database.

Thanks for your help!
--Rob Adams


Tom Lane wrote:
> Rob Adams <robfadams@cox.net> writes:
>> archive_timeout only seems to work if it's >= checkpoint_timeout.
>
> Hmm, no, they should be pretty independent. Define "seems to work"
> please?
>
> One possible connection is that an xlog file switch will not actually
> happen unless some xlog output has been generated since the last switch.
> If you were watching an otherwise-idle system then maybe the checkpoint
> records are needed to make it look like a switch is needed. OTOH if
> it's *that* idle then the checkpoints should be no-ops too. So we
> need a bit more context to understand what's happening. How often
> do real updates happen on your database?
>
> regards, tom lane
>


Re: [HACKERS] Plans for 8.4

What's the time frame for 8.4?

I'm making no promises, but what would people think of a hostgss hba
option?

Using it would imply the gssapi/sspi authentication option. It would
be mutually exclusive of the ssl link-encryption option. It would
support strong encryption of the whole connection without the need to
get X509 certs deployed (which would be a big win if you're using
gssapi/sspi authentication anyway).

The thing that prevented me from including it in the gssapi patches I
did for 8.3 was that I couldn't disentangle the program logic to the
point of inserting the gssapi security layer code above the SSL code
and below everything else. I'm thinking that doing both is pretty
much an edge case, so I propose to do gssapi security layers instead
of SSL. The mods are a lot more obvious.

I'm *NOT* proposing to make build support of gssapi security layers
exclusive of SSL. You might, for example, configure a server to
support username/password over SSL for intra-net addresses, but
support gssapi for Internet addresses.
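For concreteness, the intranet/Internet split described above might look like this in pg_hba.conf (the hostgss record type is only proposed here, not an existing option; the addresses are placeholders):

```
# intranet clients: username/password auth over an SSL-protected link
hostssl  all  all  192.168.0.0/16  md5
# everyone else: GSSAPI auth plus a GSSAPI security layer (proposed)
hostgss  all  all  0.0.0.0/0       gss
```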

------------------------------------------------------
The opinions expressed in this message are mine,
not those of Caltech, JPL, NASA, or the US Government.
Henry.B.Hotz@jpl.nasa.gov, or hbhotz@oxy.edu


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[ADMIN] pgAdmin III

I've installed PostgreSQL 8.3 on Ubuntu 8.04.  When I click on the pgAdmin III link, nothing happens.  I'm a newbie here, any help would be appreciated...

- Jim

Re: [GENERAL] archive_timeout, checkpoint_timeout

Rob Adams <robfadams@cox.net> writes:
> archive_timeout only seems to work if it's >= checkpoint_timeout.

Hmm, no, they should be pretty independent. Define "seems to work"
please?

One possible connection is that an xlog file switch will not actually
happen unless some xlog output has been generated since the last switch.
If you were watching an otherwise-idle system then maybe the checkpoint
records are needed to make it look like a switch is needed. OTOH if
it's *that* idle then the checkpoints should be no-ops too. So we
need a bit more context to understand what's happening. How often
do real updates happen on your database?

regards, tom lane


Re: [GENERAL] Connecting to an existing transaction state.

Alvaro Herrera wrote:
> Alex Gen wrote:
>> Hello,
>>
>> I'm in the process of creating a set of scripts for testing certain locking features in an application.
>> What I would like to do:
>> 1. Start a connection from machine-01 through the m01-s1.sql script.
> >> 2. While (1) is running, start another transaction on the same database from machine-02 using m02-s1.sql.
>>
>> At this point in time, there are two open transactions on certain tables in the same database.
>>
>> 3. Using m01-s2.sql I would like to execute a certain SQL statement – BUT within the scope of the transaction begun by m01-s1.sql.
>> 4. Current situation: Since there are several .sql scripts, each getting its own connection and executing sql stmts – they are not aware of activities of the other scripts (i.e. the open transactions).
>> 5. What I'd like to do: After a transaction has been started from a machine, I should be able to save the transaction reference (id?) temporarily somewhere.
>> 6. The next statement (new .sql file) that wishes to execute within the scope of the above transaction – should be able to get the transaction reference (id) and latch onto it in its current state. This way it continues to perform as part of a whole – rather than only executing the statements that it had.
>
> You cannot do this directly by simply using SQL files being fed to psql.
> What you can do is open one psql process reading from a named pipe, and
> write SQL commands to this pipe.

Yep. You can also write a file that sources the others, and use psql to
load that. Eg:

BEGIN;
\i m01-s1.sql
\i m01-s2.sql
COMMIT;

Another option is to write a wrapper program using one of the scripting
languages with PostgreSQL interfaces (Python, Perl, etc) that
establishes a connection then loads a sequence of snippets and sends
them. That's a little more flexible, but not much more complicated.

--
Craig Ringer


Re: [BUGS] BUG #4335: Error w/ PostgreSQL & EnterpriseDB Stack Builder

Mary Kealey wrote:
> The following bug has been logged online:
>
> Bug reference: 4335
> Logged by: Mary Kealey
> Email address: mary.t.freesmeyer@jpmchase.com
> PostgreSQL version: 8.3
> Operating system: Windows XP
> Description: Error w/ PostgreSQL & EnterpriseDB Stack Builder
> Details:
>
> I launched PostgreSQL & EnterpriseDB Stack Builder and get the error "Failed
> to open the application list: http://www.postgresql.org/applications.xml
> Error: The URL specified could not be opened."

Do you have a software firewall? If so, is it configured to permit the
stack builder to access the Internet?

Are you behind a proxy server that might limit access to some files?

Did you do a tcpdump or use wireshark to record the network
communication between client and server? If not, consider doing so.

--
Craig Ringer

--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs

[GENERAL] archive_timeout, checkpoint_timeout

We want to use an archive_timeout of ~1min to minimize data loss in the
event of hardware failure (archive script includes external b/u).

archive_timeout only seems to work if it's >= checkpoint_timeout.

Could someone please explain in layman's terms the implications of using
a checkpoint_timeout of ~1min as well? Is it a bad idea?

We use PostgreSQL 8.3 on Windows.

Thanks,
Rob Adams


Re: [PERFORM] Difference between "Explain analyze" and "\timing"

Bill Moran <wmoran@collaborativefusion.com> writes:
> tarcizioab@c3sl.ufpr.br wrote:
>> The commands "\timing" and "EXPLAIN ANALYZE" return values related to the
>> time of execution of the instruction. These values are "Time:" and "Total
>> runtime:" respectively. What is the difference between these values, and
>> the specific use of each command in queries?

> The time reported by explain will be the time it took the server to
> execute the query.

> The time reported by \timing is the time it takes the client (psql) to
> get the result. This will be the time the server took, plus any network
> delays and time required to process the query and result on the client
> side.

Also realize that explain analyze only reports the time to *execute*
the query. Not counted are the time to parse and plan the query
beforehand and prepare the explain output afterwards. But psql's number
is the whole round-trip time and so includes all that stuff.
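A quick way to see the two numbers side by side from psql (the table name is a placeholder):

```sql
\timing on
EXPLAIN ANALYZE SELECT count(*) FROM some_table;
-- "Total runtime:" (last line of the plan) measures execution only;
-- "Time:" (printed by \timing) is the full round trip: parse, plan,
-- execute, EXPLAIN output formatting, network, and client processing.
```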

If any of this seems confusing, you might find it worth reading
this overview of the Postgres system structure:
http://www.postgresql.org/docs/8.3/static/overview.html

regards, tom lane

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

Re: [PERFORM] Difference between "Explain analyze" and "\timing"

tarcizioab@c3sl.ufpr.br wrote:
>
> Hello friends
>
> The commands "\timing" and "EXPLAIN ANALYZE" return values related to the
> time of execution of the instruction. These values are "Time:" and "Total
> runtime:" respectively. What is the difference between these values, and
> the specific use of each command in queries?

The time reported by explain will be the time it took the server to
execute the query.

The time reported by \timing is the time it takes the client (psql) to
get the result. This will be the time the server took, plus any network
delays and time required to process the query and result on the client
side.

--
Bill Moran
Collaborative Fusion Inc.

wmoran@collaborativefusion.com
Phone: 412-422-3463x4023



Re: [NOVICE] How do I specify an interval in a function?

"Rob Richardson" <Rob.Richardson@rad-con.com> writes:
> In the help file under date and time functions, I see that intervals can be specified as "interval '3 hours' ". In a PgAdmin SQL window, I can enter "select interval '3 hours' ", and it will return me "03:00:00", as expected. I can also enter "select '3 hours'::interval", and get the same result. Yet neither syntax works inside a function.

> declare
> ThreeHours interval;
> begin
> ThreeHours = interval '3 hours'; -- throws a syntax error
> ThreeHours = '3 hours'::interval; -- also throws a syntax error
> end;

Either of those should work. I think your problem is that you're not
quoting the whole function body correctly. Remember that the function
body is itself a string constant. So if you were to try to write this
as a single-quoted string, you'd need to double those embedded quotes:

CREATE FUNCTION ... AS '
declare
ThreeHours interval;
begin
ThreeHours = interval ''3 hours''; -- no longer an error
ThreeHours = ''3 hours''::interval; -- no longer an error
end;
' LANGUAGE plpgsql;

In any reasonably modern version of PG, there's a string constant syntax
called "dollar quoting", which was invented specifically to make this
less painful:

CREATE FUNCTION ... AS $$
declare
ThreeHours interval;
begin
ThreeHours = interval '3 hours'; -- no longer an error
ThreeHours = '3 hours'::interval; -- no longer an error
end;
$$ LANGUAGE plpgsql;

If you need to use '$$' inside the function body, you could instead
use $func$ or something like that as the outer quoting boundaries.

(BTW, I would think that PgAdmin could handle these quoting details
for you, but I'm really not very familiar with it. Are you editing
the function in a window that's specifically for function editing?
If you're just typing the CREATE FUNCTION command as-is in a command
window, then you'll have to deal with the nested-quoting issues for
yourself.)

regards, tom lane

--
Sent via pgsql-novice mailing list (pgsql-novice@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-novice

[GENERAL] boolean short-circuiting in plpgsql

Hi everyone,

I may be missing something obvious, but it seems like the advice in
4.2.12 on http://www.postgresql.org/docs/8.3/interactive/sql-expressions.html
doesn't apply in plpgsql.

I have a table that I want to use a trigger on when either a new row
is inserted or at least one of two particular columns is updated.
This fails on insert:

begin
if TG_OP = 'INSERT' or (new.sortnum != old.sortnum or new.parent !=
old.parent) then
perform recalc_sortnumpath(new.id);
end if;
return new;
end;

...because 'old' doesn't exist and the latter argument of the 'or'
gets evaluated despite the TG_OP being 'INSERT'. According to the
docs I should change that line to:

if (select case when TG_OP = 'UPDATE' then (new.sortnum != old.sortnum
or new.parent != old.parent) else 't' end) then

...because the case should force it to only evaluate 'old' when TG_OP
= 'UPDATE' and otherwise ('INSERT') skip through to 't'. But this
causes the same error on insert. I suspect it's because the select
query gets parameterized and at that point the 'old' is missing,
before the case even gets to be parsed. How do I get around this
without having two 'perform' statements? Is there no short-circuit
option in plpgsql?
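For what it's worth, the usual workaround is a nested IF so that OLD is only referenced on UPDATE; a sketch of the trigger body (column and function names taken from the example above, at the cost of the second PERFORM the question hoped to avoid):

```sql
begin
    if TG_OP = 'INSERT' then
        perform recalc_sortnumpath(new.id);
    elsif new.sortnum != old.sortnum or new.parent != old.parent then
        -- only reached when TG_OP = 'UPDATE', so OLD is defined here
        perform recalc_sortnumpath(new.id);
    end if;
    return new;
end;
```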

Thanks,
Kev


[BUGS] BUG #4336: Strange behaviour for to_ascii

The following bug has been logged online:

Bug reference: 4336
Logged by: Maxime Carbonneau
Email address: manitou@maikan.com
PostgreSQL version: 8.3.3
Operating system: Mac OS X 10.5.4
Description: Strange behaviour for to_ascii
Details:

With PostgreSQL 8.2.5, I used to_ascii to remove accents:
SELECT TO_ASCII(CONVERT('école' USING utf8_to_iso_8859_15), 'latin9');
=> ecole
With PostgreSQL 8.3.3, I had to make some changes since convert using is
gone. I'm using: SELECT
to_ascii(encode(convert_to('école','LATIN9'),'escape'),'LATIN9');
but the result is
=> \351cole

How can I remove accent letter with PostgreSQL 8.3.3?


Re: [GENERAL] Clone a database to other machine

Hi ,

Actually, I was looking for some method/tool (utility) which keeps both
databases on the different servers in sync automatically.

I was hoping for some feature in the postgresql.conf file, if possible.


Thanks and regards,

Manjit Garg


-----Original Message-----
From: Raymond O'Donnell [mailto:rod@iol.ie]
Sent: Wednesday, July 30, 2008 8:22 PM
To: Garg, Manjit
Cc: lists@stringsutils.com; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Clone a database to other machine

On 28/07/2008 18:32, Garg, Manjit wrote:

> But, actually I want to keep both the Database in Sync. I want the clone db
> to get the data from the Master at certain intervals.
>
> DB dump size is 3 GB.

In that case, a cron job which dumps the data from the master and
reloads it on the other machine may be your best bet.
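A minimal sketch of such a crontab entry (host and database names are placeholders; assumes password-less access via .pgpass on both ends):

```
# nightly at 02:00: plain-format dump piped straight into the clone;
# --clean drops and recreates the objects on the receiving side
0 2 * * * pg_dump --clean -h master-host mydb | psql -q -h clone-host mydb
```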

Ray.

------------------------------------------------------------------
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
rod@iol.ie Galway Cathedral Recitals:
http://www.galwaycathedral.org/recitals
------------------------------------------------------------------


Re: [GENERAL] Must be table owner to truncate?

On mið, 2008-07-30 at 07:36 -0400, Kevin Hunter wrote:
> At 3:45p -0400 on Mon, 28 Jul 2008, Said Ramirez wrote:
> > According to the documentation,
> > http://www.postgresql.org/docs/current/interactive/sql-truncate.html ,
> > only the owner can truncate a table. Which means the non-owner must
> > either log in/ switch roles as the owner, or they can just run a DELETE.
>
> Well that's interesting. From a security standpoint, what's the
> difference between an unqualified DELETE and a TRUNCATE?

Lack of triggers and RULEs springs to mind.

gnari


Re: [PERFORM] Database size Vs performance degradation

-----Original Message-----
From: pgsql-performance-owner@postgresql.org
[mailto:pgsql-performance-owner@postgresql.org] On Behalf Of Matthew
Wakeling
Sent: July 30, 2008 8:37 AM
To: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Database size Vs performance degradation

On Wed, 30 Jul 2008, Dave North wrote:
> Running on HP DL380 w/ 4GB RAM, dual 10K HDDs in RAID 0+1

> Checking the stats, the DB size is around 7.5GB;

Doesn't fit in RAM.

> ...after the load, the DB size was around 2.7GB

Does fit in RAM.

> One observation I've made on the DB system is the disk I/O seems
> dreadfully slow...we're at around 75% I/O wait sometimes and the read
> rates seem quite slow (hdparm says around 2.2MB/sec - 20MB/sec for
> un-cached reads).

That's incredibly slow in this day and age, especially from 10krpm HDDs.

Definitely worth investigating.

DN: Yeah, I was thinking the same thing. Unlike the folks here, I'm no
performance whiz but it did seem crazy slow. Given the 10K disks, it
seems to me there is most likely something on the RAID Array itself that
is set sub-optimally. Next thing to look at.

However, I think vacuuming more aggressively is going to be your best win
at the moment.

DN: As I just replied to the past (very helpful) chap, I think I need to
go see what exactly the vac is vac'ing (autovac that is) because
although it's running super frequently, the big question is "is it doing
anything" :)

Cheers

Dave


Re: [PERFORM] Database size Vs performance degradation

-----Original Message-----
From: Richard Huxton [mailto:dev@archonet.com]
Sent: July 30, 2008 8:28 AM
To: Dave North
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Database size Vs performance degradation

Dave North wrote:
> Morning folks,
> Long time listener, first time poster.

Hi Dave

> Postgres 8.1.8
> shared_buffers = 2000
> max_fsm_pages = 400000
> Redhat Enterprise 4
> Running on HP DL380 w/ 4GB RAM, dual 10K HDDs in RAID 0+1
> Also running on the server is a tomcat web server and other ancillaries

The value of 2000 seems a bit low for shared_buffers perhaps. Oh, and
8.1.13 seems to be the latest bugfix for 8.1 too.

DN: Yeah, I was thinking the same. I spent several hours reading info
on this list and other places and it's highly inconclusive about having
high or low shared buffs Vs letting the OS disk cache handle it.

> Now, the problem. We have an application that continually writes a
> bunch of data to a few tables which is then deleted by a batch job
> each night. We're adding around 80,000 rows to one table per day and
> removing around 75,000 that are deemed to be "unimportant".
[snip]
> We had this problem around a month ago and again yesterday. Because
> the application needs reasonably high availability, we couldn't full
> vacuum so what we did was a dump and load to another system. What I
> found here was that after the load, the DB size was around 2.7GB - a
> decrease of 5GB. Re-loading this back onto the main system, and the
world is good.

Well, that's pretty much the definition of bloat. Are you sure you're
vacuuming enough?

DN: Well, the auto-vac is kicking off pretty darn frequently...around
once every 2 minutes. However, you just made me think of the obvious -
is it actually doing anything?! The app is pretty darn write intensive
so I wonder if it's actually able to vacuum the tables?

I don't have an 8.1 to hand at the moment, but a "vacuum verbose" in
8.2+ gives some details at the end about how many free-space slots need
to be tracked. Presumably you're not tracking enough of them, or your
vacuuming isn't actually taking place.

DN: I think you've hit it. Now the next obvious problem is how to make
the vac actually vac while maintaining a running system?

Check the size of your database every night. It will rise from 2.7GB,
but it should stay roughly static (apart from whatever data you add of
course). If you can keep it so that most of the working-set of your
database fits in RAM speed will stay just fine.

DN: Yep, I'm just implementing a size tracker now to keep a track on it.
It grew from the 2.5GB to 7GB in around a month so it's pretty easy to
see big jumps I'd say. Does the auto-vac log its results somewhere by
any chance do you know?

Fantastic post, thanks so much.

Dave

> Yes, I know we need to upgrade to 8.3 but that's going to take some
> time
> :)

I think you'll like some of the improvements, but it's probably more
important to get 8.1.13 installed soon-ish.

--
Richard Huxton
Archonet Ltd


Re: [PERFORM] Database size Vs performance degradation

On Wed, 30 Jul 2008, Dave North wrote:
> Running on HP DL380 w/ 4GB RAM, dual 10K HDDs in RAID 0+1

> Checking the stats, the DB size is around 7.5GB;

Doesn't fit in RAM.

> ...after the load, the DB size was around 2.7GB

Does fit in RAM.

> One observation I've made on the DB system is the disk I/O seems
> dreadfully slow...we're at around 75% I/O wait sometimes and the read
> rates seem quite slow (hdparm says around 2.2MB/sec - 20MB/sec for
> un-cached reads).

That's incredibly slow in this day and age, especially from 10krpm HDDs.
Definitely worth investigating.

However, I think vacuuming more aggressively is going to be your best win
at the moment.

Matthew

--
Patron: "I am looking for a globe of the earth."
Librarian: "We have a table-top model over here."
Patron: "No, that's not good enough. Don't you have a life-size?"
Librarian: (pause) "Yes, but it's in use right now."


Re: [PERFORM] Database size Vs performance degradation

Dave North wrote:
> Morning folks,
> Long time listener, first time poster.

Hi Dave

> Postgres 8.1.8
> shared_buffers = 2000
> max_fsm_pages = 400000
> Redhat Enterprise 4
> Running on HP DL380 w/ 4GB RAM, dual 10K HDDs in RAID 0+1
> Also running on the server is a tomcat web server and other ancillaries

The value of 2000 seems a bit low for shared_buffers perhaps. Oh, and
8.1.13 seems to be the latest bugfix for 8.1 too.

> Now, the problem. We have an application that continually writes a
> bunch of data to a few tables which is then deleted by a batch job each
> night. We're adding around 80,000 rows to one table per day and
> removing around 75,000 that are deemed to be "unimportant".
[snip]
> We had this problem around a month ago and again yesterday. Because the
> application needs reasonably high availability, we couldn't full vacuum
> so what we did was a dump and load to another system. What I found here
> was that after the load, the DB size was around 2.7GB - a decrease of
> 5GB. Re-loading this back onto the main system, and the world is good.

Well, that's pretty much the definition of bloat. Are you sure you're
vacuuming enough? I don't have an 8.1 to hand at the moment, but a
"vacuum verbose" in 8.2+ gives some details at the end about how many
free-space slots need to be tracked. Presumably you're not tracking
enough of them, or your vacuuming isn't actually taking place.

Check the size of your database every night. It will rise from 2.7GB,
but it should stay roughly static (apart from whatever data you add of
course). If you can keep it so that most of the working-set of your
database fits in RAM speed will stay just fine.
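A nightly check can be done with the built-in size functions (available in 8.1 and later); for example:

```sql
-- overall database size
SELECT pg_size_pretty(pg_database_size(current_database()));

-- ten largest tables, counting indexes and TOAST, to spot bloat early
SELECT relname, pg_size_pretty(pg_total_relation_size(oid))
FROM pg_class
WHERE relkind = 'r'
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 10;
```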

> Yes, I know we need to upgrade to 8.3 but that's going to take some time
> :)

I think you'll like some of the improvements, but it's probably more
important to get 8.1.13 installed soon-ish.

--
Richard Huxton
Archonet Ltd


[PERFORM] Database size Vs performance degradation

Morning folks,
Long time listener, first time poster. Having an interesting
problem related to performance which I'll try and describe below and
hopefully get some enlightenment. First the environment:


Postgres 8.1.8
shared_buffers = 2000
max_fsm_pages = 400000
Redhat Enterprise 4
Running on HP DL380 w/ 4GB RAM, dual 10K HDDs in RAID 0+1
Also running on the server is a tomcat web server and other ancillaries

Now, the problem. We have an application that continually writes a
bunch of data to a few tables which is then deleted by a batch job each
night. We're adding around 80,000 rows to one table per day and
removing around 75,000 that are deemed to be "unimportant". Now, the
problem we see is that after a period of time, the database access
becomes very 'slow' and the load avg on the machine gets up around 5.
When this happens, the application using the DB basically grinds to a
halt. Checking the stats, the DB size is around 7.5GB; no tables or
indexes look to be 'bloated' (we have been using psql since 7.3 with the
classic index bloat problem) and the auto-vac has been running solidly.

We had this problem around a month ago and again yesterday. Because the
application needs reasonably high availability, we couldn't full vacuum
so what we did was a dump and load to another system. What I found here
was that after the load, the DB size was around 2.7GB - a decrease of
5GB. Re-loading this back onto the main system, and the world is good.

One observation I've made on the DB system is the disk I/O seems
dreadfully slow...we're at around 75% I/O wait sometimes and the read
rates seem quite slow (hdparm says around 2.2MB/sec - 20MB/sec for
un-cached reads). I've also observed that the OS cache seems to be
using all of the remaining memory for its cache (around 3GB) which
seems probably the best it can do with the available memory.

Now, clearly we need to examine the need for the application to write
and remove so much data but my main question is:

Why does the size of the database with so much "un-used" space seem to
impact performance so much? If (in this case) the extra 5GB of space is
essentially "unallocated", does it factor into any of the caching or
performance metrics that the DBMS uses? And if so, would I be better
having a higher shared_buffers rather than relying so much on OS cache?

Yes, I know we need to upgrade to 8.3 but that's going to take some time
:)

Many thanks in advance.

Dave

___
Dave North
dnorth@signiant.com
Signiant - Making Media Move
Visit Signiant at: www.signiant.com <http://www.signiant.com/>



Re: [GENERAL] Must be table owner to truncate?

At 3:45p -0400 on Mon, 28 Jul 2008, Said Ramirez wrote:
> According to the documentation,
> http://www.postgresql.org/docs/current/interactive/sql-truncate.html ,
> only the owner can truncate a table. Which means the non-owner must
> either log in/ switch roles as the owner, or they can just run a DELETE.

Well that's interesting. From a security standpoint, what's the
difference between an unqualified DELETE and a TRUNCATE?

Also interesting to note that TRUNCATE is transaction safe, but not MVCC
safe. Good to know, good to know ...

Kevin


Re: [NOVICE] How to copy a schema under another name in same database

Never had a need for this. One thought that comes to mind is to write a
plpgsql function that takes the names of the new and old schemas and
does something like the below:

--Create new tables in new schema and copy the data across

FOR tbltocopy IN (SELECT table_name
    FROM information_schema.tables
    WHERE table_type = 'BASE TABLE'
      AND table_schema = param_source_schema
    ORDER BY table_name)
LOOP
    EXECUTE 'CREATE TABLE ' || param_dest_schema || '.' || tbltocopy.table_name
        || ' (LIKE ' || param_source_schema || '.' || tbltocopy.table_name
        || ' INCLUDING DEFAULTS INCLUDING CONSTRAINTS INCLUDING INDEXES)';
    EXECUTE 'INSERT INTO ' || param_dest_schema || '.' || tbltocopy.table_name
        || ' SELECT * FROM ' || param_source_schema || '.' || tbltocopy.table_name;
END LOOP;


Hope that helps,
Regina



-----Original Message-----
From: pgsql-novice-owner@postgresql.org on behalf of Daniel Gour
Sent: Tue 7/29/2008 9:20 AM
To: pgsql-novice@postgresql.org
Subject: [NOVICE] How to copy a schema under another name in same
database

Hello! I have a PostgreSQL (8.3.3, WinXP + linux) database with multiple
schemas. I would like, on a regular basis, to be able to copy the
structure and data of one schema under a new schema, using of course a
different name. What would be the easiest way?

- I have seen the TODO entry "Add CREATE SCHEMA ... LIKE that copies
a schema", so I guess an easy solution (a single pgsql command) is not
yet available...

- I guess the alternative is to use pg_dump to dump a single schema
and use pg_restore, but how can I restore the dumped information under
another schema? I would like to avoid dumping it as an sql script and
having to modify it manually, because this will be a regular operation
that I would like to automate in my application.

Thanks in advance for any insight you can provide!

---------------------------------
Daniel Gour
Adacel Inc.









Re: [HACKERS] Type Categories for User-Defined Types

"Tom Lane" <tgl@sss.pgh.pa.us> writes:

> So this can fairly obviously be replaced by two new pg_type columns,
> say "typcategory" and "typpreferred", where the latter is a bool.
> Since the list of categories is pretty short and there's no obvious
> reason to extend it a lot, I propose that we just represent typcategory
> as a "char", using a mapping along the lines of
>
> BITSTRING_TYPE b
> BOOLEAN_TYPE B
> DATETIME_TYPE D
> GENERIC_TYPE P (think "pseudotype")
> GEOMETRIC_TYPE G
> INVALID_TYPE \0 (not allowed in catalog anyway)
> NETWORK_TYPE n
...

I had a different thought when you posted the original prompt earlier. Instead
of a static list of type categories we could re-use type oids. So if you
belong to a type category you store the oid of the preferred type of that
category in typcategory.

I can't help thinking from the list above that there's nothing special about
datetime, geometric, and network data types that some user defined set of
types wouldn't necessarily want to define.

I do agree that having SQL commands to create new type categories, even a new
catalog table is overkill, but not because we wouldn't want to create new
ones. Just because there isn't really any other meta data we want to store
about type categories. Aside from the preferred type and the members there
isn't anything more to say about them.

--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's Slony Replication support!


Re: [HACKERS] Protocol 3, Execute, maxrows to return, impact?

Stephen R. van den Berg wrote:
>A.M. wrote:
>>Anyway, what does "The driver beats libpq in speed by about 62%" mean?

>I.e. speed comparison is 762/483 = 1.63 -> 63% faster.

After some tuning on the libpq side, I can say that:

If libpq is called using a cursor and fetches in 64 row chunks the speed
difference with my driver above is correct.
If libpq is called *not* using a cursor and fetches all (1000) rows in one
go, the new comparison becomes:

583/483 = 1.21, which still makes my driver 21% faster than libpq native.

Please note that my driver automatically chunks up the query using
multiple chained Execute(dynamically calculated limit) and allows
fetching the rows on demand and interleaving portals.
My driver uses a 32KByte buffer per active portal.
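The cursor-style fetching being compared can be reproduced with plain SQL, e.g. from psql (the query is a placeholder; 64 matches the chunk size used in the test):

```sql
BEGIN;
DECLARE c NO SCROLL CURSOR FOR SELECT * FROM some_table;
FETCH 64 FROM c;  -- repeat until a fetch returns fewer than 64 rows
CLOSE c;
COMMIT;
```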
--
Sincerely,
Stephen R. van den Berg.

How many weeks are there in a lightyear?


Re: [pgsql-fr-generale] Petit conseil

Sébastien Lardière wrote:
>
> Under Windows perhaps, but as far as I know, kernels such as Linux can
> manage memory beyond 4GB correctly, even in 32-bit mode.
>

Yes, you are quite right, with the caveat that the kernel must be
compiled with the appropriate option.
It is still a kind of mapping, if I remember correctly, so best avoided
if possible.
In any case, since all current processors are 64-bit, there is hardly
any reason left to run a server on a 32-bit kernel
(a workstation is another matter, granted).

--
Christophe Garault

Re: [JDBC] numeric type

Peter wrote:
>>> Long story. We're migrating an old Access app to Postgres but still need
>>> to be able to exchange datasets with the old app users (the app supports
>>> its own import/export in MDB format). Since the migration is expected to
>>> last several years we need some sort of automated PG->MDB thing.
>>>
>>>
>> Why don't you just make your converter configurable in how it handles
>> decimals without specs?
>>
>
> I would need to hack the Jackcess library in order to do that... besides,
> it does not seem the proper way to do it; more like an ugly hack.
> getPrecision and getScale are supposed to return the true precision and
> scale, after all...
>
> Peter
>
>
It may be easier to write a wrapper around the JDBC driver that does the
needed conversion. With java.lang.reflect.Proxy this is not a complex task.
It could look much like the following:

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class DelegatingHandler<TP> implements InvocationHandler {
    protected final TP parent;

    public DelegatingHandler(TP parent) {
        this.parent = parent;
    }

    public Object invoke(Object proxy, Method method, Object[] args)
            throws Throwable {
        try {
            // Look for a public method on the wrapper itself with the
            // same name and signature (an "override").
            Method override =
                this.getClass().getMethod(method.getName(),
                                          method.getParameterTypes());
            return override.invoke(this, args);
        } catch (NoSuchMethodException e) {
            // No override: delegate to the wrapped object.
            return method.invoke(parent, args);
        }
    }
}

public class ConnectionWrapper extends DelegatingHandler<Connection> {

    public static Connection makeConnectionWrapper(Connection connection) {
        return (Connection) Proxy.newProxyInstance(
            ConnectionWrapper.class.getClassLoader(),
            new Class[]{Connection.class},
            new ConnectionWrapper(connection));
    }
    ...
    public PreparedStatement prepareStatement(String sql)
            throws SQLException {
        return makePreparedStatementWrapper(parent.prepareStatement(sql));
    }
    ...
}


You simply create a wrapper and "override" the needed methods by declaring
public methods in the wrapper with exactly the same name and signature,
calling parent instead of super where needed.
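To see the dispatch trick in isolation, here is a self-contained toy that needs no JDBC at all (the `Greeter` interface and all class names are invented for illustration): a call on the proxy is routed to the handler's own public method when one with a matching name and signature exists, and falls through to the wrapped parent otherwise.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

interface Greeter {
    String hello();
    String bye();
}

// Same dispatch idea as the DelegatingHandler above, on a toy interface.
class GreeterWrapper implements InvocationHandler {
    private final Greeter parent;

    GreeterWrapper(Greeter parent) {
        this.parent = parent;
    }

    // "Override": matches Greeter.hello() by name and signature.
    public String hello() {
        return "wrapped " + parent.hello();
    }

    public Object invoke(Object proxy, Method method, Object[] args)
            throws Throwable {
        try {
            Method override =
                getClass().getMethod(method.getName(), method.getParameterTypes());
            return override.invoke(this, args);
        } catch (NoSuchMethodException e) {
            return method.invoke(parent, args); // no override: delegate
        }
    }

    static Greeter wrap(Greeter g) {
        return (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(),
                new Class<?>[]{Greeter.class},
                new GreeterWrapper(g));
    }
}
```

One quirk to note: getMethod also finds public methods inherited from Object, so toString/hashCode/equals on the proxy would hit the handler rather than the parent; a production wrapper would want to filter those out.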

--
Sent via pgsql-jdbc mailing list (pgsql-jdbc@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-jdbc

Re: [pgsql-fr-generale] Petit conseil

Christophe Garault wrote:
> Guillaume Lelarge wrote:
>> The immediate question is: why? How does the OS matter in this
>> area? And finally, which OS do you recommend for 64-bit?
>>
> Well, because 32-bit OSes manage memory beyond 4 GB (2^32) very
> poorly, if at all. On 32-bit Windows it is already hopeless beyond
> 3 GB...
> I'm a bit ashamed to admit it here, but in the telecom field where
> I work there is unfortunately very little PostgreSQL; my experience
> is therefore limited to proprietary products. But whether on Windows
> 2003-64, Debian Etch, or RHEL 64, I have not run into the slightest
> problem.
> Note also that the OS version (or the kernel build parameters)
> matters for the maximum number of processors supported.
>
Under Windows maybe, but as far as I know, kernels like Linux can
manage memory beyond 4 GB correctly, even in 32-bit mode.

In any case, running PostgreSQL in production on Windows is not
recommended. Unless you have a particular constraint, choose a
Unix-like system.

--
Sébastien Lardière

--
Sent via pgsql-fr-generale mailing list (pgsql-fr-generale@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-fr-generale

Re: [JDBC] numeric type

>> Long story. We're migrating an old Access app to Postgres but still need
>> to be able to exchange datasets with the old app users (the app supports
>> its own import/export in MDB format). Since the migration is expected to
>> last several years we need some sort of automated PG->MDB thing.
>>
>Why don't you just make your converter configurable in how it handles
>decimals without specs?

I would need to hack the Jackcess library in order to do that... besides,
it does not seem the proper way to do it; more like an ugly hack.
getPrecision and getScale are supposed to return the true precision and
scale, after all...

Peter
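If the driver cannot report a meaningful precision/scale for an unconstrained NUMERIC, one pragmatic fallback for a converter is to derive a spec from the actual column values. A hedged sketch; `NumericSpecGuesser` and its default values are invented placeholders, not anything Jackcess or the PostgreSQL JDBC driver prescribes:

```java
import java.math.BigDecimal;
import java.util.List;

// Derives a usable {precision, scale} for an unconstrained NUMERIC column
// by scanning actual values; falls back to defaults for empty input.
class NumericSpecGuesser {
    static final int DEFAULT_PRECISION = 38; // arbitrary placeholder
    static final int DEFAULT_SCALE = 4;      // arbitrary placeholder

    static int[] guess(List<BigDecimal> values) {
        if (values.isEmpty())
            return new int[]{DEFAULT_PRECISION, DEFAULT_SCALE};
        int maxScale = 0, maxIntDigits = 1;
        for (BigDecimal v : values) {
            // Widest fractional part and widest integer part seen so far.
            maxScale = Math.max(maxScale, Math.max(v.scale(), 0));
            maxIntDigits = Math.max(maxIntDigits, v.precision() - v.scale());
        }
        return new int[]{maxIntDigits + maxScale, maxScale};
    }
}
```

A converter could run this over a sampled column and feed the result into the MDB column definition, accepting that later rows might still overflow the guessed spec.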


--
Sent via pgsql-jdbc mailing list (pgsql-jdbc@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-jdbc

Re: [pgsql-fr-generale] Petit conseil

Guillaume Lelarge wrote:
> The immediate question is: why? How does the OS matter in this
> area? And finally, which OS do you recommend for 64-bit?
>
Well, because 32-bit OSes manage memory beyond 4 GB (2^32) very
poorly, if at all. On 32-bit Windows it is already hopeless beyond 3 GB...
I'm a bit ashamed to admit it here, but in the telecom field where I
work there is unfortunately very little PostgreSQL; my experience is
therefore limited to proprietary products. But whether on Windows 2003-64,
Debian Etch, or RHEL 64, I have not run into the slightest problem.
Note also that the OS version (or the kernel build parameters) matters
for the maximum number of processors supported.

--
Christophe Garault

Re: [BUGS] Segfault manifesting in libm from cost_sort

ECC memory, RAID 10 with an Adaptec 3405 hardware controller. The period between crashes ranged from about 1 to 5 minutes. We just switched to a new box, so the problem is "gone". In the original email I meant Ubuntu 6.06.

On Wed, Jul 30, 2008 at 12:06 AM, John R Pierce <pierce@hogranch.com> wrote:
Andrew Badr wrote:
Our pg keeps going into recovery mode after segfaulting. This is a compiled 8.3.3 on Ubuntu 6.04 with Slony.

Some ideas from IRC:
RhodiumToad: 1) probably least likely, something corrupt in the math libs; the fact that it's not reproducible makes this improbable
RhodiumToad: 2) more likely: a register or memory stomp in a signal handler, which could be the result of an OS bug or a pg miscompile
RhodiumToad: 3) slightly less likely: a memory stomp somewhere else in pg that happens to be clobbering something in the math library

Does this server have ECC memory? If not:

Pierce: 4) flaky memory


Does this server have an 'enterprise' grade disk system (e.g. SAS, SCSI, Fibre Channel, with a decent quality RAID controller)? If it's a desktop ATA/SATA disk...

Pierce: 5) flaky disk drive or channel



Re: [JDBC] numeric type

Hello Peter,

Peter wrote:
>
> Long story. We're migrating an old Access app to Postgres but still need
> to be able to exchange datasets with the old app users (the app supports
> its own import/export in MDB format). Since the migration is expected to
> last several years we need some sort of automated PG->MDB thing…
>
Why don't you just make your converter configurable in how it handles
decimals without specs?


--
Sent via pgsql-jdbc mailing list (pgsql-jdbc@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-jdbc

Re: [HACKERS] pg_regress inputdir


Alvaro Herrera wrote:
> Jorgen Austvik - Sun Norway wrote:
>
>> The attached patch makes pg_regress write converted files to
>> <inputdir>/sql and <inputdir>/expected, which is one way to make it read
>> and write to the same directory. Tested on Solaris x86 with pgsql "make
>> check" and standalone.
>
> I think this breaks VPATH builds in both letter and spirit.

Letter:

--------8<---------------8<---------------8<---------------8<---------------8<---------------8<-------
bash-3.2$ ggrep -R "\-\-inputdir" *
src/test/regress/pg_regress.c: printf(_(" --inputdir=DIR
take input files from DIR (default \".\")\n"));
Binary file src/test/regress/pg_regress.o matches
Binary file src/test/regress/pg_regress matches
Binary file
src/test/regress/tmp_check/install/usr/local/pgsql/lib/pgxs/src/test/regress/pg_regress
matches
--------8<---------------8<---------------8<---------------8<---------------8<---------------8<-------

Since it is not used in the PostgreSQL build (only the default value, "."),
I have trouble seeing how writing to e.g. "./sql/file" instead of
"sql/file" could break anything. Please explain.

Spirit:

Nobody has ever accused me of being spiritual ;-), but if you have a URI
that explains the VPATH spirit, I'd be interested to read about it.

> Why do you need this anyway?

I tried to explain that in the first mail, but let me try again.

Use case:
Running pg_regress outside of the PostgreSQL build system. pg_regress is
installed in e.g. /usr/postgres/8.3/bin/; "input", "output", "sql" and
"expected" are installed in some other path, e.g.
/usr/postgres/8.3/share/test. The user is in ~ and tries to run the
PostgreSQL regression tests. It doesn't work; in fact, the only way to
make it work is to cd to the parent directory of "sql" and "expected".

Today, using --inputdir in pg_regress does not work for any value other
than something that resolves to the cwd, since pg_regress will write a
file to "./sql" but try to read the same file from "<inputdir>/sql".

-J
--

Jørgen Austvik, Software Engineering - QA
Sun Microsystems Database Group

http://blogs.sun.com/austvik/
http://www.austvik.net/

Re: [JDBC] numeric type

From: Daniel Migowski [mailto:dmigowski@ikoffice.de]
Sent: Wednesday, July 30, 2008 11:25 AM
To: Peter Zeltins
Cc: pgsql-jdbc@postgresql.org
Subject: Re: [JDBC] numeric type

>> I would assume this is not a major issue for most. Personally I ran into
>> this problem when writing an automated PG->MS Access converter (Jackcess
>> is the only lib capable of creating MDB files on Linux/BSD) and it bailed
>> on numeric fields.
>
> Great gods, why would anyone do this in the first place? :) I once wrote
> one for the opposite direction, but please tell us, what requirement makes
> you convert a postgres database to MS Access?

Long story. We're migrating an old Access app to Postgres but still need to
be able to exchange datasets with the old app users (the app supports its
own import/export in MDB format). Since the migration is expected to last
several years we need some sort of automated PG->MDB thing…

Peter

Re: [pgsql-fr-generale] Petit conseil

Christophe Garault wrote:
> Guillaume Lelarge wrote:
>> Just one last question on a slightly more "hardware" side:
>> 32 or 64 bits?
>>
>> As far as I know, it makes no difference.
>>
>>
> Not quite. Beyond 4 GB of RAM it is better to go 64-bit. But this
> depends a lot on the underlying OS...

The immediate question is: why? How does the OS matter in this
area? And finally, which OS do you recommend for 64-bit?


--
Guillaume.
http://www.postgresqlfr.org
http://dalibo.com

--
Sent via pgsql-fr-generale mailing list (pgsql-fr-generale@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-fr-generale

[HACKERS] printing raw parse tree

Very often I find it useful to print raw parse trees for debugging and/or
for understanding PostgreSQL internals, and I personally modify
pg_parse_query() to accomplish it. If this is common among developers,
I would like to post small patches. Opinions?
--
Tatsuo Ishii
SRA OSS, Inc. Japan

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] about postgres-r setup.

Markus Wanner wrote:
> Hello Leiyonghua,
>
> leiyonghua wrote:
>> at node2, and the psql command is hung up,
>>
>> NOTICE: Applying for membership in the communication group...
>> NOTICE: Successfully joined the replication group. Now recovering
>> schema...
>
> I've just tested the very same here with ensemble, and could reproduce
> the situation. It had to do with the ensemble receive buffer being too
> small to hold the recovery data. That's fixed in today's snapshot.
> With that I've been able to replicate simple tuples again.
>
That's cool! I will try it later.
> Regards
>
> Markus Wanner
>
>
>
leiyonghua

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[pgsql-ru-general] Re: [pgsql-ru-general] Re: Online conference with leading PostgreSQL experts

Preliminary submission of questions has begun for the Q&A session with
leading PostgreSQL experts. The main session starts at 15:00.

Conference participants:

* Bruce Momjian, PostgreSQL development coordinator, PostgreSQL
community leader, expert at EnterpriseDB.
* Maxim Boguk, lead database administrator at Rambler and
Masterhost, a well-known PostgreSQL specialist.
* Teodor Sigaev, developer of the full-text search subsystem, the
GiST and GIN indexing systems, and many additional modules; one of
the core members of the PostgreSQL Global Development Group.
* Oleg Bartunov, one of the core members of the PostgreSQL Global
Development Group, developer of the full-text search subsystem, the
GiST and GIN indexing systems, and of multi-terabyte scientific databases.
* Marko Kreen, one of the principal database architects at Skype,
developer and maintainer of projects such as PL/Proxy, Skytools,
PgBouncer, and pgcrypto.

Questions are accepted in the jabber channel postgresmen@conference.jabber.org
(the primary channel) and at skype:postgresmen.

If for some reason you cannot use jabber or skype, leave your questions
as comments here:
http://postgresmen.ru/news/view/106.

How to ask questions:
Where possible, please state your full name, organization, and contact
details. Please do not ask FAQ-type questions; they will be filtered out.
Questions may be addressed to all the experts at once or to one of them.
The main language for questions is Russian; online translation will be
arranged for foreign guests. Questions in English are allowed.

We look forward to your questions!

On 29 July 2008 at 15:23, Ivan Zolotukhin
<ivan.zolotukhin@gmail.com> wrote:
> Good afternoon,
>
> Good news: Marko Kreen, one of the principal database architects at
> Skype and the developer and maintainer of projects such as PL/Proxy,
> Skytools, PgBouncer (see https://developer.skype.com/SkypeGarage/DbProjects)
> and pgcrypto, has announced his participation in tomorrow's online
> conference. Marko will be able to answer questions about horizontal
> database partitioning, new replication systems, Skype's open
> PostgreSQL projects and their future, data encryption in PostgreSQL,
> how to build a database for a billion users, and much, much more.
>
> --
> Regards,
> Ivan
>
>
> 2008/7/25 Ivan Zolotukhin <ivan.zolotukhin@gmail.com>:
>> Good afternoon,
>>
>> On July 30 we are holding an online conference, "Questions to leading
>> PostgreSQL experts". Anyone will be able to put questions related to
>> deploying, operating, and developing PostgreSQL to world-renowned
>> PostgreSQL experts, core developers, and community leaders.
>>
>> Conference participants:
>>
>> * Bruce Momjian, PostgreSQL development coordinator, PostgreSQL
>> community leader, expert at EnterpriseDB.
>> * Maxim Boguk, lead database administrator at Rambler and
>> Masterhost, a well-known PostgreSQL specialist.
>> * Teodor Sigaev, developer of the full-text search subsystem, the
>> GiST and GIN indexing systems, and many additional modules; one of
>> the core members of the PostgreSQL Global Development Group.
>> * Oleg Bartunov, one of the core members of the PostgreSQL Global
>> Development Group, developer of the full-text search subsystem, the
>> GiST and GIN indexing systems, and of multi-terabyte scientific databases.
>>
>> The conference starts on Wednesday, July 30, 2008, at 15:00 Moscow
>> time. Duration: 3 hours. Questions will be accepted in the jabber
>> channel postgresmen@conference.jabber.org (the primary channel) and at
>> skype:postgresmen. After the conference the questions and answers will
>> be published at http://postgresmen.ru.
>>
>> --
>> Regards,
>> Ivan
>>
>
> --
> Sent via pgsql-ru-general mailing list (pgsql-ru-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-ru-general
>

--
Best regards,
Nikolay

--
Sent via pgsql-ru-general mailing list (pgsql-ru-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-ru-general