Friday, September 5, 2008

Re: [JDBC] Problem With Euro character

Hello TNO,

You know that ISO-8859-1 does not have the Euro symbol, right? Look for
ISO-8859-15 instead...
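
A quick way to check this from SQL on a UTF-8 database (illustrative
only; convert_to() is available in current releases):

  SELECT convert_to('€', 'LATIN9');  -- works: LATIN9 (ISO-8859-15) maps € to 0xA4
  SELECT convert_to('€', 'LATIN1');  -- fails: € has no equivalent in LATIN1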

Daniel Migowski


TNO wrote:
> Hello
>
> I'm working on a French web application (Spring + iBATIS + PostgreSQL).
> My PostgreSQL version: 8.1
> My db encoding is UTF-8.
> My JDBC driver version is 8.1-405-jdbc3
>
> There is no problem displaying the Euro (€) character in HTML pages,
> but in the PDF this character disappears!
> Strange...
>
> I've done a little test retrieving the String "€ € € € € Euro € Euro € € € €":
>
> log.info(c.getObservation());
> log.info(new String (c.getObservation().getBytes("UTF-8")));
> log.info(new String (c.getObservation().getBytes("ISO-8859-1")));
>
> INFO 17:49:10.468 ? ? ? ? ? Euro ? Euro ? ? ? ? (TestEuro.java:14)
> INFO 17:49:10.468 € € € € € Euro € Euro € € € € (TestEuro.java:15)
> INFO 17:49:10.468 € € € € € Euro € Euro € € € € (TestEuro.java:16)
>
> Very strange; it seems that my observation is encoded as ISO-8859-1
> inside my UTF-8 db...



[JDBC] Problem With Euro character

Hello

I'm working on a French web application (Spring + iBATIS + PostgreSQL).
My PostgreSQL version: 8.1
My db encoding is UTF-8.
My JDBC driver version is 8.1-405-jdbc3

There is no problem displaying the Euro (€) character in HTML pages, but in the PDF this character disappears!
Strange...

I've done a little test retrieving the String "€ € € € € Euro € Euro € € € €":

log.info(c.getObservation());
log.info(new String (c.getObservation().getBytes("UTF-8")));
log.info(new String (c.getObservation().getBytes("ISO-8859-1")));

INFO  17:49:10.468 ? ? ? ? ? Euro ? Euro ? ? ? ?  (TestEuro.java:14)
INFO  17:49:10.468 € € € € € Euro € Euro € € € €  (TestEuro.java:15)
INFO  17:49:10.468 € € € € € Euro € Euro € € € €  (TestEuro.java:16)

Very strange; it seems that my observation is encoded as ISO-8859-1 inside my UTF-8 db...

if you have any idea...
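
One way to check what the database actually stores -- a sketch only; the
table and column names here are guesses based on the snippet above, so
adjust them to your schema:

  SHOW server_encoding;
  SHOW client_encoding;
  SELECT encode(convert_to(observation, 'UTF8'), 'hex')
  FROM observations LIMIT 1;
  -- a correctly stored Euro sign shows up as the UTF-8 bytes e282ac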





Re: [HACKERS] plpgsql is not translate-aware

Alvaro Herrera <alvherre@commandprompt.com> writes:
> In reviewing Volkan Yazici's (sorry for the dots) patch to improve
> plpgsql's error messages, I noticed that we have no PO files for plpgsql
> at all!

Ugh. Yeah, we should fix that. Does it actually just work, seeing
that plpgsql is a loadable library?

regards, tom lane


Re: [HACKERS] plpgsql is not translate-aware

Alvaro Herrera wrote:

> It doesn't seem hard to add; I just had to create a nls.mk file and
> things seem ready to go. Obviously, we'll need to add plpgsql to the
> pgtranslation files in pgfoundry.

Actually this is wrong -- since the library is going to run with the
"postgres" text domain, we need to add the files to the backend's
nls.mk:


Index: nls.mk
===================================================================
RCS file: /home/alvherre/Code/cvs/pgsql/src/backend/nls.mk,v
retrieving revision 1.22
diff -c -p -u -r1.22 nls.mk
--- nls.mk 24 Mar 2008 18:08:47 -0000 1.22
+++ nls.mk 5 Sep 2008 16:00:18 -0000
@@ -7,7 +7,7 @@ GETTEXT_FILES := + gettext-files
GETTEXT_TRIGGERS:= _ errmsg errdetail errdetail_log errhint errcontext write_stderr yyerror

gettext-files: distprep
- find $(srcdir)/ $(srcdir)/../port/ -name '*.c' -print >$@
+ find $(srcdir)/ $(srcdir)/../port/ $(srcdir)/../pl/ -name '*.c' -print >$@

my-maintainer-clean:
rm -f gettext-files

--
Alvaro Herrera http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.


Re: [pgsql-es-ayuda] Installation manual

Careful! If Gabriel finds out that we are rubbing salt in the wound, he
will go back to defending the "newbies" (a fine way to defend someone).
And what for? This is never going to end. I agree with Edwin about no
longer feeding the trolls.

Regards to all.

Jaime Casanova wrote:
> 2008/9/4 Moises Alberto Lindo Gutarra <mlindo@gmail.com>:
>
>> It is good to know a topic, whether basic or advanced, but it is
>> better to share it in a friendly way and not as mockery. And that is
>> not how the installation goes on Windows; you forgot to tell him that
>> he needs a non-administrator user under which the Windows service
>> must run.
>>
>>
>
> ah! you mean the user that the installer itself creates for you (when
> you click "Next", of course) and for which, if you don't set any
> password, it generates one itself?
>
>


--
-------------------------------------------------------------------------------------------
L.A. Jenaro Centeno Gómez

Al-Día renews itself through Continuous Improvement
Information Technology Department
Alimentos La Concordia, S.A. de C.V.
Tel. 01 474 741 9200
Ext. 9280
www.aldia.com.mx


Re: [pgsql-es-ayuda] PostgreSQL Spanish Documentation Project

Javier Chávez B. wrote:
> 2008/9/2 Jaime Casanova <jcasanov@systemguards.com.ec>:
>
>> 2008/9/2 Alvaro Herrera <alvherre@alvh.no-ip.org>:
>>
>>>> Let me call Mario to ask him how we can bring the platform back up.
>>>>
>>> OK, he says he was already on it; he is waiting for the most recent
>>> backup of the DB, and as soon as he has it he will get it running on
>>> other hosting.
>>>
>>>
>> Excellent, please let me know as soon as it is up..
>>
>> --
>> Sincerely,
>> Jaime Casanova
>> PostgreSQL support and training
>> Systems consulting and development
>> Guayaquil - Ecuador
>> Cel. (593) 87171157
>> --
>> TIP 9: visit our IRC channel #postgresql-es on irc.freenode.net
>>
>>
>
> Dear all: summarizing a bit, those who have volunteered to translate
> so far are (in chronological order of appearance of the emails):
>
> - Moises Galan
> - Guido Barosio
> - Raul Duque
> - Teofilo Oviedo
> - Gilberto Castillo
> - Miguel Panuera
> - Javier Chavez
>
> I left out Alvaro Herrera and Jaime Casanova, because I am assuming
> that, as always, we will have their help (as someone said in a couple
> of emails, given their robot status :-) )
>
> This is just so that Moises, who prepared the initial documentation,
> has a more or less clear idea of who volunteered initially, and so
> that, if we get an administrator/coordinator, he can keep in mind the
> people available at the start.
>
> Well, I will now review a bit of the manual to look at page counts,
> contents, subdivisions, etc.
>
> Good night to all
>
> Jch
>
>
>
>
Please sign me up as well.

--
-------------------------------------------------------------------------------------------
L.A. Jenaro Centeno Gómez

Al-Día renews itself through Continuous Improvement
Information Technology Department
Alimentos La Concordia, S.A. de C.V.
Tel. 01 474 741 9200
Ext. 9280
www.aldia.com.mx


Re: [GENERAL] large inserts and fsync

Aaron Burnett <aburnett@bzzagent.com> writes:
> On 9/5/08 11:10 AM, "Sam Mason" <sam@samason.me.uk> wrote:
>> Have you tried bundling all the INSERT statements into a single
>> transaction?

> Yes, the developer already made sure of that and I verified.

Hmm, in that case the penalty probably comes from pushing WAL data out
to disk synchronously. It might be worth playing with wal_sync_method
and/or raising wal_buffers.
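
Illustrative postgresql.conf starting points only -- the right values are
platform- and workload-dependent, so treat these as assumptions to
benchmark, not recommendations:

  wal_buffers = 1MB
  wal_sync_method = fdatasync   # one of several platform-specific choices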

The trouble with turning fsync off is that a system crash midway through
the import might leave you with a corrupt database. If you're willing
to start over from initdb then okay, but if you are importing into a
database that already contains valuable data, I wouldn't recommend it.

regards, tom lane


Re: [HACKERS] [Review] pgbench duration option

Hello again,

I received the following email from a helpful fellow off-list,
pointing out an error in my review:

On Fri, Sep 5, 2008 at 7:03 PM, Ragnar <gnari@hive.is> wrote:
> On fös, 2008-09-05 at 15:07 +1000, Brendan Jurd wrote:
>> Wouldn't this be better written as:
>>
>> if ((duration > 0 && timer_exceeded) || st->cnt >= nxacts)
>> {
>> <stop>
>> }
>
> sorry, but these do not look like the same thing to me.
>
> in the first variant there will not be a stop if
> (duration > 0) and NOT (timer_exceeded) and (st->cnt >= nxacts)
> but in the second variant there will.
>
> admittedly, I have no idea whether that situation can occur.
>
> gnari
>

gnari is right. Looking closer I see that nxacts defaults to 10 in
the absence of a -t option, so my version of the code would end up
stopping when the run reaches 10 transactions, even if the user has
specified a -T option.

Sorry for the error. The (duration > 0) test does in fact need to be separate.

Thanks for the catch, gnari.

Cheers,
BJ


Re: [GENERAL] large inserts and fsync

> > > Have you tried bundling all the INSERT statements into a single
> > > transaction?
> >
> > Yes, the developer already made sure of that and I verified.

I would verify that again, because fsync shouldn't make much of a difference
in that circumstance. I might not do all 16 million in one transaction, but
if you're doing 10 or 100 thousand at a time, it should be pretty fast.

Perhaps a language-level autocommit still needs to be disabled?
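
A minimal sketch of the batching pattern, with a hypothetical table name:

  BEGIN;
  INSERT INTO big_table VALUES (1, 'row 1');
  INSERT INTO big_table VALUES (2, 'row 2');
  -- ... continue up to 10,000-100,000 rows per batch ...
  COMMIT;
  -- then repeat for the next batch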


--
Alan


Re: [ADMIN] change max_value in sequence

"Claus Guttesen" <kometen@gmail.com> writes:
> I have a table with a serial field defined with an older version of
> postgresql (ver. 7). Back then max_value was 2147483647:
> How can I increase it? By updating the max_value-field?

I think you're looking for ALTER SEQUENCE.

Note that if the column it's feeding into is int4, you'd also need to
alter the column type ...
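
A sketch of both steps, using hypothetical table and sequence names:

  ALTER SEQUENCE my_table_id_seq MAXVALUE 9223372036854775807;
  -- note: the type change rewrites the table, which can take a while
  ALTER TABLE my_table ALTER COLUMN id TYPE bigint;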

regards, tom lane


[HACKERS] plpgsql is not translate-aware

Hi,

In reviewing Volkan Yazici's (sorry for the dots) patch to improve
plpgsql's error messages, I noticed that we have no PO files for plpgsql
at all!

It doesn't seem hard to add; I just had to create a nls.mk file and
things seem ready to go. Obviously, we'll need to add plpgsql to the
pgtranslation files in pgfoundry.

There are 141 new strings to translate, and from Spanish I get 71
fuzzies, so it seems an easy project.

Should I go ahead and commit the initial files?

--
Alvaro Herrera http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.


Re: [GENERAL] large inserts and fsync

On Fri, Sep 05, 2008 at 11:19:13AM -0400, Aaron Burnett wrote:
> On 9/5/08 11:10 AM, "Sam Mason" <sam@samason.me.uk> wrote:
> > On Fri, Sep 05, 2008 at 09:16:41AM -0400, Aaron Burnett wrote:
> >> For an upcoming release there is a 16 million row insert that on our test
> >> cluster takes about 2.5 hours to complete with all indices dropped
> >> beforehand.
> >>
> >> If I turn off fsync, it completes in under 10 minutes.
> >
> > Have you tried bundling all the INSERT statements into a single
> > transaction?
>
> Yes, the developer already made sure of that and I verified.

I was under the impression that the only time PG synced the data to disk
was when the transaction was committed. I've never needed to turn off
fsync for performance reasons even when pulling in hundreds of millions
of rows. I do tend to use a single large COPY rather than many small
INSERT statements. PG spends an inordinate amount of time parsing
millions of SQL statements, whereas a tab-delimited file is much easier
to parse.
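
For example (a sketch; the table and file names are hypothetical, and
the file layout must match COPY's default tab-delimited text format):

  COPY big_table FROM '/tmp/big_table.tsv';
  -- or client-side, via psql:
  -- \copy big_table from 'big_table.tsv'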

Could you try bumping "checkpoint_segments" up a bit? Or have you tried
that already?


Sam


Re: [DOCS] Incorrect description of xmax and xip in functions docs

On Fri, 2008-09-05 at 16:14 +0100, Simon Riggs wrote:
> http://developer.postgresql.org/pgdocs/postgres/functions-info.html
>
> xip_list is described as
>
> "Active txids at the time of the snapshot... "
>
>
> This is incorrect. The xip_list is the list of transactions that are in
> progress *and* less than xmax. There may be transactions in progress
> with an xid higher than xmax. This will happen frequently in fact. This
> is because xmax is defined as the highest/latest completed xid, not the
> highest running xid.
>
> Note that there is no way to discover the list of running xids at the
> time of the snapshot, from the data we hold about snapshots. Nor can the
> snapshot data be used to monitor the number of transactions in progress.
>
> Anyone disagree? If not, I'll patch.

My rewording would be:
"Active txids at the time of the snapshot. The list includes only those
active txids between xmin and xmax; there may be active txids higher
than xmax. A txid that satisfies xmin <= txid < xmax and is not in this
list had already completed at the time of the snapshot, and is thus
either visible or dead according to its commit status. The list does not
include the txids of subtransactions."

And for txid_visible_in_snapshot(), this comment would be added:
"Function should not be used with subtransaction xids. It is possible
that this function will return a true result for a subtransaction xid
that was actually still in progress at the time of the snapshot".
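
For readers following along, this is the snapshot representation in
question (output is illustrative only; values vary):

  SELECT txid_current_snapshot();
  -- e.g. 1000:1005:1000,1003   i.e. xmin : xmax : xip_list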

--
Simon Riggs www.2ndQuadrant.com
PostgreSQL Training, Services and Support



[ADMIN] How can I avoid Frozenxid wraparound on failover to a standby(PITR) database?

I have a fairly large database (approx. 1.5 TB) that is backed up by a warm standby database using log shipping (PITR). This setup had been running for a couple of months when I ran into a problem on the primary DB and had to fail over to the standby DB. This worked as expected.
 
Shortly thereafter (sometime over the long weekend, of course), Postgres shut down the database to avoid XID wraparound data loss. I presume there were warnings in the log about running out of XIDs, but nobody noticed in time, and given what transpired after that I don't think it would have mattered if they had.
 
As per the documentation, I started the DB in single-user mode and attempted to do a full database vacuum. After this ran for about 12 hours, the pg_xlog directory ran out of disk space. I'm not sure I understand why anything is written to pg_xlog as part of the vacuum process; perhaps someone can enlighten me.
 
I next started looking at the age(relfrozenxid) of the tables in my DB, and was surprised to see that over 4000 of the 5000 tables in this DB had an age over 2 billion. So that's 4000 tables, representing over a terabyte of data, that need to be vacuumed! I am now vacuuming those tables one at a time (this is a scripted process), which is taking a long time. So there is no way I could have vacuumed the tables quickly enough, even given a warning of impending XID wraparound.
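
For reference, a query along these lines finds the oldest tables (a
sketch; it relies on pg_class.relfrozenxid, which exists in 8.2 and
later):

  SELECT relname, age(relfrozenxid)
  FROM pg_class
  WHERE relkind = 'r'
  ORDER BY age(relfrozenxid) DESC
  LIMIT 20;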
 
Looking through the support mailing lists (Bugs) I see some discussion about the frozenxid updates on the master not being propagated to the slave through the WAL logs, and comments from Tom, Alvaro and Heikki suggesting that they were looking into a solution for PG 8.3 and needed a way around the problem in PG 8.2.
 
I am currently running PG 8.2.4 on FreeBSD.
 
So my questions are:
 
1) What is the recommended way to either solve or get around this problem in PG 8.2.4?
2) Is this "problem" fixed in some more current version of Postgres? I didn't see any mention of it in the release notes up to PG 8.3.3.
3) Does this mean that if you are trying to use a warm standby DB with PITR, you need to make a new base backup of your primary DB every 1.5 billion transactions or thereabouts to avoid the problem? If so, I think this should be documented in the "Caveats" part of the "Continuous Archiving and Point-in-Time Recovery (PITR)" section of the manual.
 
Regards...
 
Mark Sherwood
 



Re: [NOVICE] Problem wth postgresql.conf

"=?KOI8-R?B?59LJx8/Sycog7snLz87P0s/X?=" <grigory.nikonorov@gmail.com> writes:
> FATAL: syntax error in file "/opt/PostgreSQL/8.3/data/postgresql.conf" line
> 108, near token "MB"
> How can i fix it ?

> shared_buffers = 128MB # min 128kB or max_connections*16kB

I think you need quotes here:

shared_buffers = '128MB'

regards, tom lane


Re: [GENERAL] xml queries & date format

Jef Peeraer <jef.peeraer@telenet.be> writes:
> I am using the XML add-ons, but the date output format seems to be wrong:

I think the conversion to xml intentionally always uses ISO date format,
because that's required by some spec somewhere.
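
A quick illustration of that behavior (output shown is illustrative,
using 8.3's xmlelement()):

  SET datestyle = 'SQL, DMY';
  SELECT xmlelement(name today, current_date);
  -- <today>2008-09-05</today>   (ISO format, regardless of DateStyle)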

regards, tom lane


[COMMITTERS] plproxy - plproxy: v2.0.6

Log Message:
-----------
v2.0.6

Modified Files:
--------------
plproxy:
Makefile (r1.28 -> r1.29)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/plproxy/plproxy/Makefile.diff?r1=1.28&r2=1.29)


[COMMITTERS] plproxy - plproxy: v2.0.6

Log Message:
-----------
v2.0.6

Modified Files:
--------------
plproxy:
NEWS (r1.12 -> r1.13)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/plproxy/plproxy/NEWS.diff?r1=1.12&r2=1.13)
plproxy/debian:
changelog (r1.4 -> r1.5)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/plproxy/plproxy/debian/changelog.diff?r1=1.4&r2=1.5)


Re: [GENERAL] max_stack_depth Exceeded

Magnus Hagander <magnus@hagander.net> writes:
> Ow Mun Heng wrote:
>> Am I doing something wrong?

> If your trigger is defined on the head_raw_all_test_2 table, then yes.
> Because it will do a new insert there, and the new insert will fire the
> trigger again, which will do a new insert, which will fire the trigger, etc.

Of course, the way to have the row be inserted into the parent table is
to just let the trigger return it, instead of returning null.
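
A minimal sketch of that pattern in plpgsql (the table name is taken
from the quoted message; everything else is hypothetical):

  CREATE OR REPLACE FUNCTION fix_row() RETURNS trigger AS $$
  BEGIN
    -- adjust NEW here if needed, then hand the row back so the original
    -- INSERT proceeds; no recursive INSERT, so the trigger cannot re-fire
    RETURN NEW;
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER fix_row BEFORE INSERT ON head_raw_all_test_2
    FOR EACH ROW EXECUTE PROCEDURE fix_row();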

regards, tom lane


Re: [GENERAL] large inserts and fsync

Yes, the developer already made sure of that and I verified.


On 9/5/08 11:10 AM, "Sam Mason" <sam@samason.me.uk> wrote:

> On Fri, Sep 05, 2008 at 09:16:41AM -0400, Aaron Burnett wrote:
>> For an upcoming release there is a 16 million row insert that on our test
>> cluster takes about 2.5 hours to complete with all indices dropped
>> beforehand.
>>
>> If I turn off fsync, it completes in under 10 minutes.
>
> Have you tried bundling all the INSERT statements into a single
> transaction? If you haven't then PG will run each statement in its own
> transaction and then commit each INSERT statement to disk separately,
> incurring large overheads.
>
>
> Sam



[DOCS] Incorrect description of xmax and xip in functions docs

http://developer.postgresql.org/pgdocs/postgres/functions-info.html

xip_list is described as

"Active txids at the time of the snapshot... "


This is incorrect. The xip_list is the list of transactions that are in
progress *and* less than xmax. There may be transactions in progress
with an xid higher than xmax. This will happen frequently in fact. This
is because xmax is defined as the highest/latest completed xid, not the
highest running xid.

Note that there is no way to discover the list of running xids at the
time of the snapshot, from the data we hold about snapshots. Nor can the
snapshot data be used to monitor the number of transactions in progress.

Anyone disagree? If not, I'll patch.

--
Simon Riggs www.2ndQuadrant.com
PostgreSQL Training, Services and Support



Re: [GENERAL] large inserts and fsync

On Fri, Sep 05, 2008 at 09:16:41AM -0400, Aaron Burnett wrote:
> For an upcoming release there is a 16 million row insert that on our test
> cluster takes about 2.5 hours to complete with all indices dropped
> beforehand.
>
> If I turn off fsync, it completes in under 10 minutes.

Have you tried bundling all the INSERT statements into a single
transaction? If you haven't then PG will run each statement in its own
transaction and then commit each INSERT statement to disk separately,
incurring large overheads.


Sam


Re: [PERFORM] indexing for distinct search in timestamp based table

You might get a great improvement for the '%' cases using an index on (<field>, start_time) and a little bit of pl/pgsql.

Basically, you need to implement the following algorithm:
 1) curr_<field> = ( select min(<field>) from ad_log )
 2) record_exists = ( select 1 from ad_log where <field> = curr_<field> and _all_other_conditions limit 1 )
 3) if record_exists == 1 then add curr_<field> to the results
 4) curr_<field> = ( select min(<field>) from ad_log where <field> > curr_<field> )
 5) if curr_<field> is not null then go to step 2
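
A sketch of that loop in pl/pgsql, assuming the thread's ad_log table
with a channel_name column and omitting the _all_other_conditions test
for brevity:

  CREATE OR REPLACE FUNCTION distinct_channels() RETURNS SETOF text AS $$
  DECLARE
    curr text;
  BEGIN
    SELECT min(channel_name) INTO curr FROM ad_log;           -- step 1
    WHILE curr IS NOT NULL LOOP                               -- step 5
      RETURN NEXT curr;                                       -- step 3
      SELECT min(channel_name) INTO curr
      FROM ad_log WHERE channel_name > curr;                  -- step 4
    END LOOP;
  END;
  $$ LANGUAGE plpgsql;

  SELECT * FROM distinct_channels();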


I believe it might make sense to implement this approach in the core (I would call it an "index distinct scan").

That could dramatically improve "select distinct <column> from <table>" and "select <column> from <table> group by <column>" kinds of queries when an index exists on <column> and that column has a very small number of distinct values.

For instance: say a table has 10,000,000 rows, while the column of interest has only 20 distinct values. In that case, the database would be able to fetch each of those 20 values in roughly 20 index lookups.

What does the community think about that?

Re: [GENERAL] a performence question

2008/9/4 Rafal Pietrak <rafal@zorro.isa-geek.com>:
> Hi,
>
> Maybe someone on this list has actually already tried this:
>
> I'm planning to make a partitioned database. From the Postgres documentation
> I can see that there are basically two methods to route INSERTs into a
> partitioned table:
> one is a TRIGGER
> the other is a RULE
>
> My table will have over 1000 partitions. Some are not so big, but a
> significant number of them will have multiple millions of rows. Partitioning
> will be done using a single column, on equality... meaning:
>
> CREATE TABLE mainlog (sel int, tm timestamp, info text,...);
> CREATE TABLE mainlog_p1 (CHECK (sel=1)) INHERITS (mainlog);
> CREATE TABLE mainlog_p2 (CHECK (sel=2)) INHERITS (mainlog);
> ...etc.
>
> If I route INSERTs with a TRIGGER, the function would look like:
> CREATE .... TRIGGER...AS $$ DECLARE x RECORD; BEGIN
> SELECT id INTO x FROM current_route; NEW.sel := x.id;
> IF NEW.sel = 1 THEN INSERT INTO mainlog_p1 VALUES (NEW.*);
> ELSIF NEW.sel = 2 THEN INSERT INTO mainlog_p2 VALUES (NEW.*);
> ....
> END IF;
> RETURN NULL;
> $$;
>
> If I route INSERTs with a RULE, I'd have something like 1000 rules hooked
> up to MAINLOG, all looking like:
> CREATE RULE .... ON INSERT ... WHERE EXISTS(SELECT 1 FROM current_route
> WHERE id = 1) DO INSTEAD INSERT INTO mainlog_p1 SELECT
> x.id, new.tm... FROM (SELECT id FROM current_route) x;
> ... and similar RULES for cases "WHERE id = 2", etc.
>
> My question is: where should I expect better performance on those
> INSERTs?
>
> I would prefer a set of RULEs (as I wouldn't like to rewrite the TRIGGER
> function every time I add a partition ... a thousand-line function),
> but since they all must run a select query on the CURRENT_ROUTE table,
> maybe that will not be particularly effective? The TRIGGER function does
> a single query - maybe it'll be faster? I was planning to generate some
> dummy data and run a simulation, but maybe someone already has that
> experience? Or maybe the TRIGGER should look different? Or the set of
> RULEs?
>

I had a bit of spare time, so I tested this:

see http://filip.rembialkowski.net/postgres-partitioning-performance-rules-vs-triggers/

It seems that in your scenario a trigger will be better.

But if I had to do this, and if performance was very important, I
would move the "partition selection" logic out of the INSERT phase; the
application can know this before the actual insert (see the sketch
below), unless you want to shift selections very often...
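
A sketch of that application-side alternative, reusing the table names
from the quoted message: look up current_route once, then target the
partition directly:

  -- instead of INSERT INTO mainlog ..., the application issues:
  INSERT INTO mainlog_p2 (sel, tm, info) VALUES (2, now(), 'example row');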

--
Filip Rembiałkowski


Re: [HACKERS] Verbosity of Function Return Type Checks

Index: src/pl/plpgsql/src/pl_exec.c
===================================================================
RCS file: /projects/cvsroot/pgsql/src/pl/plpgsql/src/pl_exec.c,v
retrieving revision 1.219
diff -c -r1.219 pl_exec.c
*** src/pl/plpgsql/src/pl_exec.c 1 Sep 2008 22:30:33 -0000 1.219
--- src/pl/plpgsql/src/pl_exec.c 5 Sep 2008 13:47:07 -0000
***************
*** 188,194 ****
Oid reqtype, int32 reqtypmod,
bool isnull);
static void exec_init_tuple_store(PLpgSQL_execstate *estate);
! static bool compatible_tupdesc(TupleDesc td1, TupleDesc td2);
static void exec_set_found(PLpgSQL_execstate *estate, bool state);
static void plpgsql_create_econtext(PLpgSQL_execstate *estate);
static void free_var(PLpgSQL_var *var);
--- 188,195 ----
Oid reqtype, int32 reqtypmod,
bool isnull);
static void exec_init_tuple_store(PLpgSQL_execstate *estate);
! static void validate_tupdesc_compat(TupleDesc expected, TupleDesc returned,
! const char *msg);
static void exec_set_found(PLpgSQL_execstate *estate, bool state);
static void plpgsql_create_econtext(PLpgSQL_execstate *estate);
static void free_var(PLpgSQL_var *var);
***************
*** 384,394 ****
{
case TYPEFUNC_COMPOSITE:
/* got the expected result rowtype, now check it */
! if (estate.rettupdesc == NULL ||
! !compatible_tupdesc(estate.rettupdesc, tupdesc))
! ereport(ERROR,
! (errcode(ERRCODE_DATATYPE_MISMATCH),
! errmsg("returned record type does not match expected record type")));
break;
case TYPEFUNC_RECORD:

--- 385,392 ----
{
case TYPEFUNC_COMPOSITE:
/* got the expected result rowtype, now check it */
! validate_tupdesc_compat(tupdesc, estate.rettupdesc,
! "returned record type does not match expected record type");
break;
case TYPEFUNC_RECORD:

***************
*** 705,715 ****
rettup = NULL;
else
{
! if (!compatible_tupdesc(estate.rettupdesc,
! trigdata->tg_relation->rd_att))
! ereport(ERROR,
! (errcode(ERRCODE_DATATYPE_MISMATCH),
! errmsg("returned tuple structure does not match table of trigger event")));
/* Copy tuple to upper executor memory */
rettup = SPI_copytuple((HeapTuple) DatumGetPointer(estate.retval));
}
--- 703,711 ----
rettup = NULL;
else
{
! validate_tupdesc_compat(trigdata->tg_relation->rd_att,
! estate.rettupdesc,
! "returned tuple structure does not match table of trigger event");
/* Copy tuple to upper executor memory */
rettup = SPI_copytuple((HeapTuple) DatumGetPointer(estate.retval));
}
***************
*** 2199,2209 ****
(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
errmsg("record \"%s\" is not assigned yet",
rec->refname),
! errdetail("The tuple structure of a not-yet-assigned record is indeterminate.")));
! if (!compatible_tupdesc(tupdesc, rec->tupdesc))
! ereport(ERROR,
! (errcode(ERRCODE_DATATYPE_MISMATCH),
! errmsg("wrong record type supplied in RETURN NEXT")));
tuple = rec->tup;
}
break;
--- 2195,2204 ----
(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
errmsg("record \"%s\" is not assigned yet",
rec->refname),
! errdetail("The tuple structure of a not-yet-assigned"
! " record is indeterminate.")));
! validate_tupdesc_compat(tupdesc, rec->tupdesc,
! "wrong record type supplied in RETURN NEXT");
tuple = rec->tup;
}
break;
***************
*** 2309,2318 ****
stmt->params);
}

! if (!compatible_tupdesc(estate->rettupdesc, portal->tupDesc))
! ereport(ERROR,
! (errcode(ERRCODE_DATATYPE_MISMATCH),
! errmsg("structure of query does not match function result type")));

while (true)
{
--- 2304,2311 ----
stmt->params);
}

! validate_tupdesc_compat(estate->rettupdesc, portal->tupDesc,
! "structure of query does not match function result type");

while (true)
{
***************
*** 5145,5167 ****
}

/*
! * Check two tupledescs have matching number and types of attributes
*/
! static bool
! compatible_tupdesc(TupleDesc td1, TupleDesc td2)
{
! int i;

! if (td1->natts != td2->natts)
! return false;

! for (i = 0; i < td1->natts; i++)
! {
! if (td1->attrs[i]->atttypid != td2->attrs[i]->atttypid)
! return false;
! }

! return true;
}

/* ----------
--- 5138,5174 ----
}

/*
! * Validates compatibility of supplied TupleDesc pair by checking number and type
! * of attributes.
*/
! static void
! validate_tupdesc_compat(TupleDesc expected, TupleDesc returned, const char *msg)
{
! int i;

! if (!expected || !returned)
! ereport(ERROR,
! (errcode(ERRCODE_DATATYPE_MISMATCH),
! errmsg("%s", msg)));

! if (expected->natts != returned->natts)
! ereport(ERROR,
! (errcode(ERRCODE_DATATYPE_MISMATCH),
! errmsg("%s", msg),
! errdetail("Number of returned columns (%d) does not match expected column count (%d).",
! returned->natts, expected->natts)));

! for (i = 0; i < expected->natts; i++)
! if (expected->attrs[i]->atttypid != returned->attrs[i]->atttypid)
! ereport(ERROR,
! (errcode(ERRCODE_DATATYPE_MISMATCH),
! errmsg("%s", msg),
! errdetail("Returned type \"%s\" does not match expected type \"%s\" in column \"%s\".",
! format_type_with_typemod(returned->attrs[i]->atttypid,
! returned->attrs[i]->atttypmod),
! format_type_with_typemod(expected->attrs[i]->atttypid,
! expected->attrs[i]->atttypmod),
! NameStr(expected->attrs[i]->attname))));
}

/* ----------
On Fri, 5 Sep 2008, Alvaro Herrera <alvherre@commandprompt.com> writes:
> Please use the patch I posted yesterday, as it had all the issues I
> found fixed. There were other changes in that patch too.

My bad. The patch is modified with respect to suggestions [1][2] from
Tom. (All 115 tests passed against cvs tip.)


Regards.

[1] "char *msg" is replaced with "const char *msg".

[2] "errmsg(msg)" is replaced with 'errmsg("%s", msg)'.

Re: [HACKERS] 8.4devel out of memory

>>> "Kevin Grittner" <Kevin.Grittner@wicourts.gov> wrote:

> ERROR: out of memory
> DETAIL: Failed on request of size 8.

> What would be the reasonable next step here?

I bet the log would be of interest. :-)

-Kevin

[COMMITTERS] npgsql - Npgsql2: Fixed ClearAllPools which were missing a line to

Log Message:
-----------

Fixed ClearAllPools, which was missing a line to remove the ConnectorsList. Thanks to Christian Holzner (support at tuga dot it) for the heads-up and patch.

Modified Files:
--------------
Npgsql2/src/Npgsql:
NpgsqlConnectorPool.cs (r1.11 -> r1.12)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/npgsql/Npgsql2/src/Npgsql/NpgsqlConnectorPool.cs.diff?r1=1.11&r2=1.12)


Re: [HACKERS] Need more reviewers!

On Fri, 2008-09-05 at 17:19 +0300, Marko Kreen wrote:
> >
> > I think this should be organised with different kinds of reviewer:
>
> The list is correct but too verbose. And it does not attack the core
> of the problem. I think the problem is not:
>
> What can/should I do?
>
> but instead:
>
> Can I take the responsibility?

Completely agree. The list was really an example of the different styles
of review that are possible, not a rigid categorisation that must be
followed.

--
Simon Riggs www.2ndQuadrant.com
PostgreSQL Training, Services and Support



[COMMITTERS] plproxy - plproxy: update

Log Message:
-----------
update

Modified Files:
--------------
plproxy:
AUTHORS (r1.3 -> r1.4)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/plproxy/plproxy/AUTHORS.diff?r1=1.3&r2=1.4)


[COMMITTERS] stackbuilder - wizard: Allow installed apps to be reinstalled.

Log Message:
-----------
Allow installed apps to be reinstalled.

Modified Files:
--------------
wizard:
AppList.cpp (r1.18 -> r1.19)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/stackbuilder/wizard/AppList.cpp.diff?r1=1.18&r2=1.19)


Re: [HACKERS] Need more reviewers!

On 9/5/08, Simon Riggs <simon@2ndquadrant.com> wrote:
> On Fri, 2008-09-05 at 16:03 +0200, Markus Wanner wrote:
> > > I don't *want* the rule, I just think we *need* the rule because
> > > otherwise sponsors/managers/etc make business decisions to exclude that
> > > aspect of the software dev process.
> >
> > I agree that making sponsors/managers/etc aware of that aspect of the
> > dev process is necessary and worthwhile. However, I don't think a rule
> > for *patch submitters* helps with that. There must be other ways to
> > convince managers to encourage reviewers.
>
> Such as? You might think those arguments exist and work, but I would say
> they manifestly do not. Almost all people doing reviews are people that
> have considerable control over their own time, or are directed by people
> that understand the Postgres review process and wish to contribute to it
> for commercial reasons.

Well, the number of companies who are *interested* in their patches getting
in is rather small... I think it's more common for companies to think
they are already donating to Postgres by encouraging their staff to
write patches and publish them.

So applying such a strict policy to everyone seems like a bad idea,
although I quite agree on strongly encouraging patch submitters to review.
And those 3-4 companies who have direct commercial interests in Postgres
development should probably internally rethink their time allocation.

Note also that we are only on our 2nd commitfest, so it's quite normal that
people are not used to the process yet.

We need to work on a few political aspects:

* Making reviewers feel more at ease.
* Encouraging patch submitters to review.

And technical aspects:

* The (hopefully short and relaxed) rules for reviewers should be
more visible. Best would be on (every) Commitfest page.
* Wiki editing rules should be visible.

Well, and then:

* Although the wiki looks nice, it's a pain to operate.

--
marko


[HACKERS] 8.4devel out of memory

QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
HashAggregate (cost=142884.33..142932.05 rows=3181 width=13)
-> Hash Left Join (cost=25179.44..142868.43 rows=3181 width=13)
Hash Cond: (("*SELECT* 1"."matterNo")::text = ("M"."matterNo")::text)
Join Filter: ((("MH".date)::date <= (('1974-05-15'::date + generate_series(0, (('now'::text)::date - '1974-05-15'::date))))) AND (NOT (subplan)))
Filter: ((COALESCE(("MEC"."newStatusCode")::character varying, 'OP'::character varying))::text <> 'CL'::text)
-> Nested Loop (cost=529.05..66375.07 rows=126 width=49)
-> Nested Loop (cost=529.05..66339.68 rows=126 width=81)
Join Filter: ((("*SELECT* 1".date)::date <= (('1974-05-15'::date + generate_series(0, (('now'::text)::date - '1974-05-15'::date))))) AND (NOT (subplan)) AND (NOT (subplan)))
-> Result (cost=0.00..0.02 rows=1 width=0)
-> Hash Join (cost=529.05..26811.51 rows=1513 width=83)
Hash Cond: (("*SELECT* 1"."matterNo")::text = (s."matterNo")::text)
-> Append (cost=6.64..26033.63 rows=64090 width=70)
-> Subquery Scan "*SELECT* 1" (cost=6.64..25383.01 rows=36954 width=70)
-> Hash Join (cost=6.64..25013.47 rows=36954 width=135)
Hash Cond: (("MH"."matterEventCode")::text = ("MEC"."matterEventCode")::text)
-> Nested Loop (cost=0.57..23873.98 rows=105156 width=135)
-> Seq Scan on "Matter" "M" (cost=0.00..379.26 rows=27136 width=112)
Filter: (("matterType")::text <> 'LT'::text)
-> Bitmap Heap Scan on "MatterHist" "MH" (cost=0.57..0.75 rows=8 width=23)
Recheck Cond: ((("MH"."matterNo")::text = ("M"."matterNo")::text) OR (("MH"."matterNo")::text = ("M"."litigationMatterNo")::text))
-> BitmapOr (cost=0.57..0.57 rows=8 width=0)
-> Bitmap Index Scan on "MatterHist_pkey" (cost=0.00..0.28 rows=4 width=0)
Index Cond: (("MH"."matterNo")::text = ("M"."matterNo")::text)
-> Bitmap Index Scan on "MatterHist_pkey" (cost=0.00..0.28 rows=4 width=0)
Index Cond: (("MH"."matterNo")::text = ("M"."litigationMatterNo")::text)
-> Hash (cost=4.37..4.37 rows=136 width=8)
-> Seq Scan on "MatterEventCode" "MEC" (cost=0.00..4.37 rows=136 width=8)
Filter: ("newStageCode" IS NOT NULL)
-> Subquery Scan "*SELECT* 2" (cost=0.00..650.62 rows=27136 width=70)
-> Seq Scan on "Matter" "M" (cost=0.00..379.26 rows=27136 width=112)
Filter: (("matterType")::text <> 'LT'::text)
-> Hash (cost=514.39..514.39 rows=642 width=13)
-> Nested Loop (cost=6.23..514.39 rows=642 width=13)
Join Filter: (((d."matterNo")::text = (s."litigationMatterNo")::text) OR ((s."litigationMatterNo" IS NULL) AND ((d."matterNo")::text = (s."matterNo")::text)))
-> Bitmap Heap Scan on "Matter" d (cost=5.68..49.10 rows=642 width=13)
Recheck Cond: (("matterStatusCode")::text = ANY (('{OP,RO}'::character varying[])::text[]))
-> Bitmap Index Scan on "Matter_MatterStatusCode" (cost=0.00..5.52 rows=642 width=0)
Index Cond: (("matterStatusCode")::text = ANY (('{OP,RO}'::character varying[])::text[]))
-> Bitmap Heap Scan on "Matter" s (cost=0.55..0.68 rows=3 width=26)
Recheck Cond: (((d."matterNo")::text = (s."litigationMatterNo")::text) OR ((d."matterNo")::text = (s."matterNo")::text))
-> BitmapOr (cost=0.55..0.55 rows=3 width=0)
-> Bitmap Index Scan on "Matter_LitigationMatterNo" (cost=0.00..0.27 rows=2 width=0)
Index Cond: ((d."matterNo")::text = (s."litigationMatterNo")::text)
-> Bitmap Index Scan on "Matter_pkey" (cost=0.00..0.27 rows=1 width=0)
Index Cond: ((d."matterNo")::text = (s."matterNo")::text)
SubPlan
-> Nested Loop (cost=0.76..24.15 rows=1 width=722)
-> Nested Loop (cost=0.76..23.86 rows=1 width=563)
Join Filter: (NOT (subplan))
-> Index Scan using "Matter_pkey" on "Matter" (cost=0.00..0.47 rows=1 width=26)
Index Cond: (("matterNo")::text = ($0)::text)
Filter: (("matterType")::text <> 'LT'::text)
-> Bitmap Heap Scan on "MatterHist" (cost=0.76..1.66 rows=8 width=550)
Recheck Cond: (((public."MatterHist"."matterNo")::text = (public."Matter"."matterNo")::text) OR ((public."MatterHist"."matterNo")::text = (public."Matter"."litigationMatterNo")::text))
Filter: ((public."MatterHist".date)::date <= $1)
-> BitmapOr (cost=0.76..0.76 rows=8 width=0)
-> Bitmap Index Scan on "MatterHist_pkey" (cost=0.00..0.38 rows=4 width=0)
Index Cond: ((public."MatterHist"."matterNo")::text = (public."Matter"."matterNo")::text)
-> Bitmap Index Scan on "MatterHist_pkey" (cost=0.00..0.38 rows=4 width=0)
Index Cond: ((public."MatterHist"."matterNo")::text = (public."Matter"."litigationMatterNo")::text)
SubPlan
-> Nested Loop (cost=0.76..2.70 rows=1 width=722)
-> Nested Loop (cost=0.76..2.41 rows=1 width=563)
Join Filter: (ROW((public."MatterHist".date)::date, CASE WHEN ((public."MatterHist"."matterNo")::text = (public."Matter"."matterNo")::text) THEN (public."MatterHist"."matterHistSeqNo")::integer ELSE ((public."MatterHist"."matterHistSeqNo")::smallint + 10000) END) > ROW(($31)::date, CASE WHEN (($32)::text = ($33)::text) THEN ($34)::integer ELSE (($34)::smallint + 10000) END))
-> Index Scan using "Matter_pkey" on "Matter" (cost=0.00..0.47 rows=1 width=26)
Index Cond: (("matterNo")::text = ($0)::text)
Filter: (("matterType")::text <> 'LT'::text)
-> Bitmap Heap Scan on "MatterHist" (cost=0.76..1.66 rows=8 width=550)
Recheck Cond: (((public."MatterHist"."matterNo")::text = (public."Matter"."matterNo")::text) OR ((public."MatterHist"."matterNo")::text = (public."Matter"."litigationMatterNo")::text))
Filter: ((public."MatterHist".date)::date <= $1)
-> BitmapOr (cost=0.76..0.76 rows=8 width=0)
-> Bitmap Index Scan on "MatterHist_pkey" (cost=0.00..0.38 rows=4 width=0)
Index Cond: ((public."MatterHist"."matterNo")::text = (public."Matter"."matterNo")::text)
-> Bitmap Index Scan on "MatterHist_pkey" (cost=0.00..0.38 rows=4 width=0)
Index Cond: ((public."MatterHist"."matterNo")::text = (public."Matter"."litigationMatterNo")::text)
-> Index Scan using "MatterEventCode_pkey" on "MatterEventCode" "MEC2" (cost=0.00..0.27 rows=1 width=159)
Index Cond: (("MEC2"."matterEventCode")::text = (public."MatterHist"."matterEventCode")::text)
Filter: (("MEC2"."removeMaintCode")::text = 'INA'::text)
-> Index Scan using "MatterEventCode_pkey" on "MatterEventCode" "MEC1" (cost=0.00..0.27 rows=1 width=159)
Index Cond: (("MEC1"."matterEventCode")::text = (public."MatterHist"."matterEventCode")::text)
Filter: (("MEC1"."newMaintCode")::text = 'INA'::text)
-> Result (cost=0.76..3.16 rows=2 width=359)
-> Append (cost=0.76..3.16 rows=2 width=359)
-> Nested Loop (cost=0.76..2.66 rows=1 width=135)
-> Nested Loop (cost=0.76..2.37 rows=1 width=135)
Join Filter: (ROW(("MH".date)::date, (CASE WHEN (("MH"."matterNo")::text = ("M"."matterNo")::text) THEN ("MH"."matterHistSeqNo")::integer ELSE (("MH"."matterHistSeqNo")::smallint + 10000) END)::smallint) > ROW(($2)::date, $3))
-> Index Scan using "Matter_pkey" on "Matter" "M" (cost=0.00..0.47 rows=1 width=112)
Index Cond: (("matterNo")::text = ($0)::text)
Filter: (("matterType")::text <> 'LT'::text)
-> Bitmap Heap Scan on "MatterHist" "MH" (cost=0.76..1.66 rows=8 width=23)
Recheck Cond: ((("MH"."matterNo")::text = ("M"."matterNo")::text) OR (("MH"."matterNo")::text = ("M"."litigationMatterNo")::text))
Filter: (("MH".date)::date <= $1)
-> BitmapOr (cost=0.76..0.76 rows=8 width=0)
-> Bitmap Index Scan on "MatterHist_pkey" (cost=0.00..0.38 rows=4 width=0)
Index Cond: (("MH"."matterNo")::text = ("M"."matterNo")::text)
-> Bitmap Index Scan on "MatterHist_pkey" (cost=0.00..0.38 rows=4 width=0)
Index Cond: (("MH"."matterNo")::text = ("M"."litigationMatterNo")::text)
-> Index Scan using "MatterEventCode_pkey" on "MatterEventCode" "MEC" (cost=0.00..0.27 rows=1 width=8)
Index Cond: (("MEC"."matterEventCode")::text = ("MH"."matterEventCode")::text)
Filter: ("MEC"."newStageCode" IS NOT NULL)
-> Index Scan using "Matter_pkey" on "Matter" "M" (cost=0.00..0.48 rows=1 width=112)
Index Cond: (("matterNo")::text = ($0)::text)
Filter: ((("matterType")::text <> 'LT'::text) AND (("filedDate")::date <= $1) AND (ROW(("filedDate")::date, 0::smallint) > ROW(($2)::date, $3)))
-> Index Scan using "Matter_pkey" on "Matter" "L" (cost=0.00..0.27 rows=1 width=13)
Index Cond: (("L"."matterNo")::text = (COALESCE("*SELECT* 1"."litigationMatterNo", "*SELECT* 1"."matterNo"))::text)
-> Hash (cost=24269.98..24269.98 rows=30433 width=70)
-> Nested Loop (cost=7.26..23965.65 rows=30433 width=35)
-> Hash Join (cost=6.74..2200.73 rows=30394 width=22)
Hash Cond: (("MH"."matterEventCode")::text = ("MEC"."matterEventCode")::text)
-> Seq Scan on "MatterHist" "MH" (cost=0.00..1496.22 rows=105022 width=23)
-> Hash (cost=5.34..5.34 rows=112 width=7)
-> Seq Scan on "MatterEventCode" "MEC" (cost=0.00..5.34 rows=112 width=7)
Filter: (("newStatusCode" IS NOT NULL) AND (("newStatusCode")::text <> 'CT'::text))
-> Bitmap Heap Scan on "Matter" "M" (cost=0.52..0.66 rows=3 width=26)
Recheck Cond: ((("MH"."matterNo")::text = ("M"."matterNo")::text) OR (("MH"."matterNo")::text = ("M"."litigationMatterNo")::text))
Filter: (("M"."matterType")::text <> 'LT'::text)
-> BitmapOr (cost=0.52..0.52 rows=3 width=0)
-> Bitmap Index Scan on "Matter_pkey" (cost=0.00..0.26 rows=1 width=0)
Index Cond: (("MH"."matterNo")::text = ("M"."matterNo")::text)
-> Bitmap Index Scan on "Matter_LitigationMatterNo" (cost=0.00..0.27 rows=2 width=0)
Index Cond: (("MH"."matterNo")::text = ("M"."litigationMatterNo")::text)
SubPlan
-> Nested Loop (cost=0.76..2.66 rows=1 width=35)
-> Nested Loop (cost=0.76..2.37 rows=1 width=36)
Join Filter: (ROW(("MH".date)::date, (CASE WHEN (("MH"."matterNo")::text = ("M"."matterNo")::text) THEN ("MH"."matterHistSeqNo")::integer ELSE (("MH"."matterHistSeqNo")::smallint + 10000) END)::smallint) > ROW(($25)::date, $26))
-> Index Scan using "Matter_pkey" on "Matter" "M" (cost=0.00..0.47 rows=1 width=26)
Index Cond: (("matterNo")::text = ($24)::text)
Filter: (("matterType")::text <> 'LT'::text)
-> Bitmap Heap Scan on "MatterHist" "MH" (cost=0.76..1.66 rows=8 width=23)
Recheck Cond: ((("MH"."matterNo")::text = ("M"."matterNo")::text) OR (("MH"."matterNo")::text = ("M"."litigationMatterNo")::text))
Filter: (("MH".date)::date <= $1)
-> BitmapOr (cost=0.76..0.76 rows=8 width=0)
-> Bitmap Index Scan on "MatterHist_pkey" (cost=0.00..0.38 rows=4 width=0)
Index Cond: (("MH"."matterNo")::text = ("M"."matterNo")::text)
-> Bitmap Index Scan on "MatterHist_pkey" (cost=0.00..0.38 rows=4 width=0)
Index Cond: (("MH"."matterNo")::text = ("M"."litigationMatterNo")::text)
-> Index Scan using "MatterEventCode_pkey" on "MatterEventCode" "MEC" (cost=0.00..0.27 rows=1 width=7)
Index Cond: (("MEC"."matterEventCode")::text = ("MH"."matterEventCode")::text)
Filter: (("MEC"."newStatusCode" IS NOT NULL) AND (("MEC"."newStatusCode")::text <> 'CT'::text))
(139 rows)

QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
HashAggregate (cost=180410.55..180495.64 rows=5673 width=13)
-> Hash Left Join (cost=52015.63..180382.18 rows=5673 width=13)
Hash Cond: (("*SELECT* 1"."matterNo")::text = ("M"."matterNo")::text)
Join Filter: ((("MH".date)::date <= (('1974-05-15'::date + generate_series(0, (('now'::text)::date - '1974-05-15'::date))))) AND (NOT (subplan)))
Filter: ((COALESCE(("MEC"."newStatusCode")::character varying, 'OP'::character varying))::text <> 'CL'::text)
-> Nested Loop (cost=27365.34..63496.22 rows=225 width=49)
-> Hash Anti Join (cost=27365.34..63433.02 rows=225 width=81)
Hash Cond: (("*SELECT* 1"."matterNo")::text = ("*SELECT* 1"."matterNo")::text)
Join Filter: ((("*SELECT* 1".date)::date <= (('1974-05-15'::date + generate_series(0, (('now'::text)::date - '1974-05-15'::date))))) AND (ROW(("*SELECT* 1".date)::date, "*SELECT* 1"."matterHistRowOrder") > ROW(("*SELECT* 1".date)::date, "*SELECT* 1"."matterHistRowOrder")))
-> Nested Loop (cost=530.51..34570.69 rows=253 width=87)
Join Filter: ((("*SELECT* 1".date)::date <= (('1974-05-15'::date + generate_series(0, (('now'::text)::date - '1974-05-15'::date))))) AND (NOT (subplan)))
-> Result (cost=0.00..0.02 rows=1 width=0)
-> Hash Join (cost=530.51..26813.09 rows=1518 width=83)
Hash Cond: (("*SELECT* 1"."matterNo")::text = (s."matterNo")::text)
-> Append (cost=6.64..26033.70 rows=64091 width=70)
-> Subquery Scan "*SELECT* 1" (cost=6.64..25383.06 rows=36955 width=70)
-> Hash Join (cost=6.64..25013.51 rows=36955 width=135)
Hash Cond: (("MH"."matterEventCode")::text = ("MEC"."matterEventCode")::text)
-> Nested Loop (cost=0.57..23874.00 rows=105159 width=135)
-> Seq Scan on "Matter" "M" (cost=0.00..379.28 rows=27136 width=112)
Filter: (("matterType")::text <> 'LT'::text)
-> Bitmap Heap Scan on "MatterHist" "MH" (cost=0.57..0.75 rows=8 width=23)
Recheck Cond: ((("MH"."matterNo")::text = ("M"."matterNo")::text) OR (("MH"."matterNo")::text = ("M"."litigationMatterNo")::text))
-> BitmapOr (cost=0.57..0.57 rows=8 width=0)
-> Bitmap Index Scan on "MatterHist_pkey" (cost=0.00..0.28 rows=4 width=0)
Index Cond: (("MH"."matterNo")::text = ("M"."matterNo")::text)
-> Bitmap Index Scan on "MatterHist_pkey" (cost=0.00..0.28 rows=4 width=0)
Index Cond: (("MH"."matterNo")::text = ("M"."litigationMatterNo")::text)
-> Hash (cost=4.37..4.37 rows=136 width=8)
-> Seq Scan on "MatterEventCode" "MEC" (cost=0.00..4.37 rows=136 width=8)
Filter: ("newStageCode" IS NOT NULL)
-> Subquery Scan "*SELECT* 2" (cost=0.00..650.63 rows=27136 width=70)
-> Seq Scan on "Matter" "M" (cost=0.00..379.28 rows=27136 width=112)
Filter: (("matterType")::text <> 'LT'::text)
-> Hash (cost=515.82..515.82 rows=644 width=13)
-> Nested Loop (cost=6.24..515.82 rows=644 width=13)
Join Filter: (((d."matterNo")::text = (s."litigationMatterNo")::text) OR ((s."litigationMatterNo" IS NULL) AND ((d."matterNo")::text = (s."matterNo")::text)))
-> Bitmap Heap Scan on "Matter" d (cost=5.69..49.14 rows=644 width=13)
Recheck Cond: (("matterStatusCode")::text = ANY ('{OP,RO}'::text[]))
-> Bitmap Index Scan on "Matter_MatterStatusCode" (cost=0.00..5.53 rows=644 width=0)
Index Cond: (("matterStatusCode")::text = ANY ('{OP,RO}'::text[]))
-> Bitmap Heap Scan on "Matter" s (cost=0.55..0.68 rows=3 width=26)
Recheck Cond: (((d."matterNo")::text = (s."litigationMatterNo")::text) OR ((d."matterNo")::text = (s."matterNo")::text))
-> BitmapOr (cost=0.55..0.55 rows=3 width=0)
-> Bitmap Index Scan on "Matter_LitigationMatterNo" (cost=0.00..0.27 rows=2 width=0)
Index Cond: ((d."matterNo")::text = (s."litigationMatterNo")::text)
-> Bitmap Index Scan on "Matter_pkey" (cost=0.00..0.27 rows=1 width=0)
Index Cond: ((d."matterNo")::text = (s."matterNo")::text)
SubPlan
-> Nested Loop (cost=1.52..5.10 rows=1 width=0)
-> Nested Loop Anti Join (cost=1.52..4.82 rows=1 width=4)
Join Filter: (ROW((public."MatterHist".date)::date, CASE WHEN ((public."MatterHist"."matterNo")::text = (public."Matter"."matterNo")::text) THEN (public."MatterHist"."matterHistSeqNo")::integer ELSE ((public."MatterHist"."matterHistSeqNo")::smallint + 10000) END) > ROW((public."MatterHist".date)::date, CASE WHEN ((public."MatterHist"."matterNo")::text = (public."Matter"."matterNo")::text) THEN (public."MatterHist"."matterHistSeqNo")::integer ELSE ((public."MatterHist"."matterHistSeqNo")::smallint + 10000) END))
-> Nested Loop (cost=0.76..2.25 rows=1 width=36)
-> Index Scan using "Matter_pkey" on "Matter" (cost=0.00..0.47 rows=1 width=26)
Index Cond: (("matterNo")::text = ($4)::text)
Filter: (("matterType")::text <> 'LT'::text)
-> Bitmap Heap Scan on "MatterHist" (cost=0.76..1.66 rows=8 width=23)
Recheck Cond: (((public."MatterHist"."matterNo")::text = (public."Matter"."matterNo")::text) OR ((public."MatterHist"."matterNo")::text = (public."Matter"."litigationMatterNo")::text))
Filter: ((public."MatterHist".date)::date <= $1)
-> BitmapOr (cost=0.76..0.76 rows=8 width=0)
-> Bitmap Index Scan on "MatterHist_pkey" (cost=0.00..0.38 rows=4 width=0)
Index Cond: ((public."MatterHist"."matterNo")::text = (public."Matter"."matterNo")::text)
-> Bitmap Index Scan on "MatterHist_pkey" (cost=0.00..0.38 rows=4 width=0)
Index Cond: ((public."MatterHist"."matterNo")::text = (public."Matter"."litigationMatterNo")::text)
-> Nested Loop (cost=0.76..2.53 rows=1 width=32)
-> Nested Loop (cost=0.76..2.25 rows=1 width=36)
-> Index Scan using "Matter_pkey" on "Matter" (cost=0.00..0.47 rows=1 width=26)
Index Cond: (("matterNo")::text = ($4)::text)
Filter: (("matterType")::text <> 'LT'::text)
-> Bitmap Heap Scan on "MatterHist" (cost=0.76..1.66 rows=8 width=23)
Recheck Cond: (((public."MatterHist"."matterNo")::text = (public."Matter"."matterNo")::text) OR ((public."MatterHist"."matterNo")::text = (public."Matter"."litigationMatterNo")::text))
Filter: ((public."MatterHist".date)::date <= $1)
-> BitmapOr (cost=0.76..0.76 rows=8 width=0)
-> Bitmap Index Scan on "MatterHist_pkey" (cost=0.00..0.38 rows=4 width=0)
Index Cond: ((public."MatterHist"."matterNo")::text = (public."Matter"."matterNo")::text)
-> Bitmap Index Scan on "MatterHist_pkey" (cost=0.00..0.38 rows=4 width=0)
Index Cond: ((public."MatterHist"."matterNo")::text = (public."Matter"."litigationMatterNo")::text)
-> Index Scan using "MatterEventCode_pkey" on "MatterEventCode" "MEC2" (cost=0.00..0.27 rows=1 width=4)
Index Cond: (("MEC2"."matterEventCode")::text = (public."MatterHist"."matterEventCode")::text)
Filter: (("MEC2"."removeMaintCode")::text = 'INA'::text)
-> Index Scan using "MatterEventCode_pkey" on "MatterEventCode" "MEC1" (cost=0.00..0.27 rows=1 width=4)
Index Cond: (("MEC1"."matterEventCode")::text = (public."MatterHist"."matterEventCode")::text)
Filter: (("MEC1"."newMaintCode")::text = 'INA'::text)
-> Hash (cost=26033.70..26033.70 rows=64091 width=38)
-> Append (cost=6.64..26033.70 rows=64091 width=38)
-> Subquery Scan "*SELECT* 1" (cost=6.64..25383.06 rows=36955 width=38)
-> Hash Join (cost=6.64..25013.51 rows=36955 width=135)
Hash Cond: (("MH"."matterEventCode")::text = ("MEC"."matterEventCode")::text)
-> Nested Loop (cost=0.57..23874.00 rows=105159 width=135)
-> Seq Scan on "Matter" "M" (cost=0.00..379.28 rows=27136 width=112)
Filter: (("matterType")::text <> 'LT'::text)
-> Bitmap Heap Scan on "MatterHist" "MH" (cost=0.57..0.75 rows=8 width=23)
Recheck Cond: ((("MH"."matterNo")::text = ("M"."matterNo")::text) OR (("MH"."matterNo")::text = ("M"."litigationMatterNo")::text))
-> BitmapOr (cost=0.57..0.57 rows=8 width=0)
-> Bitmap Index Scan on "MatterHist_pkey" (cost=0.00..0.28 rows=4 width=0)
Index Cond: (("MH"."matterNo")::text = ("M"."matterNo")::text)
-> Bitmap Index Scan on "MatterHist_pkey" (cost=0.00..0.28 rows=4 width=0)
Index Cond: (("MH"."matterNo")::text = ("M"."litigationMatterNo")::text)
-> Hash (cost=4.37..4.37 rows=136 width=8)
-> Seq Scan on "MatterEventCode" "MEC" (cost=0.00..4.37 rows=136 width=8)
Filter: ("newStageCode" IS NOT NULL)
-> Subquery Scan "*SELECT* 2" (cost=0.00..650.63 rows=27136 width=38)
-> Seq Scan on "Matter" "M" (cost=0.00..379.28 rows=27136 width=112)
Filter: (("matterType")::text <> 'LT'::text)
-> Index Scan using "Matter_pkey" on "Matter" "L" (cost=0.00..0.27 rows=1 width=13)
Index Cond: (("L"."matterNo")::text = (COALESCE("*SELECT* 1"."litigationMatterNo", "*SELECT* 1"."matterNo"))::text)
-> Hash (cost=24269.86..24269.86 rows=30434 width=70)
-> Nested Loop (cost=7.26..23965.52 rows=30434 width=35)
-> Hash Join (cost=6.74..2199.88 rows=30395 width=22)
Hash Cond: (("MH"."matterEventCode")::text = ("MEC"."matterEventCode")::text)
-> Seq Scan on "MatterHist" "MH" (cost=0.00..1495.35 rows=105025 width=23)
-> Hash (cost=5.34..5.34 rows=112 width=7)
-> Seq Scan on "MatterEventCode" "MEC" (cost=0.00..5.34 rows=112 width=7)
Filter: (("newStatusCode" IS NOT NULL) AND (("newStatusCode")::text <> 'CT'::text))
-> Bitmap Heap Scan on "Matter" "M" (cost=0.52..0.66 rows=3 width=26)
Recheck Cond: ((("MH"."matterNo")::text = ("M"."matterNo")::text) OR (("MH"."matterNo")::text = ("M"."litigationMatterNo")::text))
Filter: (("M"."matterType")::text <> 'LT'::text)
-> BitmapOr (cost=0.52..0.52 rows=3 width=0)
-> Bitmap Index Scan on "Matter_pkey" (cost=0.00..0.26 rows=1 width=0)
Index Cond: (("MH"."matterNo")::text = ("M"."matterNo")::text)
-> Bitmap Index Scan on "Matter_LitigationMatterNo" (cost=0.00..0.27 rows=2 width=0)
Index Cond: (("MH"."matterNo")::text = ("M"."litigationMatterNo")::text)
SubPlan
-> Nested Loop (cost=0.76..2.65 rows=1 width=0)
-> Nested Loop (cost=0.76..2.37 rows=1 width=4)
Join Filter: (ROW(("MH".date)::date, (CASE WHEN (("MH"."matterNo")::text = ("M"."matterNo")::text) THEN ("MH"."matterHistSeqNo")::integer ELSE (("MH"."matterHistSeqNo")::smallint + 10000) END)::smallint) > ROW(($2)::date, $3))
-> Index Scan using "Matter_pkey" on "Matter" "M" (cost=0.00..0.47 rows=1 width=26)
Index Cond: (("matterNo")::text = ($0)::text)
Filter: (("matterType")::text <> 'LT'::text)
-> Bitmap Heap Scan on "MatterHist" "MH" (cost=0.76..1.66 rows=8 width=23)
Recheck Cond: ((("MH"."matterNo")::text = ("M"."matterNo")::text) OR (("MH"."matterNo")::text = ("M"."litigationMatterNo")::text))
Filter: (("MH".date)::date <= $1)
-> BitmapOr (cost=0.76..0.76 rows=8 width=0)
-> Bitmap Index Scan on "MatterHist_pkey" (cost=0.00..0.38 rows=4 width=0)
Index Cond: (("MH"."matterNo")::text = ("M"."matterNo")::text)
-> Bitmap Index Scan on "MatterHist_pkey" (cost=0.00..0.38 rows=4 width=0)
Index Cond: (("MH"."matterNo")::text = ("M"."litigationMatterNo")::text)
-> Index Scan using "MatterEventCode_pkey" on "MatterEventCode" "MEC" (cost=0.00..0.27 rows=1 width=4)
Index Cond: (("MEC"."matterEventCode")::text = ("MH"."matterEventCode")::text)
Filter: (("MEC"."newStatusCode" IS NOT NULL) AND (("MEC"."newStatusCode")::text <> 'CT'::text))
(140 rows)

listen_addresses = '*'
port = 5512
max_connections = 200
shared_buffers = 256MB
temp_buffers = 10MB
max_prepared_transactions = 0
work_mem = 16MB
maintenance_work_mem = 400MB
max_fsm_pages = 1000000
bgwriter_lru_maxpages = 1000
bgwriter_lru_multiplier = 4.0
wal_buffers = 256kB
checkpoint_segments = 50
seq_page_cost = 0.1
random_page_cost = 0.1
effective_cache_size = 3GB
geqo = off
default_statistics_target = 100
from_collapse_limit = 20
join_collapse_limit = 20
logging_collector = on
log_connections = on
log_disconnections = on
log_line_prefix = '[%m] %p %q<%u %d %r> '
autovacuum_naptime = 1min
autovacuum_vacuum_threshold = 10
autovacuum_analyze_threshold = 10
datestyle = 'iso, mdy'
lc_messages = 'C'
lc_monetary = 'C'
lc_numeric = 'C'
lc_time = 'C'
default_text_search_config = 'pg_catalog.english'
escape_string_warning = off
sql_inheritance = off
standard_conforming_strings = on
I was testing a very complex statistical query, with (among other
things) many EXISTS and NOT EXISTS tests, against a build of the
source snapshot from 3 September. (The query looks pretty innocent,
but those aren't tables, they're complicated views.) Under 8.3.3 this
query runs successfully, but takes a few hours. I started it last
night before leaving, on the same machine where 8.3.3 has been
running, and in the morning found this:

olr=# explain analyze
SELECT
"MS"."sMatterNo",
CAST(COUNT(*) AS int) AS "count"
FROM
"MatterSearch" "MS"
JOIN "MatterDateStat" "S" ON
(
"S"."matterNo" = "MS"."sMatterNo" AND
"S"."isOnHold" = FALSE
)
WHERE
(
"MS"."matterStatusCode" IN ('OP', 'RO')
)
GROUP BY "MS"."sMatterNo"
;
ERROR: out of memory
DETAIL: Failed on request of size 8.

It had been running for about half an hour before I left, and the
error hadn't appeared by then, so I'm pretty sure it took longer than
that for this error to show up.

kgrittn@OLR-DEV-PG:~> df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 20G 8.0G 11G 43% /
tmpfs 2.0G 16K 2.0G 1% /dev/shm
/dev/sda3 253G 7.9G 245G 4% /var/pgsql/data
kgrittn@OLR-DEV-PG:~> free -m
             total       used       free     shared    buffers     cached
Mem:          4049       2239       1809          0         94       1083
-/+ buffers/cache:       1061       2987
Swap:         1027        561        466

There are several development databases on this machine, all fairly
small, but there's enough activity that there is usually no
significant free memory -- it all gets used as cache. The 1.8 GB free
this morning suggests that something allocated and then freed a lot
of memory.

kgrittn@OLR-DEV-PG:~/postgresql-snapshot> uname -a
Linux OLR-DEV-PG 2.6.5-7.286-bigsmp #1 SMP Thu May 31 10:12:58 UTC 2007 i686 i686 i386 GNU/Linux
kgrittn@OLR-DEV-PG:~/postgresql-snapshot> cat /proc/version
Linux version 2.6.5-7.286-bigsmp (geeko@buildhost) (gcc version 3.3.3 (SuSE Linux)) #1 SMP Thu May 31 10:12:58 UTC 2007
kgrittn@OLR-DEV-PG:~/postgresql-snapshot> cat /etc/SuSE-release
SUSE LINUX Enterprise Server 9 (i586)
VERSION = 9
PATCHLEVEL = 3

Attached are the plans from 8.3.3 and 8.4devel. Also attached are the
non-default 8.3.3 postgresql.conf settings; the file is the same for
8.4devel except for the port number. I don't know if the specifics of
the views and tables would be useful here, or just noise, so I'll omit
them unless someone asks for them.
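
For what it's worth, the non-default list above can also be pulled
straight from a running server rather than picked out of the file by
hand -- a rough sketch, using the pg_settings view:

-- List every setting whose value didn't come from the built-in
-- default; the source column reports where each value came from
-- (e.g. 'configuration file').
SELECT name, setting, source
FROM pg_settings
WHERE source <> 'default'
ORDER BY name;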

What would be a reasonable next step here?
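
One low-risk experiment in the meantime -- a sketch, not a diagnosis:
work_mem is a per-sort/per-hash-table budget rather than a per-query
cap, and the attached plans contain quite a few hash nodes, so
retrying with a much smaller setting would at least show whether the
failure tracks that knob:

SET work_mem = '1MB';  -- down from the usual 16MB
EXPLAIN ANALYZE
SELECT
    "MS"."sMatterNo",
    CAST(COUNT(*) AS int) AS "count"
FROM
    "MatterSearch" "MS"
    JOIN "MatterDateStat" "S" ON
    (
        "S"."matterNo" = "MS"."sMatterNo" AND
        "S"."isOnHold" = FALSE
    )
WHERE
    "MS"."matterStatusCode" IN ('OP', 'RO')
GROUP BY "MS"."sMatterNo";
RESET work_mem;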

-Kevin

kgrittn@OLR-DEV-PG:~> /usr/local/pgsql-8.4dev/bin/pg_config
BINDIR = /usr/local/pgsql-8.4dev/bin
DOCDIR = /usr/local/pgsql-8.4dev/share/doc
HTMLDIR = /usr/local/pgsql-8.4dev/share/doc
INCLUDEDIR = /usr/local/pgsql-8.4dev/include
PKGINCLUDEDIR = /usr/local/pgsql-8.4dev/include
INCLUDEDIR-SERVER = /usr/local/pgsql-8.4dev/include/server
LIBDIR = /usr/local/pgsql-8.4dev/lib
PKGLIBDIR = /usr/local/pgsql-8.4dev/lib
LOCALEDIR = /usr/local/pgsql-8.4dev/share/locale
MANDIR = /usr/local/pgsql-8.4dev/share/man
SHAREDIR = /usr/local/pgsql-8.4dev/share
SYSCONFDIR = /usr/local/pgsql-8.4dev/etc
PGXS = /usr/local/pgsql-8.4dev/lib/pgxs/src/makefiles/pgxs.mk
CONFIGURE = '--prefix=/usr/local/pgsql-8.4dev' '--enable-integer-datetimes' '--enable-debug' '--disable-nls'
CC = gcc
CPPFLAGS = -D_GNU_SOURCE
CFLAGS = -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Wendif-labels -fno-strict-aliasing -g
CFLAGS_SL = -fpic
LDFLAGS = -Wl,-rpath,'/usr/local/pgsql-8.4dev/lib'
LDFLAGS_SL =
LIBS = -lpgport -lz -lreadline -lcrypt -ldl -lm
VERSION = PostgreSQL 8.4devel
kgrittn@OLR-DEV-PG:~> /usr/local/pgsql-8.4dev/bin/pg_controldata /var/pgsql/data/kgrittn
pg_control version number: 842
Catalog version number: 200808311
Database system identifier: 5242286260647024629
Database cluster state: in production
pg_control last modified: Thu 04 Sep 2008 05:17:28 PM CDT
Latest checkpoint location: 0/26E7A718
Prior checkpoint location: 0/26E7A6D4
Latest checkpoint's REDO location: 0/26E7A718
Latest checkpoint's TimeLineID: 1
Latest checkpoint's NextXID: 0/3561
Latest checkpoint's NextOID: 49152
Latest checkpoint's NextMultiXactId: 1
Latest checkpoint's NextMultiOffset: 0
Time of latest checkpoint: Thu 04 Sep 2008 05:17:28 PM CDT
Minimum recovery ending location: 0/0
Maximum data alignment: 4
Database block size: 8192
Blocks per segment of large relation: 131072
WAL block size: 8192
Bytes per WAL segment: 16777216
Maximum length of identifiers: 64
Maximum columns in an index: 32
Maximum size of a TOAST chunk: 2000
Date/time type storage: 64-bit integers
Float4 argument passing: by value
Float8 argument passing: by reference
Maximum length of locale name: 128
LC_COLLATE: C
LC_CTYPE: C

Re: [HACKERS] code coverage patch

Gregory Stark wrote:
> Peter Eisentraut <peter_e@gmx.net> writes:
>
> > I have uploaded an example run here:
> > http://developer.postgresql.org/~petere/coverage/
> >
> > Current test coverage is about 66% overall.
>
> With some pretty glaring gaps: 0% coverage of geqo, 0% coverage of logtape
> which implies no tuplesorts are spilling to disk, no coverage of mark/restore
> on index scans...

Yah, that kinda shocked me too. Clearly we should spend some effort to
expand the regression tests a bit.
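
For instance -- a sketch only, since EXPLAIN ANALYZE output is too
unstable to drop into the regression suite verbatim -- two of the
gaps Greg mentions can be exercised with a few lines of SQL:

-- Force an external (on-disk) sort, exercising logtape.c; the Sort
-- node should then report "Sort Method:  external merge  Disk: ...kB".
SET work_mem = '64kB';
EXPLAIN ANALYZE SELECT g FROM generate_series(1, 200000) g ORDER BY g;
RESET work_mem;

-- Push a small join through the genetic optimizer by lowering its
-- threshold to the minimum, so even a three-way catalog join is
-- planned by geqo.
SET geqo = on;
SET geqo_threshold = 2;
EXPLAIN SELECT count(*)
FROM pg_class c
     JOIN pg_namespace n ON n.oid = c.relnamespace
     JOIN pg_attribute a ON a.attrelid = c.oid;
RESET geqo_threshold;
RESET geqo;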

--
Alvaro Herrera http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] Need more reviewers!

Hi,

Simon Riggs wrote:
> Such as?

Dunno. Rules for sponsors, maybe? It would probably make sense not
only to pay a single developer to create and submit a patch, but also
to plan on paying others to review the code.

> You might think those arguments exist and work, but I would say
> they manifestly do not.

Most managers - especially within software companies I'd say - are
pretty much aware of how costly quality assurance (or the lack thereof)
can be, no?

What do you say to potential sponsors who insist that a new feature
be accepted into Postgres itself?

Let's tell *them* that review is costly. Encourage them to pay others
to review your work, for example. Let's coopete ;-) (or whatever the
verb for coopetition is).

Maybe we can do more WRT organizing this reviewing process, including
payment. Some sort of bounty system or something. Dunno, this is just
some brainstorming.

> Almost all people doing reviews are people that
> have considerable control over their own time, or are directed by people
> that understand the Postgres review process and wish to contribute to it
> for commercial reasons.

Sure. I don't quite get where you are going with this argument, sorry.

Regards

Markus Wanner

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [ADMIN] rpm install not recognized by yum.

On Thu, Sep 4, 2008 at 4:50 PM, Scott Marlowe <scott.marlowe@gmail.com> wrote:
> On Thu, Sep 4, 2008 at 2:42 PM, slamp slamp <slackamp@gmail.com> wrote:
>> ok i managed to get this to work. however i still get the "Repository
>> pgdg82 is listed more than once in the configuration", this is
>> probably a rhel yum bug.
>
> Sure you don't have it again in an included yum .conf file? That's
> happened to me before.
>

I'm pretty sure; I even uninstalled the repo rpm and added it
manually to yum.conf, and it gave the same message. CentOS does not
seem to have this issue.

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin