Monday, June 9, 2008

[COMMITTERS] pgrecovery - alpha: New Directory

Update of /cvsroot/pgrecovery/alpha/src
In directory pgfoundry.org:/tmp/cvs-serv81892/src

Log Message:
Directory /cvsroot/pgrecovery/alpha/src added to the repository


--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

[COMMITTERS] pgrecovery - alpha: New Directory

Update of /cvsroot/pgrecovery/alpha
In directory pgfoundry.org:/tmp/cvs-serv79899/alpha

Log Message:
Directory /cvsroot/pgrecovery/alpha added to the repository



Re: [pgeu-general] Date for the EU conference

Hi,

On Tue, 2008-06-10 at 07:29 +0200, Gabriele Bartolini wrote:
> It's October 17 and 18 in Prato, Italy. We are finalising the setup
> from an administrative point of view.

Perfect. Let me book my flight, then :)

Regards,
--
Devrim GÜNDÜZ , RHCE
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Managed Services, Shared and Dedicated Hosting
Co-Authors: plPHP, ODBCng - http://www.commandprompt.com/

[PATCHES] VACUUM Improvements - WIP Patch

Here is a WIP patch based on the discussions here:
http://archives.postgresql.org/pgsql-hackers/2008-05/msg00863.php

The attached WIP patch improves the LAZY VACUUM by limiting or
avoiding the second heap scan. This not only saves considerable time
in VACUUM, but also reduces the double-writes of vacuumed blocks. If
the second heap scan is considerably limited, that should also save
CPU usage and reduce WAL log writing.

With HOT, the first heap scan prunes and defrags every page in the
heap. That truncates all the dead tuples to their DEAD line pointers
and releases all the free space in the page. The second scan only
removes these DEAD line pointers and records the free space in the
FSM. The free space in fact does not change from the first pass. But
to do so, it again calls PageRepairFragmentation() on each page, dirties
the page and calls log_heap_clean() on it again. That is clearly too
much work for too small a gain.

As this patch stands, the first phase of vacuum prunes the heap pages
as usual, but, if certain conditions are satisfied, it also marks the
DEAD line pointers as DEAD_RECLAIMED to signal that the index pointers
to these line pointers are being removed. When another backend later
prunes a page, it reclaims any DEAD_RECLAIMED line pointers by marking
them UNUSED. We need some additional logic to do this in a safe way:

- An additional boolean pg_class attribute (relvac_inprogress) is used
to track the status of vacuum on a relation. If the attribute is true,
either vacuum is in progress on the relation or the last vacuum did
not complete successfully.

When VACUUM starts, it sets relvac_inprogress to true. The transaction
is committed and a new transaction is started so that all other
backends can see the change. We also note down the transactions which
may already have the table open. VACUUM then starts the first heap
scan. It prunes the page, but it can start marking the DEAD line
pointers as DEAD_RECLAIMED only after it knows that all other backends
can see that VACUUM is in progress on the target relation. Otherwise
there is a danger that backends might reclaim DEAD line pointers
before their index pointers are removed and that would lead to index
corruption. We do that by periodic conditional waits on the noted
transactions ids. Once all old transactions are gone, VACUUM sets the
second scan limit to the current block number and starts marking
subsequent DEAD line pointers as DEAD_RECLAIMED.

In most cases the old transactions go away quickly, so for large
tables the second scan will be very limited. In the worst case, we
might incur the overhead of the conditional waits without any
success.
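The pruning/reclaim hand-off described above can be sketched as a toy state machine (illustrative Python only; the real patch is C inside the server, and everything here except the line pointer state names is invented):

```python
# Toy model of the proposed line pointer life cycle. The states NORMAL,
# DEAD, DEAD_RECLAIMED and UNUSED follow the patch description; the
# page representation is invented for illustration.
NORMAL, DEAD, DEAD_RECLAIMED, UNUSED = "NORMAL", "DEAD", "DEAD_RECLAIMED", "UNUSED"

def vacuum_first_pass(page, all_backends_see_vacuum):
    """First heap scan: pruning has already truncated dead tuples to DEAD
    line pointers. Only once every other backend is known to see that
    VACUUM is in progress may they be marked DEAD_RECLAIMED, signalling
    that their index pointers are being removed."""
    for i, lp in enumerate(page):
        if lp == DEAD and all_backends_see_vacuum:
            page[i] = DEAD_RECLAIMED

def backend_prune(page):
    """Any backend that prunes the page may reclaim DEAD_RECLAIMED line
    pointers by marking them UNUSED, sparing VACUUM's second scan."""
    for i, lp in enumerate(page):
        if lp == DEAD_RECLAIMED:
            page[i] = UNUSED

page = [NORMAL, DEAD, DEAD, NORMAL]
vacuum_first_pass(page, all_backends_see_vacuum=False)  # too early: nothing marked
vacuum_first_pass(page, all_backends_see_vacuum=True)   # DEAD -> DEAD_RECLAIMED
backend_prune(page)                                     # DEAD_RECLAIMED -> UNUSED
```

Marking too early is exactly the index-corruption hazard described above: a backend could reuse the slot while an index entry still points at it.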

TODO:

- We could potentially update the FSM at the end of the first pass.
This is not a significant issue if the second scan is very limited,
but if we do this, we need to handle the truncate case properly.

- As the patch stands, we check for old transactions at every block
iteration. This might not be acceptable when there are long-running
transactions. We probably need some exponential back-off here.

- As the patch stands, heap_page_prune handles reclaiming the
DEAD_RECLAIMED line pointers, since it already has the ability to
WAL-log similar changes. We don't do any extra work to trigger pruning,
though (other than setting page_prune_xid). Maybe we should also
trigger pruning when a page suffers line pointer bloat.

Please let me know your comments, suggestions and any other improvements.

Thanks,
Pavan

--
Pavan Deolasee
EnterpriseDB http://www.enterprisedb.com

Re: [pgeu-general] Date for the EU conference

Ciao Devrim,

Devrim GÜNDÜZ ha scritto:
> Did I miss any thread? Did we decide about the EU conference date?
> I want to make my travel plans ;)

Fair enough. :)

It's October 17 and 18 in Prato, Italy. We are finalising the setup from
an administrative point of view. Thanks.

Ciao,
Gabriele

--
Gabriele Bartolini: Open source programmer and data architect
Current Location: Prato, Tuscany, Italy
Associazione Italian PostgreSQL Users Group: www.itpug.org
gabriele.bartolini@gmail.com | www.gabrielebartolini.it
"If I had been born ugly, you would never have heard of Pelé", George Best
http://www.linkedin.com/in/gbartolini

--
Sent via pgeu-general mailing list (pgeu-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgeu-general

Re: [ADMIN] turn off auto index for foreign key

Um... your solution is possible. But if a user created a unique index with a constraint name like <table_name>_<column_name>_key5, it would be impossible to distinguish it from the other auto-generated indexes.

On Tue, Jun 10, 2008 at 12:58 PM, Alex Hunsaker <badalex@gmail.com> wrote:
On Mon, Jun 9, 2008 at 10:17 PM, Tommy Cheng <csckcac@gmail.com> wrote:
> you are right. I double checked. So, is there any way to turn off auto index
> creation for the parent? (the best is changing postgresql database setting)
> I am using postgres (PostgreSQL) 8.1.9 on CentOS 5 x86_64
>

Again, there is no auto index creation for foreign keys.  However, if
you declared the column unique or it's the pkey (i.e. unique, as Scott
said), then there will be an index.

No, you can't turn it off, because that is how Postgres enforces
uniqueness; see
http://www.postgresql.org/docs/8.1/interactive/index-unique-checks.html
for more.

My guess: use the name of the index to try to distinguish.  Auto-generated
ones will always be of the form:
pkey: <table_name>_pkey(num)
unique: <table_name>_<column_name>_key(num)

Re: [pgsql-es-ayuda] Error when querying a view

2008/6/9 Alvaro Herrera <alvherre@commandprompt.com>:
> Jairo Graterón wrote:
>
>> 2008-05-28 19:17:29 GMT LOG: statement: SELECT * , "ctid" FROM
>> view_egresocaja where fecha = '2008-05-28'
>> ORDER BY idegresocaja
>
>> 2008-05-28 19:17:29 GMT ERROR: column "ctid" does not exist at character
>> 12
>> 2008-05-28 19:17:29 GMT STATEMENT: SELECT * , "ctid" FROM view_egresocaja
>> where fecha = '2008-05-28'
>> ORDER BY idegresocaja
>>
>> for some strange reason it is adding the "ctid" column, which does not
>> exist, to the query,
>
> I believe psqlODBC uses the ctid column (which it adds to the query
> automatically) to identify a tuple that will later be modified
> (update or delete, I suppose). Views do not have a CTID. I suppose it
> must be possible to tell psqlODBC "this is a view", or rather "this
> query will not be used in an updatable cursor", or something along
> those lines.
>

I have a program I wrote some time ago that connects to whatever ODBC
source I choose and executes whatever SQL statement (or script) I give
it... I tested psqlODBC versions 08_00 through 08_03 (only the latest
release of each). With 08_00 I could not connect (no big surprise),
but from then on it connected, and with all of them I could query a
view without problems... I tested against 8.3 on Windows (well,
actually 8.4devel, but from May 18, so I assume there is not much
variation).

Reviewing the original post, I see that not only is the "ctid" column
being added but also an ORDER BY; it seems to me there is something we
are not being told.

--
Regards,
Jaime Casanova
PostgreSQL support and training
Guayaquil - Ecuador
Cel. (593) 87171157
--
TIP 10: don't use HTML in your question; whoever answers will surely not be able to read it

Re: [ADMIN] turn off auto index for foreign key

On Mon, Jun 9, 2008 at 10:17 PM, Tommy Cheng <csckcac@gmail.com> wrote:
> you are right. I double checked. So, is there any way to turn off auto index
> creation for the parent? (the best is changing postgresql database setting)
> I am using postgres (PostgreSQL) 8.1.9 on CentOS 5 x86_64
>

Again, there is no auto index creation for foreign keys. However, if
you declared the column unique or it's the pkey (i.e. unique, as Scott
said), then there will be an index.

No, you can't turn it off, because that is how Postgres enforces
uniqueness; see
http://www.postgresql.org/docs/8.1/interactive/index-unique-checks.html
for more.

My guess: use the name of the index to try to distinguish. Auto-generated
ones will always be of the form:
pkey: <table_name>_pkey(num)
unique: <table_name>_<column_name>_key(num)
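That name-based heuristic can be sketched as a small helper (hypothetical code, not part of PostgreSQL; as the thread notes, a user who happens to pick a matching name will fool it):

```python
import re

def looks_auto_generated(index_name, table, column=None):
    """Heuristic: match the default names PostgreSQL gives the indexes
    backing primary key and unique constraints. A user-created index
    that follows the same naming pattern is misclassified."""
    patterns = [r"^%s_pkey\d*$" % re.escape(table)]
    if column is not None:
        patterns.append(r"^%s_%s_key\d*$" % (re.escape(table), re.escape(column)))
    return any(re.match(p, index_name) for p in patterns)

looks_auto_generated("orders_pkey", "orders")                             # True
looks_auto_generated("orders_customer_id_key1", "orders", "customer_id")  # True
looks_auto_generated("orders_by_date_idx", "orders", "customer_id")       # False
```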

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin

[pgeu-general] Date for the EU conference

Did I miss any thread? Did we decide about the EU conference date?

I want to make my travel plans ;)

--
Devrim GÜNDÜZ , RHCE
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Managed Services, Shared and Dedicated Hosting
Co-Authors: plPHP, ODBCng - http://www.commandprompt.com/

Re: [PATCHES] SQL: table function support

2008/6/10 Neil Conway <neilc@samurai.com>:
> On Tue, 2008-06-03 at 13:03 +0200, Pavel Stehule wrote:
>> this patch add support of table functions syntax like ANSI SQL 2003.
>
> I'm not necessarily opposed to this, but I wonder if we really need
> *more* syntax variants for declaring set-returning functions. The
> existing patchwork of features is confusing enough as it is...
>

Internally, the table function implementation is identical to SRFs.
Semantically it is far apart: the user doesn't specify a return type
(which is PostgreSQL-specific), but instead specifies the returned
table, which is more natural. What's more, it hides from users the
chaotic choice between "SETOF RECORD" for multi-column sets and
"SETOF type".

Pavel

> -Neil
>
>
>

--
Sent via pgsql-patches mailing list (pgsql-patches@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-patches

Re: [PATCHES] SQL: table function support

Alvaro Herrera <alvherre@commandprompt.com> writes:
> Tom Lane escribió:
>> (It's also worth asking where the import is coming from. Who implements
>> the spec syntax anyway? DB2 maybe, but when was the last time we heard
>> from anyone trying to migrate from DB2 to PG?)

> Sourceforge?

They gave up on us years ago :-(

regards, tom lane


Re: [PATCHES] SQL: table function support

2008/6/10 Tom Lane <tgl@sss.pgh.pa.us>:
> Neil Conway <neilc@samurai.com> writes:
>> On Tue, 2008-06-03 at 13:03 +0200, Pavel Stehule wrote:
>>> this patch add support of table functions syntax like ANSI SQL 2003.
>
>> I'm not necessarily opposed to this, but I wonder if we really need
>> *more* syntax variants for declaring set-returning functions.
>
> I've been saying right along that we don't. The proposed patch adds
> no measurable new functionality; its only reason to live is standards
> compliance, and I'm not convinced that's worth the confusion. Our
> implementation of functions is (and always will be) far enough away
> from the standard that notational issues like this are hardly the top
> of the problem list for someone wishing to import a spec-compliant
> function.

a) The current syntax is strange for beginners (and, I am sure, isn't
nice) - look at the mailing lists. I believe the ANSI syntax is better.

b) It's needed for good SQL/PSM support. With table functions and
RETURN QUERY we are near the standard.

>
> (It's also worth asking where the import is coming from. Who implements
> the spec syntax anyway? DB2 maybe, but when was the last time we heard
> from anyone trying to migrate from DB2 to PG?)
>

lots of smaller new databases respect ANSI SQL 200x well - not only DB2

> regards, tom lane
>


Re: [HACKERS] a question about exec_simple_query()

<billywq@163.com> writes:
> In other words, what's the benefit of using a portal to run a simple query?

er, because it doesn't work otherwise?

regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [ADMIN] turn off auto index for foreign key

You are right, I double-checked. So, is there any way to turn off auto index creation for the parent? (The best would be changing a PostgreSQL database setting.)
I am using postgres (PostgreSQL) 8.1.9 on CentOS 5 x86_64

On Tue, Jun 10, 2008 at 11:50 AM, Scott Marlowe <scott.marlowe@gmail.com> wrote:
On Mon, Jun 9, 2008 at 8:48 PM, Tommy Cheng <csckcac@gmail.com> wrote:
> Hi admins,
>   I would like to turn off the auto index for child table's foreign key. It
> is because i want to read the information_schema for table, column, index,
> unique and so on after creating the database. How to turn off the auto
> behavior? Or Is there a way to distinguish between auto one and user created
> index?

There is no auto index creation for child relationships, only for the
parent (unique index for primary key, etc.)

Re: [HACKERS] Proposal: GiST constraints

On Mon, 2008-06-09 at 21:00 -0400, Tom Lane wrote:
> 1. It's btree-specific and can't be shared by other index AMs that might
> wish to implement constraints.
>

This can be solved by my proposal, but I just don't know how it would
apply to something like GIN, for instance. It could replace the unique
constraint for BTree, but I'm not sure it would perform as well. It's
not that my proposal is GiST-specific, it's just that is the only use
case I can think of that is an improvement.

> 2. It involves the index AM reaching into the heap, which is at the
> least a serious failure of modularity.

We need to reach into the heap for visibility information, if we're to
implement any constraints at all. Also, we have to test against values
being inserted by other concurrent transactions, and those values can
be variable in size. What other mechanism do we have to share those
variable-sized values among several backends?

> 3. There's no way to implement a deferred uniqueness check, nor even to
> handle the within-statement conflict problem.

This is the big one.

> Now admittedly I don't have a solution that addresses these objections
> (much less one that does it without losing any performance) but I'm
> hesitant to see us building new features in this area without any idea
> how we will fix these things --- especially #3, which is a SQL-spec
> violation as well as a frequent user complaint. I'd like to have at
> least a design plan for fixing these things, so we know whether we are
> painting ourselves (further) into a corner.

I'll see if I can come up with something. I agree that's an important
problem to solve. Does anyone know how other DBMSs do this? I found this
thread from the TODO:

http://archives.postgresql.org/pgsql-hackers/2006-09/msg01458.php

Regards,
Jeff Davis



Re: [ADMIN] turn off auto index for foreign key

On Mon, Jun 9, 2008 at 8:48 PM, Tommy Cheng <csckcac@gmail.com> wrote:
> Hi admins,
> I would like to turn off the auto index for child table's foreign key. It
> is because i want to read the information_schema for table, column, index,
> unique and so on after creating the database. How to turn off the auto
> behavior? Or Is there a way to distinguish between auto one and user created
> index?

There is no auto index creation for child relationships, only for the
parent (unique index for primary key, etc.)


Re: [pgus-board] It's time for final review

Ah, yes; right. ;)

---Michael Brewer
mbrewer@gmail.com

--
Sent via pgus-board mailing list (pgus-board@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgus-board

Re: [pgus-board] It's time for final review

Joshua D. Drake wrote:
>
> On Fri, 2008-06-06 at 15:29 -0700, Selena Deckelmann wrote:
>> First, thank you, JD, for laying this all out. It makes it much
>> easier to discuss.

> It also means we only have elections once every two years and once every
> two years only ~ 50% of the board is being replaced.

Correction:

After the initial term of the founders, we would have elections every 12
months, but each would only affect ~ 50% of the board.

Sincerely,

Joshua D. Drake


Re: [pgus-general] Is starting PGUS realistic?

Josh,

> Selena, Michael and I are currently discussing all components of the
> comments we receive. We stopped on Friday and are set to take it up
> again this week.

Is there a particular reason why you're not discussing anything on this list?
I thought that PGUS was supposed to be a *community* association.

--
Josh Berkus
PostgreSQL @ Sun
San Francisco

--
Sent via pgus-general mailing list (pgus-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgus-general

Re: [HACKERS] Core team statement on replication in PostgreSQL

Just for information.

In terms of archive compression, I have an archive log compression
tool, which can be found at http://pgfoundry.org/projects/pglesslog/

This feature is also included in NTT's synchronized log shipping
replication presented at the last PGCon.

2008/6/10 Greg Smith <gsmith@gregsmith.com>:
> On Mon, 9 Jun 2008, Tom Lane wrote:
>
>> It should also be pointed out that the whole thing becomes uninteresting
>> if we get real-time log shipping implemented. So I see absolutely no
>> point in spending time integrating pg_clearxlogtail now.
>
> There are remote replication scenarios over a WAN (mainly aimed at disaster
> recovery) that want to keep a fairly updated database without putting too
> much traffic over the link. People in that category really want zeroed
> tail+compressed archives, but probably not the extra overhead that comes
> with shipping smaller packets in a real-time implementation.
>
> --
> * Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD
>

--
------
Koichi Suzuki


Re: [pgsql-es-ayuda] Images + Java

On Mon, 2008-06-09 at 06:04 -0700, Cristian Montenegro wrote:
> Hi, how is it going?
> I wanted to know whether it is possible to store images in Postgres,
> and if so, how would it be done?
> And finally, if anyone has done it using Java and has some example,
> I would appreciate it.
>
> Regards:
>
> Cristian Matias Montenegro

Basically, the image is stored in:

byte[] imageBytes;

and is read / written using the ResultSet methods getBytes and
updateBytes.

Regards,
Diego.
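The pattern is the same in any driver: store the raw bytes in a binary column and read them back with the driver's bytes accessor. A runnable sketch using Python's stdlib sqlite3 driver as a stand-in (in PostgreSQL the column type would be bytea, accessed via JDBC's getBytes/updateBytes as above; all table and variable names here are invented):

```python
import sqlite3

# sqlite3's BLOB stands in for a PostgreSQL bytea column so the sketch
# runs without a database server.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE images (id INTEGER PRIMARY KEY, data BLOB)")

# e.g. image_bytes = open("photo.png", "rb").read()
image_bytes = b"\x89PNG\r\n\x1a\n...fake image payload..."

# Always bind the bytes as a parameter; never splice them into the SQL text.
conn.execute("INSERT INTO images (id, data) VALUES (?, ?)", (1, image_bytes))

row = conn.execute("SELECT data FROM images WHERE id = ?", (1,)).fetchone()
round_tripped = bytes(row[0])  # identical to what was inserted
```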


--
TIP 4: Don't 'kill -9' the postmaster

Re: [pgus-board] It's time for final review

I'd be fine with that.

---Michael Brewer
mbrewer@gmail.com


Re: [HACKERS] Core team statement on replication in PostgreSQL

On Mon, 9 Jun 2008, Tom Lane wrote:

> It should also be pointed out that the whole thing becomes uninteresting
> if we get real-time log shipping implemented. So I see absolutely no
> point in spending time integrating pg_clearxlogtail now.

There are remote replication scenarios over a WAN (mainly aimed at
disaster recovery) that want to keep a fairly updated database without
putting too much traffic over the link. People in that category really
want zeroed tail+compressed archives, but probably not the extra overhead
that comes with shipping smaller packets in a real-time implementation.
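The zeroed-tail effect is easy to demonstrate in isolation (illustrative Python, not pg_clearxlogtail itself; the segment size and data length are made up, and a real tool must know where the valid WAL data actually ends):

```python
import os
import zlib

SEGMENT_SIZE = 64 * 1024   # stand-in for a 16 MB WAL segment
DATA_LEN = 2 * 1024        # only a little real data was written

# A recycled segment: fresh data up front, stale garbage in the tail.
segment = bytes(os.urandom(SEGMENT_SIZE))

# Zero everything past the end of the real data before archiving.
cleared = segment[:DATA_LEN] + b"\x00" * (SEGMENT_SIZE - DATA_LEN)

raw_size = len(zlib.compress(segment))      # garbage tail barely compresses
cleared_size = len(zlib.compress(cleared))  # zeroed tail compresses to almost nothing
```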

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD


[ADMIN] turn off auto index for foreign key

Hi admins,
  I would like to turn off the auto index for a child table's foreign key, because I want to read the information_schema for tables, columns, indexes, unique constraints and so on after creating the database. How do I turn off the auto behavior? Or is there a way to distinguish between an auto-created index and a user-created one?
  Thanks
Tommy Cheng

[HACKERS] a question about exec_simple_query()

Hi, everyone:

In the function exec_simple_query(), why do we run a simple query wrapped in a portal?

For instance:

version 8.3.0, postgres.c, line 908

/*
* Create unnamed portal to run the query or queries in. If there
* already is one, silently drop it.
*/
portal = CreatePortal("", true, true);

......


In other words, what's the benefit of using a portal to run a simple query?


Thanks for your help! :)



Re: [PATCHES] SQL: table function support

Tom Lane escribió:

> (It's also worth asking where the import is coming from. Who implements
> the spec syntax anyway? DB2 maybe, but when was the last time we heard
> from anyone trying to migrate from DB2 to PG?)

Sourceforge?

--
Alvaro Herrera

http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.


Re: [PATCHES] SQL: table function support

Neil Conway <neilc@samurai.com> writes:
> On Tue, 2008-06-03 at 13:03 +0200, Pavel Stehule wrote:
>> this patch add support of table functions syntax like ANSI SQL 2003.

> I'm not necessarily opposed to this, but I wonder if we really need
> *more* syntax variants for declaring set-returning functions.

I've been saying right along that we don't. The proposed patch adds
no measurable new functionality; its only reason to live is standards
compliance, and I'm not convinced that's worth the confusion. Our
implementation of functions is (and always will be) far enough away
from the standard that notational issues like this are hardly the top
of the problem list for someone wishing to import a spec-compliant
function.

(It's also worth asking where the import is coming from. Who implements
the spec syntax anyway? DB2 maybe, but when was the last time we heard
from anyone trying to migrate from DB2 to PG?)

regards, tom lane


Re: [HACKERS] Core team statement on replication in PostgreSQL

Bruce Momjian <bruce@momjian.us> writes:
> Agreed. I realize why we are not zeroing those bytes (for performance),
> but can't we have the archiver zero those bytes before calling the
> 'archive_command'?

The archiver doesn't know any more about where the end-of-data is than
the archive_command does. Moreover, the archiver doesn't know whether
the archive_command cares. I think the separate module is a fine
solution.

It should also be pointed out that the whole thing becomes uninteresting
if we get real-time log shipping implemented. So I see absolutely no
point in spending time integrating pg_clearxlogtail now.

regards, tom lane


Re: [HACKERS] Core team statement on replication in PostgreSQL

Bruce Momjian <bruce@momjian.us> writes:
> Gurjeet Singh wrote:
>> There could be multiple slaves following a master, some serving

> For the slave to not interfere with the master at all, we would need to
> delay application of WAL files on each slave until visibility on that
> slave allows the WAL to be applied, but in that case we would have
> long-running transactions delay data visibility of all slave sessions.

Right, but you could segregate out long-running queries to one slave
server that could be further behind than the others.

regards, tom lane


Re: [HACKERS] Core team statement on replication in PostgreSQL

Bruce Momjian wrote:

> Agreed. I realize why we are not zeroing those bytes (for performance),
> but can't we have the archiver zero those bytes before calling the
> 'archive_command'?

Perhaps make the zeroing user-settable.

--
Alvaro Herrera

http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.


Re: [HACKERS] Core team statement on replication in PostgreSQL

Andreas 'ads' Scherbaum wrote:
> On Fri, 30 May 2008 16:22:41 -0400 (EDT) Greg Smith wrote:
>
> > On Fri, 30 May 2008, Andreas 'ads' Scherbaum wrote:
> >
> > > Then you ship 16 MB binary stuff every 30 second or every minute but
> > > you only have some kbyte real data in the logfile.
> >
> > Not if you use pg_clearxlogtail (
> > http://www.2ndquadrant.com/replication.htm ), which got lost in the giant
> > March commitfest queue but should probably wander into contrib as part of
> > 8.4.
>
> Yes, this topic was discussed several times in the past but to
> solve this it needs a patch/solution which is integrated into PG
> itself, not contrib.

Agreed. I realize why we are not zeroing those bytes (for performance),
but can't we have the archiver zero those bytes before calling the
'archive_command'?

--
Bruce Momjian <bruce@momjian.us>

http://momjian.us

EnterpriseDB

http://enterprisedb.com

+ If your life is a hard drive, Christ can be your backup. +


[PATCHES] Tentative patch for making DROP put dependency info in DETAIL

There was some discussion a few days ago about making dependency.c emit
dependency reports in the same style that pg_shdepend.c does, viz
a lot of DETAIL lines on a single message instead of separate NOTICE
messages. Attached is a tentative patch that does that. See the
regression-test diffs for samples of what the output looks like.

I'm not entirely sure whether I like the results better than what
we have. Opinions anyone? There are some cases where it seems
clearly better, eg the sequence.out changes, but in a lot of others
it doesn't seem much better.

One particular case of interest is in truncate.out, where the
table-at-a-time implementation of DROP TABLE is clearly exposed
by the fact that you get multiple NOTICEs. I wonder if it would
be worth refactoring the code so that a multiple-object DROP is
implemented via performMultipleDeletions(). This would have more
than just cosmetic advantages: it would no longer matter what
order you listed the tables in. But the refactoring required looks
bigger and more tedious than I want to tackle right now.

I also want to note that we've historically had the code report
auto-cascade drops as DEBUG2 messages. I tried to merge those reports
into the main one but it was a complete mess :-( because the client and
server-log messages might have different numbers of entries depending on
their log-level settings. Almost the first case I tried favored me with
NOTICE: drop cascades to 0 other object(s)
DETAIL:
reported to the client (with the server log of course saying something
different). So I gave up and left that behavior separate.

Comments? Should we do this, or leave things alone?

regards, tom lane

Re: [pgus-board] It's time for final review

On Fri, 2008-06-06 at 15:29 -0700, Selena Deckelmann wrote:
> First, thank you, JD, for laying this all out. It makes it much
> easier to discuss.

The simplest solution I can come up with is:

Election in October 08 for four two-year terms.
Board members who are elected in October 08 take their seats at the
annual meeting.

The annual meeting takes place in March.

Founders also have two year terms.

Which means:

Founders are up for re-election October 2010 with their seats being
replaced or renewed March 2010

The four new members are up for re-election October 2011 because they
don't take their seats until March 2009.

It also means we only have elections once every two years and once every
two years only ~ 50% of the board is being replaced.

Further, officers keep their seats for the length of their term. If they
are re-elected, the board elects new officers (which of course may be the
same people).

I see this as the simplest and most effective solution. I know that we
think one-year terms are good, but I really don't agree. I do think
three-year terms are probably too long (which is what SPI has), but a year
goes by really fast and I would like to think that we are going to have
long-term projects to follow up on, etc.


Sincerely,

Joshua D. Drake


Re: [HACKERS] Core team statement on replication in PostgreSQL

Gurjeet Singh wrote:
> On Fri, May 30, 2008 at 10:40 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>
> > But since you mention it: one of the plausible answers for fixing the
> > vacuum problem for read-only slaves is to have the slaves push an xmin
> > back upstream to the master to prevent premature vacuuming. The current
> > design of pg_standby is utterly incapable of handling that requirement.
> > So there might be an implementation dependency there, depending on how
> > we want to solve that problem.
> >
>
> I think it would be best to not make the slave interfere with the master's
> operations; that's only going to increase the operational complexity of such
> a solution.
>
> There could be multiple slaves following a master, some serving

For the slave to not interfere with the master at all, we would need to
delay application of WAL files on each slave until visibility on that
slave allows the WAL to be applied, but in that case we would have
long-running transactions delay data visibility of all slave sessions.

--
  Bruce Momjian  <bruce@momjian.us>        http://momjian.us
  EnterpriseDB                             http://enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +


Re: [GENERAL] issue with xml functionality and

Belbin, Peter wrote:
> I'm hoping this is the right place to ask.
>
> If not, please, kindly direct me to the right place.
>
> I'm having an issue with xml functionality.
>
> The issue is that the xml value has the following (large chunks have been removed, but this should be enough to demonstrate the issue):
>
> <?xml version="1.0" encoding="UTF-8"?>
> <!DOCTYPE workflow PUBLIC "-//OpenSymphony Group//DTD OSWorkflow 2.8//EN"
> "http://www.opensymphony.com/osworkflow/workflow_2_8.dtd">
> <workflow>
> <meta name="lastModified">Thu Feb 21 13:20:46 CST 2008</meta>
> <meta name="generator">OSWorkflow Designer</meta>
> </workflow>

I think this is related to this TODO item:

* Allow XML to accept more liberal DOCTYPE specifications

http://archives.postgresql.org/pgsql-general/2008-02/msg00347.php

--
  Bruce Momjian  <bruce@momjian.us>        http://momjian.us
  EnterpriseDB                             http://enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +


Re: [pgus-general] Is starting PGUS realistic?

On Mon, 2008-06-09 at 18:02 -0700, Josh Berkus wrote:
> Josh, Michael, Selena,
>
> Most of us know from SPI and previous attempts at PostgreSQL incorporation
> that running an NPO requires a pretty significant time commitment. Yet,
> the bylaws thread and the pgus web site have languished without any
> regular attention from the three of you because you're busy with your
> jobs.

JoshB,

Selena, Michael and I are currently discussing all components of the
comments we receive. We stopped on Friday and are set to take it up
again this week.


> I'm wondering if it's actually realistic to launch PGUS if we can't even
> get it together to get the bylaws finalized. Starting another PostgreSQL
> NPO which will be inactive due to lack of volunteer time is not a good way
> for any of us to contribute.

That is certainly a valid concern, but I don't think it applies here,
as everyone is working very hard. It would be helpful to everyone
involved if you would stop making assumptions about the situation.

Sincerely,

Joshua D. Drake


Re: [pgus-general] Is starting PGUS realistic?

Hi Josh,

On Mon, Jun 9, 2008 at 6:02 PM, Josh Berkus <josh@agliodbs.com> wrote:
> Most of us know from SPI and previous attempts at PostgreSQL incorporation
> that running an NPO requires a pretty significant time commitment. Yet,
> the bylaws thread and the pgus web site have languished without any
> regular attention from the three of you because you're busy with your
> jobs.

It isn't accurate to say that it is languishing. Sure, the blog hasn't
been updated, but to be honest, most non-profits don't even have
blogs! That's not a fair measure.

There has been a lot of work; it just hasn't been published on the
-general list.

Over the next week, JD, Michael and I can weigh in here on the changes
to the bylaws being made right now, which incorporate the feedback
we've gotten from you and others.

I will post my opinions this evening. JD may beat me to it, so I
won't say that I'll 'kick things off' :)

-selena

--
Selena Deckelmann
United States PostgreSQL Association - http://www.postgresql.us
PDXPUG - http://pugs.postgresql.org/pdx
Me - http://www.chesnok.com/daily


[pgus-general] Is starting PGUS realistic?

Josh, Michael, Selena,

Most of us know from SPI and previous attempts at PostgreSQL incorporation
that running an NPO requires a pretty significant time commitment. Yet,
the bylaws thread and the pgus web site have languished without any
regular attention from the three of you because you're busy with your
jobs.

I'm wondering if it's actually realistic to launch PGUS if we can't even
get it together to get the bylaws finalized. Starting another PostgreSQL
NPO which will be inactive due to lack of volunteer time is not a good way
for any of us to contribute.

--
--Josh

Josh Berkus
PostgreSQL @ Sun
San Francisco


Re: [HACKERS] Proposal: GiST constraints

Jeff Davis <pgsql@j-davis.com> writes:
> On Mon, 2008-06-09 at 13:28 -0400, Tom Lane wrote:
>> I would like to see something that replaces the current btree-only kluge
>> for UNIQUE, if we're going to try to do something "general". IOW, don't
>> think of this as GiST-specific.

> I'm not sure exactly what you have in mind when you say "kludge".

Well, there are at least three things not to like about the btree UNIQUE
implementation:

1. It's btree-specific and can't be shared by other index AMs that might
wish to implement constraints.

2. It involves the index AM reaching into the heap, which is at the
least a serious failure of modularity.

3. There's no way to implement a deferred uniqueness check, nor even to
handle the within-statement conflict problem.

It looks to me like the same knocks can be laid on your proposal.

Now admittedly I don't have a solution that addresses these objections
(much less one that does it without losing any performance) but I'm
hesitant to see us building new features in this area without any idea
how we will fix these things --- especially #3, which is a SQL-spec
violation as well as a frequent user complaint. I'd like to have at
least a design plan for fixing these things, so we know whether we are
painting ourselves (further) into a corner.

regards, tom lane


Re: [GENERAL] Bottom Posting

Andrej Ricnik-Bay wrote:
> On 29/05/2008, Bob Pawley <rjpawley@shaw.ca> wrote:
>
> > ... get their point across up front without making me wade through
> > previous posts which I have already read.
> Good for you :}
>
> > I can understand the concept of bottom posting
> No one advocates bottom-posting here. It's all about interspersed
> replies, with the relevant bits left standing.
>
>
> > The concept of most lists should be "the free exchange of ideas in the most
> > efficient manner possible".
> Which, per agreement on this list, is interspersed posting. It also allows
> people only just hopping onto the train of thought to get a good
> understanding of what a thread is about without having to read the lot
> top to bottom. It may not match your individual preference, but then
> that's not what the list is about, either.

Don't forget adding a blank line between quoted text and your reply;
that is helpful too.

--
  Bruce Momjian  <bruce@momjian.us>        http://momjian.us
  EnterpriseDB                             http://enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +


Re: [pgsql-www] Proposal: pulling newsbytes from www.pgfoundry.org

On Mon, 2008-06-09 at 17:57 -0700, Josh Berkus wrote:
> Josh,
>
> > Only really interested if someone puts together an ATOM feed of the
> > news.
>
> You're aware of the limitations of Gforge. Be realistic.

I am :) You already brought up a direct database pull. We can just
change that to:

A cron job that creates an ATOM file from a direct database pull.

Joshua D. Drake



Re: [pgsql-www] Proposal: pulling newsbytes from www.pgfoundry.org

Josh,

> Only really interested if someone puts together an ATOM feed of the
> news.

You're aware of the limitations of Gforge. Be realistic.

--
--Josh

Josh Berkus
PostgreSQL @ Sun
San Francisco


Re: [PATCHES] SQL: table function support

On Tue, 2008-06-03 at 13:03 +0200, Pavel Stehule wrote:
> this patch adds support for table function syntax as in ANSI SQL:2003.

I'm not necessarily opposed to this, but I wonder if we really need
*more* syntax variants for declaring set-returning functions. The
existing patchwork of features is confusing enough as it is...

-Neil


Re: [ADMIN] Connecting from xp to linux server

Simone Gadenz wrote:
> Shane,
>
> I attached the pg_hba.conf I have on the server. I can use pgadmin3 when
> I connect from the server itself, so I think the pg_hba on the server is
> correct.
>
> From the client machine I cannot connect through pgadmin3. Is there
> another way to test connection client server? Has pg a textual client to
> send sql statements?
>
> Thanks a lot
>
> Simone
>
>

# connections for all the pcs on the subnet
#
# TYPE  DATABASE  USER  IP-ADDRESS     IP-MASK         METHOD
host    all       all   192.168.148.0  255.255.255.0
host    all       all   192.168.159.0  255.255.255.0

try adding the method to the end of these lines.


# METHOD can be "trust", "reject", "md5", "crypt", "password", "gss", "sspi",
# "krb5", "ident", "pam" or "ldap". Note that "password" sends passwords
# in clear text; "md5" is preferred since it sends encrypted passwords.
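If md5 password authentication is what you want (an assumption here; pick whichever method fits your setup), the two lines would become:

```
host    all    all    192.168.148.0    255.255.255.0    md5
host    all    all    192.168.159.0    255.255.255.0    md5
```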


--

Shane Ambler
pgSQL (at) Sheeky (dot) Biz

Get Sheeky @ http://Sheeky.Biz


Re: [HACKERS] pg_dump restore time and Foreign Keys

Simon Riggs wrote:

> Maybe we say that you can defer the check, but after a while autovacuum
> runs it for you if you haven't done so. It would certainly be useful to
> run the VALIDATE part as a background task with vacuum wait enabled.

It would be useful if there was anywhere to report the error to, or an
action that could be taken automatically.

--
Alvaro Herrera                          http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support


Re: [pgsql-www] Proposal: pulling newsbytes from www.pgfoundry.org

On Mon, 2008-06-09 at 15:39 -0700, Josh Berkus wrote:
> WWW team,
>
> In an effort to make PostgreSQL accessory projects more visible, I'd like
> to start including the newsbytes from the pgfoundry main page in the
> www.PostgreSQL.org news ticker. Probably the best way to do this is
> direct database pull. Anyone have time to hack something up?

Only really interested if someone puts together an ATOM feed of the
news.

Joshua D. Drake


[COMMITTERS] pgsql: Make XML building work silently on VPATH builds (untested on

Log Message:
-----------
Make XML building work silently on VPATH builds (untested on regular builds).

Modified Files:
--------------
pgsql/doc/src/sgml:
Makefile (r1.105 -> r1.106)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/doc/src/sgml/Makefile?r1=1.105&r2=1.106)


Re: [PERFORM] Performance of aggregates over set-returning functions

"John Smith" <sodgodofall@gmail.com> writes:
> Sorry for the long delay in following up on this suggestion. The
> change Tom suggested fixed the performance problems I was seeing, but
> I never ran the full regression suite on the modified code, as
> everything in my performance tests seemed to indicate the bug was
> fixed (i.e, no errors even with --cassert-enabled). When I recently
> ran the regression suite on the "fixed" version of Postgres, the
> "misc" test suite fails with the following error message: "ERROR:
> cache lookup failed for type 2139062143". Is this a manifestation of
> the problem where certain items are being freed too early?

Yup --- something's trying to use memory that's already been freed.
The particular case is evidently a field containing a type OID.

You might be able to get a clue where the problem is by getting a gdb
stack trace back from errfinish(), but this sort of kills my optimism
that the 7.0-era problem is gone ...
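Getting that backtrace could look something like this (an illustrative gdb session, not exact output; attach to the backend that will run the failing query):

```
$ gdb -p <backend pid>
(gdb) break errfinish
(gdb) continue
  ... re-run the failing query in the attached session ...
(gdb) backtrace
```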

regards, tom lane


Re: [HACKERS] pg_dump restore time and Foreign Keys

On Mon, 2008-06-09 at 14:07 -0400, Alvaro Herrera wrote:
> Simon Riggs wrote:
>
> > If we break down the action into two parts.
> >
> > ALTER TABLE ... ADD CONSTRAINT foo FOREIGN KEY ... NOVALIDATE;
> > which holds exclusive lock, but only momentarily
> > After this runs any new data is validated at moment of data change, but
> > the older data has yet to be validated.
> >
> > ALTER TABLE ... VALIDATE CONSTRAINT foo
> > which runs lengthy check, though only grabs lock as last part of action
>
> The problem I see with this approach in general (two-phase FK creation)
> is that you have to keep the same transaction for the first and second
> command, but you really want concurrent backends to see the tuple for
> the not-yet-validated constraint row.

Well, they *must* be in separate transactions if we are to avoid holding
an AccessExclusiveLock while we perform the check. Plus the whole idea
is to perform the second part at some other non-critical time, though we
all agree that never performing the check at all is foolhardy.

Maybe we say that you can defer the check, but after a while autovacuum
runs it for you if you haven't done so. It would certainly be useful to
run the VALIDATE part as a background task with vacuum wait enabled.

> Another benefit that could arise from this is that the hypothetical
> VALIDATE CONSTRAINT step could validate more than one constraint at a
> time, possibly processing all the constraints with a single table scan.

Good thought, though not as useful for FK checks.

--
 Simon Riggs           www.2ndQuadrant.com
 PostgreSQL Training, Services and Support


Re: [HACKERS] Overhauling GUCS

Greg,

> Speak to the statisticians. Our sample size is calculated using the same
> theory behind polls which sample 600 people to learn what 250 million
> people are going to do on election day. You do NOT need (significantly)
> larger samples for larger populations.

Your analogy is bad. For elections, the voters have only a few choices.
In a 300 million row table, there could be 300 million different values,
and the histogram becomes less accurate for every order of magnitude the
sample is smaller than 300 million.

> Also, our estimates for n_distinct are very unreliable. The math behind
> sampling for statistics just doesn't work the same way for properties
> like n_distinct. For that Josh is right, we *would* need a sample size
> proportional to the whole data set which would practically require us to
> scan the whole table (and have a technique for summarizing the results
> in a nearly constant sized data structure).

Actually, a number of papers have shown block-based algorithms that can
arrive at reasonably confident estimates (between 50% and 250% of the
true value) based on scanning only 5% of *blocks*. Simon did some work
on this a couple of years ago, but he and I had difficulty convincing
-hackers that a genuine problem existed.

You're correct that we'd need to change pg_statistic, though. For one
thing, we need to separate the sample size from the histogram size.
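The n_distinct point in particular is easy to demonstrate. A toy illustration (not PostgreSQL's actual estimator; the table sizes are made up): with 100,000 distinct values spread over a million rows, a 30,000-row sample neither observes most of the values nor scales up sensibly:

```python
import random

random.seed(1)

N_DISTINCT = 100_000      # true number of distinct values in the column
ROWS_PER_VALUE = 10       # each value appears 10 times -> 1,000,000 rows
SAMPLE_SIZE = 30_000      # a row sample of roughly ANALYZE's scale

# Build the column and take a uniform row sample.
column = [v for v in range(N_DISTINCT) for _ in range(ROWS_PER_VALUE)]
sample = random.sample(column, SAMPLE_SIZE)

seen = len(set(sample))                        # distinct values in the sample
scaled = seen * len(column) // SAMPLE_SIZE     # naive "scale it up" estimate

# seen comes out around 26,000 (rare values are simply not observed in the
# sample) and scaled around 875,000; both are far from the true 100,000.
print(seen, scaled)
```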

Also, we seem to be getting pretty far away from the original GUC
discussion.

--
--Josh

Josh Berkus
PostgreSQL @ Sun
San Francisco


Re: [HACKERS] proposal: add window function to 8.4

On Jun 9, 2008, at 7:32 AM, H.Harada wrote:
> This topic has been discussed on this list and many users expect
> PostgreSQL to implement it.
> I'd like to work on this feature and hope that we can include it in
> 8.4.


I can't really comment on the technical aspects of your proposal, but
yes, please, windowing functions would be great to have even if not
fully implemented.
--
Decibel!, aka Jim C. Nasby, Database Architect decibel@decibel.org
Give your computer some brain candy! www.distributed.net Team #1828

Re: [HACKERS] pg_dump restore time and Foreign Keys

On Jun 7, 2008, at 2:00 PM, Andrew Dunstan wrote:
> If we go down this road then I would far rather we tried to devise
> some safe (or semi-safe) way of doing it instead of simply
> providing expert (a.k.a. footgun) mode.
>
> For instance, I'm wondering if we could do something with checksums
> of the input lines or something else that would make this difficult
> to do in circumstances other than pg_restore.


Yes, but that provides no help at all outside of pg_dump. Being able
to add an FK with NO CHECK would be tremendously useful outside of
pg_dump. Actually, in the interest of stating the problem and not the
solution, what we need is a way to add FKs that doesn't lock
everything up to perform the key checks. Perhaps there is some
semi-safe way that the constraint could be added and the checks done
in the background...

As for the footgun aspect, are we the enterprise-class OSS database
or the one that caters to noobs who will go out of their way to make
life hard on themselves? I'm all in favor of not adding footguns that
don't have value, but this one holds a lot of value for anyone trying
to maintain a large database in a 24/7 environment. To put this in
perspective, the amount of revenue we would lose from adding just one
FK to one of our larger tables would more than cover paying someone
to develop this feature.
--
Decibel!, aka Jim C. Nasby, Database Architect decibel@decibel.org
Give your computer some brain candy! www.distributed.net Team #1828

Re: [HACKERS] TODO, FAQs to Wiki?

Alvaro Herrera wrote:
> Actually, now that I try it, it seems that the MediaWiki markup is not
> completely helpful here -- right now, on some items we have a one-line
> "header" and then possibly a longer description, and it seems the only
> way to do that in MediaWiki is like this:
>
> * Set proper permissions on non-system schemas during db creation<br> Currently all schemas are owned by the super-user because they are copied from the template1 database. However, since all objects are inherited from the template database, it is not clear that setting schemas to the db owner is correct.
>
> Note the dumb <br> thing in the middle.
>
> Personally I find that ugly enough as to be unacceptable; what do others
> think?

How about using "definition lists"?
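Using the same TODO item as an example, a MediaWiki definition list would avoid the <br> entirely:

```
; Set proper permissions on non-system schemas during db creation
: Currently all schemas are owned by the super-user because they are copied from the template1 database. However, since all objects are inherited from the template database, it is not clear that setting schemas to the db owner is correct.
```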


//Magnus


[COMMITTERS] pgsnap - pgsnap: Bugfix : Toggle function on a checkbox had disabled the

Log Message:
-----------
Bugfix : Toggle function on a checkbox had disabled the checkbox's ability
to check on and off.

Modified Files:
--------------
pgsnap/pgsnap:
ChangeLog (r1.57 -> r1.58)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgsnap/pgsnap/pgsnap/ChangeLog.diff?r1=1.57&r2=1.58)
pgsnap/pgsnap/template:
header.template.html (r1.7 -> r1.8)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgsnap/pgsnap/pgsnap/template/header.template.html.diff?r1=1.7&r2=1.8)


[HACKERS] proposal: add window function to 8.4

This topic has been discussed on this list and many users expect
PostgreSQL to implement it.
I'd like to work on this feature and hope that we can include it in 8.4.

Former discussions are here:

http://archives.postgresql.org/pgsql-hackers/2004-11/msg01093.php

http://archives.postgresql.org/pgsql-hackers/2007-01/msg00861.php


How it works and examples:
SELECT dept, empno,
RANK() OVER(PARTITION BY dept ORDER BY age) as age_rank,
RANK() OVER(PARTITION BY dept ORDER BY salary) as salary_rank,
SUM(salary) OVER(PARTITION BY dept ORDER BY age) as run_total
FROM employees ORDER BY 1, 3, 4;

 dept | empno | age_rank | salary_rank | run_total
------+-------+----------+-------------+-----------
 ENG  |     2 |        1 |           2 |     40000
 ENG  |     1 |        2 |           1 |     90000
 QA   |     3 |        1 |           2 |     30000
 QA   |     4 |        2 |           1 |     65000
(ref.: http://www.gavinsherry.org/talks/window_talk.pdf)


My current idea and concept:
- add a "window function" concept and treat it specially, much like
aggregate functions and SETOF functions.
- some features may be dropped from the first release, with a view to
supporting them later.
- formulating it and making it work properly come first; performance
optimization is secondary.


From my survey around the web and the list archives, the points are:
- what is a "window function", i.e. rank(), dense_rank(), lead() and others?
First of all, we should define window functions, just as we define
aggregate functions. In my opinion, there are two types of functions in
an OVER () call context. One of them is the aggregate, and the other is
the window (ranking) function. Sum() in a query like

SELECT empno, sum(salary) OVER (PARTITION BY depno) FROM empsalary;

is obviously an aggregate function. This type of function can be used as
it is now; only the executor will change its behavior.
The current pgsql feature set lacks window functions like rank(). This
type of function must 1) have a context, as SETOF functions do, 2)
return values until the executor says "DONE", rather than the function
itself saying "DONE" as in a SETOF function, and 3) know about other
tuples (mainly the ORDER BY clause), since rank() doesn't take any
arguments but knows the ranking boundary. I suppose pgsql needs a new
function system for these types, called "window functions".
Once we can define window functions, users gain extensibility for this
type of function.

- do we really need the FRAME clause?
From my survey, Oracle and DB2 have a FRAME clause

SELECT empno, sum(salary) OVER (ORDER BY empno ROWS BETWEEN
UNBOUNDED PRECEDING AND CURRENT ROW) FROM empsalary;

while MS SQL Server doesn't (note that the text from "ROWS" through
"CURRENT ROW" is called the FRAME clause). Implementing the FRAME clause
is much more complicated than supporting only PARTITION BY and ORDER BY,
because we need random access to the result tuples. Though we will be
expected to support the FRAME clause in the long term, it could be put
off for the first release. Even if rank() doesn't support the FRAME
clause (only PARTITION and ORDER), it is still very useful - more so
than what we have now, at least.

- execution order
In SQL:2003, the execution order is defined as

where -> group by -> having -> (windowing) * N -> order by (outer,
currently existing one)
where windowing is
partition by -> order by -> (framing) * N

But Oracle seems to use another order

(windowing) * N -> where -> group by ... and so on.

Which is better for us? With Oracle's order you can write
SELECT empno, rank() OVER (PARTITION BY depno ORDER BY salary) AS
topsalary FROM empsalary
WHERE topsalary < 3
to get the top 3 people earning the highest salary. With the SQL
standard's order, you would have to nest the query.
I maintain that the first (standard) one is better, because we may want
to use the result of a normal aggregate in the OVER clause.

- plan and node
Currently in my mind the execution strategy could be:

1. Where & GroupBy & Having
|
2. SortBy partitionClause, orderByClause
|
3. Window
   foreach partition:
       if not there_is_frame():
           # no frame clause: aggregate once over the whole partition
           aggvalue = null
           foreach row in partition:
               aggvalue = agg_trans_func(aggvalue, row)
           aggvalue = agg_final_func(aggvalue)

       foreach row in partition:
           if there_is_frame():
               # frame clause: re-aggregate over this row's frame
               aggvalue = null
               frame = make_frame(row)
               foreach frame_row in frame:
                   aggvalue = agg_trans_func(aggvalue, frame_row)
               aggvalue = agg_final_func(aggvalue)

           set aggvalue to row
           val = window_func(row)
           set val to row
   goto 2. if another window remains
|
4. SortBy ORDER BY clause (outer) & Limit
|
5. Output

This pseudo code is quite simple and stupid. We may optimize it by
splitting tasks with MergeJoin, etc., or improve step 2 by collecting
identical PARTITION clauses to reduce sort operations, or use hashed
aggregation to create the partitions. But let's be stupid at first.
Optimization is secondary.
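Steps 2 and 3 (without a frame clause) can be sketched in ordinary code. A purely illustrative Python version - not executor code - computing the run_total column from the example query; the ages and salaries are reconstructed to be consistent with that sample output:

```python
from itertools import groupby
from operator import itemgetter

def running_sum_over(rows, partition_key, order_key, value_key):
    """SUM(value) OVER (PARTITION BY partition_key ORDER BY order_key)."""
    # Step 2: one sort on (PARTITION BY key, window ORDER BY key).
    ordered = sorted(rows, key=lambda r: (r[partition_key], r[order_key]))
    out = []
    # Step 3: walk each partition, carrying the aggregate transition state.
    for _, partition in groupby(ordered, key=itemgetter(partition_key)):
        acc = 0                                    # aggregate initial state
        for row in partition:
            acc += row[value_key]                  # agg_trans_func
            out.append({**row, "run_total": acc})  # set aggvalue to row
    return out

# Employees reconstructed from the example (values chosen to match run_total):
emps = [
    {"dept": "ENG", "empno": 1, "age": 30, "salary": 50000},
    {"dept": "ENG", "empno": 2, "age": 25, "salary": 40000},
    {"dept": "QA",  "empno": 3, "age": 28, "salary": 30000},
    {"dept": "QA",  "empno": 4, "age": 35, "salary": 35000},
]
result = running_sum_over(emps, "dept", "age", "salary")
# run_total per row: 40000, 90000, 30000, 65000 (matches the sample output)
```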


References:
description by Stefan DeBloch
http://wwwdvs.informatik.uni-kl.de/courses/NEDM/WS0607/Vorlesungsunterlagen/NEDM.Chapter.06.Windows_and_Query_Functions_in_SQL.pdf

via Wikipedia[Select (SQL)] http://en.wikipedia.org/wiki/Select_(SQL)


BTW, what about Bizgres now?

I'd appreciate any comments and discussion :)


[COMMITTERS] pgsnap - pgsnap: Bugfix : Toggle function on a checkbox had disabled the

Log Message:
-----------
Bugfix : Toggle function on a checkbox had disabled the checkbox's ability to
check on and off.

Tags:
----
REL0_3_STABLE

Modified Files:
--------------
pgsnap/pgsnap:
ChangeLog (r1.33.2.10 -> r1.33.2.11)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgsnap/pgsnap/pgsnap/ChangeLog.diff?r1=1.33.2.10&r2=1.33.2.11)
pgsnap/pgsnap/template:
header.template.html (r1.5.2.1 -> r1.5.2.2)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgsnap/pgsnap/pgsnap/template/header.template.html.diff?r1=1.5.2.1&r2=1.5.2.2)


[ADMIN] Connecting from xp to linux server

Hi all,

this is the problem I am facing.

I have PG 8.3 running on a Kubuntu installation in a virtual machine.
From inside the virtual machine I can use pgAdmin to manage PG, but
when I try to connect from the XP machine I get this error: "Error
connecting to the server: FATAL: missing or erroneous pg_hba.conf file".

I set the permissions in pg_hba.conf on the server.

The machines talk to each other; I can ping and use telnet on port 5432.

The log on the server does not report any problem.

Ideas?

--
Dr. Simone Gadenz
via P. Togliatti 69, 50051 Castelfiorentino (FI)
+39 339 6053660




Re: [SQL] Parallel updates on multiple cores

Andrei wrote:

> The function above updates the rows between the ids start_id and
> end_id. I have a quad-core processor, so I run two separate connections
> to the database: select populate_test_data(5000,1) and another, select
> populate_test_data(5000,2). In this case each function runs on one
> core, doing the inserts in parallel, but when I try to run select
> select_unprocessed(1,5001) and, from another connection, select
> select_unprocessed(5001, 10001), one of the processes locks the table
> so the other one has to wait until the table is unlocked. Each
> process updates different parts of the table. Is there a way to do
> the updates in parallel on multiple cores?
>

Wait until the other is done or wait until it has done what it needs to?

If it appears not to update the records, I would look at the id ranges
you are passing. You insert 5000 rows with each of the two function
calls, then you tell the update function to update row ids 1 to 10001 -
have you reset the sequence for the id column, or do you drop and
recreate the table before each test? My guess is that no updates appear
to happen because the ids assigned by the serial type are larger than
10000.

Also you update with processed='n' - is that what you want? Is that the
only column you look at to see that it is done?


Transactions would be the only cause of the problem you describe. I am
guessing that you use bigger numbers than 5000 in your tests, and that
the examples above use overlapping ids. If the first process updates
row 5001 early, then the second may need to wait until it commits to
update it again. It can also work the other way: row 5001 is updated
by the second process, locking it until it finishes, and the first
waits until the second commits to update it again.
With 5000 rows I wouldn't expect to see a time difference.

Without an order by in the select the rows can be returned and updated
in any order.


--

Shane Ambler
pgSQL (at) Sheeky (dot) Biz

Get Sheeky @ http://Sheeky.Biz


[COMMITTERS] pginstaller - pginst: Prep for release

Log Message:
-----------
Prep for release

Tags:
----
REL8_3_STABLE

Modified Files:
--------------
pginst/package:
README.TXT (r1.7.2.2 -> r1.7.2.3)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pginstaller/pginst/package/README.TXT.diff?r1=1.7.2.2&r2=1.7.2.3)


[pgsql-es-ayuda] passing an array in pl/pgsql

Hello, I'm getting started with pl/pgsql and I can't manage to create a
two-dimensional array, load data into it, or pass it from one function
to another.

I declared the array like this:

prueba numeric[][];

I intend to load data into it using two variables that I increment:

prueba[x][y] := variable_numeric

and then I want to return the array:

return tabla;

and how do I call this function and return the array to another one?
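One way this can work in PL/pgSQL (a sketch only, untested here; the function name and the values loaded are illustrative): subscript assignment cannot extend a multidimensional array, so build each row as a one-dimensional array and concatenate the rows together.

```sql
CREATE OR REPLACE FUNCTION build_table(nrows int, ncols int)
RETURNS numeric[] AS $$
DECLARE
    prueba numeric[] := '{}';  -- [][] in a declaration is not enforced anyway
    fila   numeric[];
BEGIN
    FOR x IN 1..nrows LOOP
        fila := '{}';
        FOR y IN 1..ncols LOOP
            fila := fila || (x * y)::numeric;  -- your computed value here
        END LOOP;
        prueba := prueba || ARRAY[fila];       -- append the row to the 2-D array
    END LOOP;
    RETURN prueba;  -- an ordinary numeric[] that any other function can accept
END;
$$ LANGUAGE plpgsql;

-- e.g. SELECT build_table(2, 3); and the result can be passed to another
-- function declared with a numeric[] parameter.
```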


--
TIP 10: don't use HTML in your question; whoever answers probably won't be able to read it

Re: [GENERAL] filter database list

Christophe Simonis wrote:
> Is there a way to filter the database list based on the tables
> contained in the database?

Yes, there is.

Cheers
Tino :-)

Re: [HACKERS] libpq support for arrays and composites

> That makes it quite useless for my intended purpose.

I found no more use cases for text results after libpqtypes started to
take shape, even though libpqtypes supports all data types in text and
binary, excluding arrays and composites. Because of this, adding a text
parser for arrays and composites felt like a lot of work for little
gain. libpqtypes is really designed as a binary interface. The text
support offered allows existing applications to use the new interface
with results generated by PQexec(), meaning you can use PQgetf without
having to change code to use PQputf().

If you take another glance at libpqtypes, you may see that result format
decisions are pretty well abstracted and there really is no need for
text results anymore (okay, I'll categorize that as an opinion).

> I also am not particularly enamoured of the libpqtypes way of doing
> things, which feels rather foreign to me.

Not sure we can fix this issue. We made every attempt to keep things
familiar ... printf/scanf style. It's a new approach for libpq but an
old one for C hackers.

--
Andrew Chernow
eSilo, LLC
every bit counts
http://www.esilo.com/


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [pgsql-fr-generale] Problème de select suivant un update

Guillaume Lelarge wrote:
> I've just put together a small table of the nine tests I eventually ran
> (the numbers are seconds):
>
>          UPDATE  ANALYZE  CHECKPOINT  SELECT  SELECT
> Test 1      992                          292    0.02
> Test 2      945    27.31       13.7   275.52    0.02
> Test 3      942    33          22        724    2.14
> Test 4      946    31          29        709    2.5
> Test 5      924    47          21        644    0.3
> Test 6      942    33           9.8      699    1.3
> Test 7      953    33.8         9.99     683    1
> Test 8      810    26          99         23    8
> Test 9      869    32           9.7      375   45
>

> It is quite astonishing that the time for the first SELECT triples
> between tests 2 and 3 (the only difference between the two is the
> config, where I raised checkpoint_segments from 3 to 30).
>
> Note that the execution time of the UPDATE hardly varies at all,
> except in tests 8 and 9 (the particularity of those tests: an
> effective_cache_size raised from the default of 128 MB to 1 GB).
>
> The interesting test is number 8.
>
> Config difference between test 7 and test 8:
> * shared_buffers raised from 50 MB to 1 GB
> * effective_cache_size raised from 128 MB to 1 GB
>
> Config difference between test 8 and test 9:
> * shared_buffers lowered from 1 GB to 50 MB
> * effective_cache_size raised from 1 GB to 1.3 GB
>
>
> Differences between your config and mine:
> * shared_buffers from 32 MB to 1 GB
> * wal_buffers from 64 kB to 1 MB
> * checkpoint_segments from 3 to 30
> * checkpoint_timeout from 5 min to 15 min
> * effective_cache from 128 MB to 1 GB


The big parameters changed being shared_buffers and
effective_cache_size, with such values you need a
machine with at least 3 GB of memory.

So this morning I ran two tests on a server with a
PostgreSQL database and the following parameters:

--- PostgreSQL parameters:
--- shared_buffers = 1024MB
--- checkpoint_segments = 30
--- wal_buffers = 1024kB
--- autovacuum = off
--- effective_cache_size = 1024MB
---
--- Linux system parameters:
--- memory: 4GB
---
---
--- Note: the database's file system is replicated by
--- drbd from linux-ha to a second machine over a
--- private 1 Gb link.

And I get, in seconds:

        UPDATE  ANALYZE  CHECKPOINT  SELECT
test1:    1086      191         119     111
test2:    1310      223         136     701


For test1, I had stopped all other activity on the PostgreSQL
database.
For test2, I re-enabled the feeds into the other tables.

Watching the activity of the 'postgres: writer process',
and using an 'iostat -k 60 60' command, one sees a lot of
disk-write activity during the SELECT, which only finishes
toward the end of the bgwriter's activity.

==================================================================

Note: I ran the following test on the office PC next to me,
hoping to improve the SELECT times thanks to a 10-minute wait
before the SELECT:

--- PostgreSQL parameters:
--- shared_buffers = 128MB
--- checkpoint_segments = 3
--- wal_buffers = 64kB
--- autovacuum = on
--- effective_cache_size = 128MB
---
--- Linux system parameters:
--- memory: 1.8 GB

1) UPDATE of 51872 rows in 512 seconds
2) ANALYZE VERBOSE in 22 seconds
3) CHECKPOINT in 7 seconds
4) \!iostat -k 60 10
5) SELECT, 0 rows, in 431 seconds

There is no other activity on the machine apart from my test.
The 'iostat' command shows that the PostgreSQL bgwriter process
does not work during the wait, and starts working again during
the SELECT.

I would have thought it would use the idle time (during iostat)
to bring the database's disk files up to date.
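The write cost being observed during the first SELECT can be paid up
front instead. A hedged sketch, assuming the writes come from
not-yet-flushed dirty pages and hint-bit updates (the table and column
names are invented for illustration, not taken from the tests above):

```sql
-- Pay the post-UPDATE write cost before the first SELECT
-- (ma_table / valeur are illustrative names):
UPDATE ma_table SET valeur = valeur + 1;
VACUUM ANALYZE ma_table;  -- rewrites the modified pages and refreshes stats
CHECKPOINT;               -- flushes remaining dirty shared buffers to disk
SELECT count(*) FROM ma_table WHERE valeur > 100;  -- should now be mostly reads
```

Whether this actually shortens the SELECT on this hardware would need to
be measured the same way as the tests above.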

Thanks, Valérie.

--

********************************************************************
* The views expressed here are strictly personal and do not        *
* engage the responsibility of METEO-FRANCE.                       *
********************************************************************
* Valerie SCHNEIDER              Tel : +33 (0)5 61 07 81 91        *
* METEO-FRANCE / DSI/DEV         Fax : +33 (0)5 61 07 81 09        *
* 42, avenue G. Coriolis         Email : Valerie.Schneider@meteo.fr*
* 31057 TOULOUSE Cedex 1 - FRANCE      http://www.meteo.fr         *
********************************************************************


--
Sent via pgsql-fr-generale mailing list (pgsql-fr-generale@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-fr-generale

[COMMITTERS] pginstaller - pginst: Update notes

Log Message:
-----------
Update notes

Tags:
----
REL8_3_STABLE

Modified Files:
--------------
pginst:
RELEASE_PROCESS.txt (r1.3 -> r1.3.2.1)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pginstaller/pginst/RELEASE_PROCESS.txt.diff?r1=1.3&r2=1.3.2.1)


[COMMITTERS] muninpgplugins - muninpgplugins: add common files

Log Message:
-----------
add common files

Added Files:
-----------
muninpgplugins:
AUTHORS (r1.1)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/muninpgplugins/muninpgplugins/AUTHORS?rev=1.1&content-type=text/x-cvsweb-markup)
COPYING (r1.1)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/muninpgplugins/muninpgplugins/COPYING?rev=1.1&content-type=text/x-cvsweb-markup)
ChangeLog (r1.1)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/muninpgplugins/muninpgplugins/ChangeLog?rev=1.1&content-type=text/x-cvsweb-markup)
INSTALL (r1.1)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/muninpgplugins/muninpgplugins/INSTALL?rev=1.1&content-type=text/x-cvsweb-markup)
README (r1.1)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/muninpgplugins/muninpgplugins/README?rev=1.1&content-type=text/x-cvsweb-markup)
TODO (r1.1)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/muninpgplugins/muninpgplugins/TODO?rev=1.1&content-type=text/x-cvsweb-markup)


RE: [pgsql-es-ayuda] Duda sql bases de datos


Hi: Following your advice, I have created the following function:

String cod = "001";
String miTabla = "elementosNaturales";
// same name and same structure in the DB I copy from (BD1)
// and in the DB I write to (BD2)

Statement stmt1 = conn.createStatement();

stmt1.execute("CREATE OR REPLACE FUNCTION volcadoDatos() RETURNS void AS $$"
    + "DECLARE "
    + " fila RECORD;"
    + "BEGIN"
    + " SELECT dblink_connect('dbname = bd_donde_copiar user = usuario password = 111111');"
    + " FOR fila IN SELECT * FROM " + miTabla + " WHERE codigo = '" + cod + "' LOOP"
    + " SELECT dblink_exec('INSERT INTO " + miTabla + " VALUES(fila)');"
    + " END LOOP;"
    + " SELECT dblink_disconnect();"
    + "END;"
    + "$$ LANGUAGE plpgsql;");

stmt1.close();

It all executes, but nothing gets copied into the table in BD2. I think
the problem is in the line:

    + " SELECT dblink_exec('INSERT INTO " + miTabla + " VALUES(fila)');"

because it uses VALUES(fila) instead of inserting column by column. The
thing is that both tables have the same structure and are completely
identical, with many columns. Also, I will run this function on several
different tables, so it has to be generic. Does anyone know how I can
fix this?

Thank you very much for your help; this is very important to me and I
can't get it working.
Thanks again.

Laura
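A possible generic variant of the problematic line (an untested sketch:
it assumes the server can cast the record to text and back to the
table's row type, which may not hold on 8.1; note also that inside a
plpgsql body a call whose result is discarded should use PERFORM rather
than SELECT):

```sql
-- Inside the loop, serialize the whole row instead of naming each column
-- (miTabla and fila as in the question above; untested against 8.1):
PERFORM dblink_exec('INSERT INTO ' || miTabla
        || ' SELECT (' || quote_literal(fila::text) || '::' || miTabla || ').*');
```

This keeps the function usable for any table whose name is passed in,
since no column list is spelled out.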




> Date: Wed, 28 May 2008 12:40:25 -0400
> From: marcoantoniofrias@gmail.com
> To: lauraleyton@hotmail.es
> Subject: Re: FW: [pgsql-es-ayuda] Duda sql bases de datos
> CC: pgsql-es-ayuda@postgresql.org
>
> 2008/5/28 Laura reiva <lauraleyton@hotmail.es>:
> > Hi,
> > I can't grow the original database any further, as it already contains
> > around a hundred tables and I have to do this "copy" with several of them,
> > so creating a new table in the original database for each one would be
> > very complex. I'll have to use dblink, although I don't know it.
> > I have PostgreSQL 8.1 installed. Could you explain to me how the dblink
> > product works?
>
> 1st: install dblink in database 1:
>
> $ psql -d db1 -U usuario -f /direccion/al/archivo/dblink.sql
>
> 2nd: create a function with contents something like this:
>
> CREATE OR REPLACE FUNCTION dblink_db1_db2() RETURNS VOID AS $$
> DECLARE
>   fila RECORD;
> BEGIN
>
>   -- open the connection
>   SELECT dblink_connect('dbname=db2 user=usuario password=contrasena');
>
>   -- fetch the rows from db1 (the current connection) [you can also
>   -- do this with cursors]
>   FOR fila IN SELECT * FROM tablaAlumnos_db1 WHERE sexo = 'M' LOOP
>     -- insert into db2. fila: the structure obtained from each row
>     -- of the table
>     -- campo1, campo2, etc.: the columns of the table in the 2nd database
>     SELECT dblink_exec('INSERT INTO tablaAlumnos_db2 VALUES(' ||
>       fila.campo1 || ',' || fila.campo2 || ');');
>   END LOOP;
>
>   -- close the connection
>   SELECT dblink_disconnect();
>
> END;
> $$ LANGUAGE plpgsql;
>
> 3rd: run the function:
>
> SELECT dblink_db1_db2();
>
> Have a read of the files under contrib/dblink/doc in the source code.
>
> --
> Greetings and hugs...
>
> Marco Antonio Frias Butrón
> Slackware Linux User
> Linux Registered User #356229 - http://counter.li.org/




[COMMITTERS] pginstaller - pginst: Prep for release

Log Message:
-----------
Prep for release

Tags:
----
REL8_2_STABLE

Modified Files:
--------------
pginst/package:
README.TXT (r1.4.2.8 -> r1.4.2.9)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pginstaller/pginst/package/README.TXT.diff?r1=1.4.2.8&r2=1.4.2.9)


[SQL] Parallel updates on multiple cores

I have the following case: a simple table

drop table test_data;
create table test_data (
id bigserial not null primary key,
content varchar(50),
processed varchar(1)
);

My function doing the inserts

CREATE OR REPLACE FUNCTION populate_test_data(IN nr_records BIGINT, IN proc_nr BIGINT) RETURNS integer AS $$
DECLARE
    counter BIGINT := 0;
    record_val text;
BEGIN
LOOP    
    counter:=counter+1;
    record_val:=((('v ' || counter) || ' p ') || proc_nr);
    insert into test_data(content, processed) values(record_val,'n');
    EXIT WHEN counter > nr_records;
END LOOP;
RETURN 0;
END;
$$ LANGUAGE plpgsql;

where nr_records represents the number of inserts, and

CREATE OR REPLACE FUNCTION select_unprocessed(IN start_id BIGINT, IN end_id BIGINT) RETURNS integer AS $$
DECLARE
    counter BIGINT := 0;
    record_val text;
    rec record;

BEGIN
FOR rec IN SELECT id, content, processed FROM test_data WHERE id >= start_id AND id < end_id
LOOP        
    record_val:=rec.content || '-DONE-';
    update test_data set content=record_val, processed='n' where id=rec.id;
END LOOP;
RETURN 0;

END;
$$ LANGUAGE plpgsql;

The function above updates the rows with ids between start_id and end_id.

I have a quad-core processor, so I open two separate connections to the
database: select populate_test_data(5000,1) and, from the other,
select populate_test_data(5000,2). In this case each function runs on one
core, doing the inserts in parallel. But when I try to run
select select_unprocessed(1, 5001) and, from another connection,
select select_unprocessed(5001, 10001), one of the processes locks the
table, so the other has to wait until the table is unlocked. Each process
updates a different part of the table.

Is there a way to do the updates in parallel on multiple cores?
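For context, an UPDATE normally takes row-level locks plus a
non-conflicting ROW EXCLUSIVE lock on the table, so two sessions
updating disjoint id ranges should not serialize on a table lock. A
diagnostic sketch against the standard catalogs (column names as they
were in the 8.x era) to see what a stuck session is actually waiting on:

```sql
-- Run in a third session while one select_unprocessed() call appears blocked:
SELECT l.pid, l.locktype, l.mode, l.granted, a.current_query
FROM pg_locks l
JOIN pg_stat_activity a ON a.procpid = l.pid
WHERE NOT l.granted;
```

If no ungranted locks show up, the apparent blocking is more likely I/O
or CPU contention than locking.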

Re: [pgsql-advocacy] PostgreSQL derivatives

On Thu, 2008-06-05 at 17:08 -0400, Robert Treat wrote:
> On Thursday 05 June 2008 11:24:44 Seth Grimes wrote:
> > I give a plug to PostgreSQL as the base for a variety of data
> > warehousing products in a blog article, ParAccel Excels By Tapping
> > Experience and Open Source, at
> > http://www.intelligententerprise.com/blog/archives/2008/06/how_paraccel_ex.html
> >
>
> AFAIK Truviso does not distribute any open source products. And actually, I
> don't know of any contributions they have made directly to the community.
> They hired command prompt to do some work, and that was contributed back, but
> not sure on the specifics (i'd guess contributing back might have been part
> of the deal)... the people I know who work there are swell guys though! :-)
>
> Greenplum does distribute an open source product, specifically the bizgres
> database. It's not widely used, but it is still available, so they should
> probably be credited as such.

Greenplum sponsored my efforts to include Partitioning and Sort
improvements into 8.1 and 8.2 respectively, and have contributed at
least 3 other patches that the community has rejected (for whatever
reason). IMHO, Bizgres was really their way of showing that useful work
had been done, but I agree it is out of date now in many respects
because and only because it hasn't been updated since 8.1.

--
Simon Riggs

www.2ndQuadrant.com

PostgreSQL Training, Services and Support


--
Sent via pgsql-advocacy mailing list (pgsql-advocacy@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-advocacy

Re: [ADMIN] invalid page header in block 29 of relation "pg_type"

I hope I've solved it.

I found 512 damaged bytes on the 29th 8k page of the pg_type's datafile
($PGDATA/base/16390/1247), it was overwritten by text from some log(?!).

When I had dropped the whole 8k page (dd of=/tmp/head count=29 bs=8192,
dd of=/tmp/tail skip=30 bs=8192, then cat /tmp/head /tmp/tail), the
pg_dumpall utility was then able to dump the DB instance.

I compared schemas (pg_dumpall -s); fortunately only several empty,
unused tables were affected.

Then I imported the dump, and now the DB instance seems to work properly,
as do pg_dump and VACUUM ANALYZE.

Best regards,

Filip Krska

--

ComSTAR spol. s r. o.
Třebohostická 14
100 31 Praha 10

HotLine (pev.): 274 016 000
HotLine (mob.): 777 343 857


On Fri, 6 Jun 2008, Alvaro Herrera wrote:

> Filip Krška wrote:
>> Hello,
>>
>> we have (maybe due to temporary Disk Array HW failure, which occurred on
>> the same day the pg_dump started to complain - array is now successfully
>> rebuilt) problem with consistency of pg_catalog.pg_type table.
>
> Did you solve this problem?
>
> --
> Alvaro Herrera                        http://www.CommandPrompt.com/
> The PostgreSQL Company - Command Prompt, Inc.
>
--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin

Re: [HACKERS] Automating our version-stamping a bit better

Andrew Dunstan wrote:
> Tom Lane wrote:
>>
>> I'm tempted to suggest letting the script invoke autoconf, too,
>> but that would require standardizing where to find the correct
>> version of autoconf for each branch; so it might not be such a
>> great idea.
>
> Unfortunately that's true. Maybe we could agree on using an alias for
> the right version of autoconf, but it seems likely to be error prone.

Or we could sidestep the issue by not running autoconf, but
search-replace the version strings in configure directly with the perl
script.

--
Heikki Linnakangas
EnterpriseDB

http://www.enterprisedb.com


[PATCHES] Minor pedantry for "help" text

*** a/src/bin/psql/mainloop.c
--- b/src/bin/psql/mainloop.c
***************
*** 177,188 **** MainLoop(FILE *source)
(line[4] == '\0' || line[4] == ';' || isspace((unsigned char) line[4])))
{
free(line);
! puts(_("\nYou are using psql, the command-line interface to PostgreSQL."));
! puts(_("\t\\? for psql help"));
! puts(_("\t\\h or \\help for SQL help\n"));
! puts(_("\t\\g or \";\" to execute a query"));
! puts(_("\t\\q to quit psql\n"));
! puts(_("\t\\copyright to view the copyright\n"));

fflush(stdout);
continue;
--- 177,188 ----
(line[4] == '\0' || line[4] == ';' || isspace((unsigned char) line[4])))
{
free(line);
! puts(_("You are using psql, the command-line interface to PostgreSQL."));
! printf(_("Type: \\copyright for distribution terms\n"
! " \\h for help with SQL commands\n"
! " \\? for help with psql commands\n"
! " \\g or terminate with semicolon to execute query\n"
! " \\q to quit\n"));

fflush(stdout);
continue;

Attached is a patch that makes some minor changes to the text emitted by
the new "help" command. Previous output:

postgres=# help

You are using psql, the command-line interface to PostgreSQL.
\? for psql help
\h or \help for SQL help

\g or ";" to execute a query
\q to quit psql

\copyright to view the copyright

postgres=#

New output:

postgres=# help
You are using psql, the command-line interface to PostgreSQL.
Type: \copyright for distribution terms
\h for help with SQL commands
\? for help with psql commands
\g or terminate with semicolon to execute query
\q to quit
postgres=#

The newlines in the previous text were inconsistent with psql error
message style elsewhere, aside from being distracting. The advice on
commands to enter next was also just emitted, without actually telling
the user that these are possible inputs. Essentially the text was a
regression from the text we've always used in the startup banner, so I
just re-instituted the old text.

-Neil, doing his best to suppress his aesthetic objection to having
"help" be valid psql input in the first place