Thursday, July 24, 2008

Re: [HACKERS] Do we really want to migrate plproxy and citext into PG core distribution?

Hi

One of the reasons to get PL/proxy into core is to make it available to Windows users as well.
The idea is to get to a situation where

createlang plproxy mydb

just works. If we can achieve this without putting plproxy into core, then I would like to hear how.

Asko

On Fri, Jul 25, 2008 at 2:19 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
"Robert Haas" <robertmhaas@gmail.com> writes:
> ISTM that if you're willing to admit, even with caveats, that
> PL/perl, PL/tcl, or PL/python doesn't "need" to be in core, then
> excluding anything else from core on the basis that it doesn't need to
> be there is silly.

You are merely setting up a straw man, as no one has suggested such a
policy.  Any specific decision of this type is going to involve a
combination of factors, and that's only one.

                       regards, tom lane


Re: [HACKERS] [RFC] Unsigned integer support.

Hello Tom,

On Thu, Jul 24, 2008 at 10:10 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> "Ryan Bradetich" <rbradetich@gmail.com> writes:
>> I am looking to take advantage of PostgreSQL extensible type system
>> and implement unsigned integer support.
>
> This has been proposed before, and foundered before on the question
> of implicit coercions. If you're willing to make all coercions *to*
> unsigned types be explicit (or at most assignment), then I think
> it can be made to work without breaking anything. But usually the
> folk who ask for this feature are hoping that bare integer literals
> like "42" will get interpreted as unsigned when they want them to be.
> The problem with that wish is illustrated by
>
> select 1500000000 + 1500000000;
>
> These literals might be either int4 or uint4, therefore this command
> might yield either an integer-overflow error or 3000000000::uint4.
> That's not a distinction you can fuzz over --- it's got to be one
> or the other, and backwards compatibility says it'd better be the
> first.

I am in agreement with you on this. Since SQL does not specify
unsigned types, I was assuming only explicit and assignment casts.
I should have probably mentioned that in the RFC. Thanks for
pointing this out.

My main goal for these types is the reduced storage space. I am fine
with people needing to cast to the unsigned types to get that benefit.
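
To put a number on the storage argument, here is a sketch using the
planned (not yet existing) types and made-up column names:

CREATE TABLE samples (
    port    uint2,   -- 2 bytes instead of the 4 a signed integer needs
    counter uint4    -- counts up to ~4.29 billion in 4 bytes instead of an 8-byte bigint
);

(The real savings will of course depend on alignment and the
surrounding columns.)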

My plans for the example above would be:

1. SELECT 1500000000 + 1500000000 --> Throws overflow error.
2. SELECT 1500000000::uint4 + 1500000000 --> Returns 3000000000::uint4.


>
>> I am hoping the removal of many of the implicit casts in
>> PostgreSQL 8.3 will simplify this task to where this objection can be
>> removed.
>
> The implicit casts we removed were cross-type-category cases.
> If you hope for unsigned types to be considered part of the numeric
> category, there's no guidance for you there. In fact, the real nub
> of the problem is what type shall be initially assigned to an
> integer-looking literal, and how will you get things to behave sanely
> if that initial choice wasn't what was desired. We still have some
> issues around the fact that "42" isn't considered a smallint. Throwing
> in another possible meaning isn't going to help.
>
>> My understanding is the SQL standard does not provide support for
>> unsigned integers, so I am planning on making all casts from unsigned
>> integers to other data types explicit.
>
> It's really the other direction that would be contentious ...
>
> regards, tom lane

Thanks for your comments! I have already started to play around a bit with
the types and will hopefully have some code ready for review / feedback soon.

- Ryan


Re: [GENERAL] Sequence

Alex Cheshev wrote:
> Hello.
> A table has two primary keys:

It appears that you really meant "a table has a two-field composite
primary key". There can only be one primary key per table - that's the
whole point - but the primary key can be composed of more than one field.

[Note: if you format your SQL when you post, more people will bother to
read it and try to help you out. For example, your CREATE TABLE could've
been better written as:]

> CREATE TABLE example (
>     pk1 integer,
>     pk2 integer,
>     PRIMARY KEY (pk1, pk2)
> );

> To add a new record I use command:

> INSERT INTO example (pk1, pk2) VALUES (0, 0).

> Before adding the new
> record I have to find out the last value of pk2. How can I use something
> like this:

> INSERT INTO example (pk1, pk2) VALUES (0, nextval('pk2'))

> ?

> If a table just has one primary key I can use sequence (CREATE
> SEQUENCE). What about two primary keys?

You can still use a SERIAL type, or manually use CREATE SEQUENCE and
nextval().
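
For example, a minimal sketch (the sequence name is made up):

CREATE SEQUENCE example_pk2_seq;

CREATE TABLE example (
    pk1 integer,
    pk2 integer DEFAULT nextval('example_pk2_seq'),
    PRIMARY KEY (pk1, pk2)
);

INSERT INTO example (pk1) VALUES (0);  -- pk2 is filled in from the sequence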

I suspect I'm missing the point of your question, though. Perhaps if you
gave a real-world example of what you are trying to do, with meaningful
field names?

--
Craig Ringer


Column level privileges was:(Re: [HACKERS] Extending grant insert on tables to sequences)

On Thu, Jul 24, 2008 at 12:09 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> "Jaime Casanova" <jcasanov@systemguards.com.ec> writes:
>>> Another issue is the interaction with the planned column-level GRANT
>>> feature.
>
>> Although that is a feature we want, it is a WIP one... do we stop patches
>> because they can conflict with a project we don't know will be applied
>> soon?
>
> Well, considering that that one is implementing a feature required by
> SQL spec, your feature will lose any tug-of-war ;-).

I knew the answer already, but...

OK, it seems this is the latest version of the column-level privileges patch:
http://archives.postgresql.org/pgsql-patches/2008-04/msg00417.php

Is anyone working on it?

--
Regards,
Jaime Casanova
PostgreSQL support and training
Guayaquil - Ecuador
Cel. (593) 87171157


Re: [GENERAL] Sequence

On Fri, 25.07.2008 at 15:54:23 +1100, Alex Cheshev wrote the following:
> Hello.
> A table has two primary keys: CREATE TABLE example ( pk1 integer, pk2
> integer, PRIMARY KEY (pk1, pk2) ). To add a new record I use command:
> INSERT INTO example (pk1, pk2) VALUES (0, 0). Before adding the new
> record I have to find out the last value of pk2. How can I use something
> like this: INSERT INTO example (pk1, pk2) VALUES (0, nextval('pk2'))?
> If a table just has one primary key I can use sequence (CREATE
> SEQUENCE). What about two primary keys?

You can also use a sequence.

Regards, Andreas
--
Andreas Kretschmer
Contact: Heynitz: 035242/47150, D1: 0160/7141639 (more: -> header)
GnuPG-ID: 0x3FFF606C, private 0x7F4584DA http://wwwkeys.de.pgp.net


Re: [HACKERS] [RFC] Unsigned integer support.

"Ryan Bradetich" <rbradetich@gmail.com> writes:
> I am looking to take advantage of PostgreSQL extensible type system
> and implement unsigned integer support.

This has been proposed before, and foundered before on the question
of implicit coercions. If you're willing to make all coercions *to*
unsigned types be explicit (or at most assignment), then I think
it can be made to work without breaking anything. But usually the
folk who ask for this feature are hoping that bare integer literals
like "42" will get interpreted as unsigned when they want them to be.
The problem with that wish is illustrated by

select 1500000000 + 1500000000;

These literals might be either int4 or uint4, therefore this command
might yield either an integer-overflow error or 3000000000::uint4.
That's not a distinction you can fuzz over --- it's got to be one
or the other, and backwards compatibility says it'd better be the
first.

> I am hoping the removal of many of the implicit casts in
> PostgreSQL 8.3 will simplify this task to where this objection can be
> removed.

The implicit casts we removed were cross-type-category cases.
If you hope for unsigned types to be considered part of the numeric
category, there's no guidance for you there. In fact, the real nub
of the problem is what type shall be initially assigned to an
integer-looking literal, and how will you get things to behave sanely
if that initial choice wasn't what was desired. We still have some
issues around the fact that "42" isn't considered a smallint. Throwing
in another possible meaning isn't going to help.

> My understanding is the SQL standard does not provide support for
> unsigned integers, so I am planning on making all casts from unsigned
> integers to other data types explicit.

It's really the other direction that would be contentious ...

regards, tom lane


[GENERAL] Sequence

Hello.
A table has two primary keys: CREATE TABLE example ( pk1 integer, pk2
integer, PRIMARY KEY (pk1, pk2) ). To add a new record I use command:
INSERT INTO example (pk1, pk2) VALUES (0, 0). Before adding the new
record I have to find out the last value of pk2. How can I use something
like this: INSERT INTO example (pk1, pk2) VALUES (0, nextval('pk2'))?
If a table just has one primary key I can use sequence (CREATE
SEQUENCE). What about two primary keys?


[HACKERS] [RFC] Unsigned integer support.

Hello hackers,

I know the development community is in the middle of the July 2008
commit-fest, so I apologize if this design proposal is inappropriate
at this time.

I am looking to take advantage of PostgreSQL's extensible type system
and implement unsigned integer support. The data I am dealing with
contains many unsigned data types, and I am planning on using these
types to reduce storage consumption. I am initially looking to add
support for 16-bit and 32-bit unsigned integers, with the potential to
add 8-bit and 64-bit unsigned integers if needed or desired by the
community.

Searching through the list archives, I found two objections raised in the past:


1. Implicit casts between different data types.

I am hoping the removal of many of the implicit casts in
PostgreSQL 8.3 will simplify this task to where this objection can be
removed.

My plan (without much experimentation yet) is to have maybe a handful
of casts, something like:
* uint4 -> integer
* integer -> uint4
* bigint -> uint4
* integer -> uint2
* uint2 -> smallint

and then supply operators covering a reasonable set of functionality.
My initial thought is to provide default operators for any combination
that is not covered by an implicit cast, so that common expressions
typed at the psql command line work directly.

As an example, I am planning for the following SQL expressions to
work correctly:

3000000000::uint4 + 10
3000000000::uint4 > 10
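
To make that concrete, here is a rough sketch of the kind of SQL-level
declarations I have in mind (the function names and module path are
placeholders; none of this is implemented yet):

CREATE TYPE uint4;

CREATE FUNCTION uint4_in(cstring) RETURNS uint4
    AS 'MODULE_PATHNAME' LANGUAGE C IMMUTABLE STRICT;
CREATE FUNCTION uint4_out(uint4) RETURNS cstring
    AS 'MODULE_PATHNAME' LANGUAGE C IMMUTABLE STRICT;

CREATE TYPE uint4 (
    INPUT = uint4_in,
    OUTPUT = uint4_out,
    INTERNALLENGTH = 4,
    PASSEDBYVALUE
);

-- casts would be explicit or at most assignment, never implicit
CREATE CAST (integer AS uint4)
    WITH FUNCTION int4_to_uint4(integer) AS ASSIGNMENT;
CREATE CAST (uint4 AS integer)
    WITH FUNCTION uint4_to_int4(uint4);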

My understanding is the SQL standard does not provide support for
unsigned integers, so I am planning on making all casts from unsigned
integers to other data types explicit. Is this acceptable to the
community?

Another question for the community: should we allow the following cast?

-1::uint4

Even though this is acceptable C code, I am leaning towards throwing
an out-of-range error when this occurs.
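
In other words, something like this (the exact error text is only
illustrative):

SELECT (-1)::uint4;
ERROR:  uint4 out of range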

Are there some areas I am missing or should investigate further
before working on this project?


2. There is not much demand for unsigned integer types.

Not much I can do about that :) I am willing to post my work as
a PgFoundry project.

PgFoundry already has an uint project:
http://pgfoundry.org/projects/uint/

Unfortunately this project seems not to have gone anywhere. The last
activity was in late 2006, and there are no files checked into the SCM
repository. Is it acceptable to hijack this PgFoundry project, or
should I start a new one (assuming there is any interest in publishing
this work)?


Although I am not targeting inclusion of this type in the core
PostgreSQL code, I would like to post code for review and receive
feedback from the community on this work. As I understand it, this RFC
is the first step in the process :) Once I have some code ready for
review, is it acceptable to use the commit-fest wiki for this project?

Thanks much for your time!

- Ryan


Re: [GENERAL] php + postgresql

> 2.
> Another problem was that no matter how many times I checked and
> re-checked code, or which pg_fetch_* function I used, copying an array
> member and trying to use it later just would not work, eg
>
> while ($row = pg_fetch_array($query)) {
> $content = $row[0]
> }
>
> echo $content;


pg_fetch_array() expects a query result resource, not the SQL query string itself.

You need something like this:

$query = "select id, name from tablename";
$result = pg_query($query);
while ($row = pg_fetch_array($result)) {
    $content = $row[0];
}


> 3.
> Some examples I found used PHP's pg_num_rows() function to count the
> rows in a result, then iterated through them with a "for" loop ... is
> this required behaviour (PHP docs don't appear to discuss this)?

You used to have to do this but you don't any more.

The old style was something like:

<?php
....

$result = pg_query($query);
$rows = pg_num_rows($result);

for ($row_num = 0; $row_num < $rows; $row_num++) {
    $db_row = pg_fetch_array($result, $row_num);
}

The new style works like:

<?php
....

$result = pg_query($query);
while ($db_row = pg_fetch_array($result)) {
    // work with $db_row here; no need to fetch again or track row numbers
}


This was changed a long time ago (in PHP 4.1.0, according to the PHP manual).

> 4.
> Another weird one was that this statement always failed:
>
> $name = "file.php";
> SELECT fld_content FROM tbl_page WHERE fld_name='$name'

Escape your data:

$name = 'blah';
$query = "SELECT fld_content FROM tbl_page WHERE fld_name='" .
         pg_escape_string($name) . "'";
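
(For what it's worth, at the SQL level the escaping essentially doubles
any embedded single quote, so a value like O'Brien arrives as

SELECT fld_content FROM tbl_page WHERE fld_name = 'O''Brien';

instead of breaking the statement at the first quote.)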

<shameless plug>
I have some intro guides here you might want to check out:

http://www.designmagick.com/category/2/
</shameless plug>

--
Postgresql & php tutorials
http://www.designmagick.com/


Re: [HACKERS] Do we really want to migrate plproxy and citext into PG core distribution?

On Thu, Jul 24, 2008 at 2:37 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Hannu Krosing <hannu@krosing.net> writes:
>> And it would be nice, if some well-maintained sample language (pl/sh or
>> even pl/dummy) which serves as a sample of latest ways to make use of
>> pl/function support in core pg code would be included in core as well.
>
> And why do you think the above three don't serve that purpose? Or even
> more to the point, how likely is it that an unused "dummy" language
> would be well-maintained?

For whatever it's worth, I'm in the middle of writing a PL
(PL/LOLCODE, specifically), and have found helpful examples of how to
do stuff in PL/pgSQL, PL/Perl, *and* pl/proxy. The examples in the
documentation, followed by a bunch of hair-pulling while reading the
PL/pgSQL source, were enough to get started without the benefit of a dummy
language. That's not to say that a dummy language wouldn't be useful,
only that for a coder of my caliber (i.e. Not Terribly Skilled but
Able to Code Myself Out of a Wet Paper Bag) it wasn't necessary.
Because pl/proxy is not in core, I didn't immediately look to it for
examples, but was pointed there by a helpful soul on IRC.

My own opinion is that though there have been several in recent years,
new PLs are written rarely enough that "best practices" don't change a
whole lot. PL/Perl and PL/pgSQL particularly are very well maintained,
and thus demonstrate in most cases a perfectly acceptable way of
writing a PL.

As to whether or not pl/proxy should be in core, I have no particular
opinion. PL/LOLCODE probably should not be. :)

- Josh / eggyknap


Re: [GENERAL] php + postgresql

> There does seem to be some evidence of problems historically with PHP
> and persistent connections in PostgreSQL, on the PHP forums. The advice
> is typically to avoid them.

You'll find the same advice for MySQL + persistent connections, or any
other DB + persistent connections. It's not a PHP + Postgres thing.

--
Postgresql & php tutorials
http://www.designmagick.com/


Re: [GENERAL] FK check will use index on referring table?

On Thu, Jul 24, 2008 at 7:06 PM, Stephen Frost <sfrost@snowman.net> wrote:
> * John D. Burger (john@mitre.org) wrote:
>> My understanding is that PG will use an index on the referring side of a
>> foreign key for FK checks. How can I tell whether it's doing that?
>
> It should, when it makes sense, yes. Having the actual schema
> definitions would help in debugging this, of course.
>
>> EXPLAIN ANALYZE just shows something like this:
>>
>> => explain analyze delete from segments where segmentid = 24305259;
>
> What does:
>
> explain analyze
> delete from tokenizedSegments
> where segmentId = 24305259;
>
> look like?

If you're doing that, put it in a begin; rollback; sandwich so your
rows don't disappear while testing.

Or just do a

explain analyze select 1 from tokenizedSegments where segmentId = 24305259;

The plan should be the same, I'd think.

>
> If more than a few percent of the tokenizedSegments table has a
> segmentId of 24305259 then PG may rightly be scanning the whole table
> sequentially because going through it randomly with an index would be
> slower. There's a few options which can tune those parameters in the
> planner, of course, but you might consider doing a test
> 'set enable_seqscan = false;' first, if it's indeed doing one, to see
> what the difference really is.
>
> Thanks,
>
> Stephen


Re: [GENERAL] FK check will use index on referring table?

Stephen Frost <sfrost@snowman.net> writes:
> * John D. Burger (john@mitre.org) wrote:
>> My understanding is that PG will use an index on the referring side of a
>> foreign key for FK checks. How can I tell whether it's doing that?

There isn't any very good way at the moment :-(

> If more than a few percent of the tokenizedSegments table has a
> segmentId of 24305259 then PG may rightly be scanning the whole table
> sequentially because going through it randomly with an index would be
> slower. There's a few options which can tune those parameters in the
> planner, of course, but you might consider doing a test
> 'set enable_seqscan = false;' first, if it's indeed doing one, to see
> what the difference really is.

One thing to keep in mind while experimenting is that the plan for an FK
update query is cached the first time the particular trigger is fired in
a session; and in 8.2 I don't think there's any way to un-cache it short
of starting a fresh session. This won't affect manual experimentation
of course, but if you do something that you are hoping will change the
trigger's behavior (like fooling with enable_seqscan), be sure to start
a new session first.

Also, the trigger's internal FK query will be parameterized; so the
closest manual equivalent will be something like

prepare foo(int) as delete from tokenizedSegments where segmentId = $1;
explain execute foo(24305259);

(adjust datatype of parameter to match segments.segmentID...) and here
again remember that "prepare" caches the plan.

regards, tom lane


Re: [GENERAL] FK check will use index on referring table?

* John D. Burger (john@mitre.org) wrote:
> My understanding is that PG will use an index on the referring side of a
> foreign key for FK checks. How can I tell whether it's doing that?

It should, when it makes sense, yes. Having the actual schema
definitions would help in debugging this, of course.

> EXPLAIN ANALYZE just shows something like this:
>
> => explain analyze delete from segments where segmentid = 24305259;

What does:

explain analyze
delete from tokenizedSegments
where segmentId = 24305259;

look like?

If more than a few percent of the tokenizedSegments table has a
segmentId of 24305259 then PG may rightly be scanning the whole table
sequentially because going through it randomly with an index would be
slower. There's a few options which can tune those parameters in the
planner, of course, but you might consider doing a test
'set enable_seqscan = false;' first, if it's indeed doing one, to see
what the difference really is.
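
For example, something along these lines (just a sketch, wrapped in a
transaction so the setting doesn't stick around afterwards):

BEGIN;
SET LOCAL enable_seqscan = false;
EXPLAIN ANALYZE SELECT 1 FROM tokenizedSegments WHERE segmentId = 24305259;
ROLLBACK;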

Thanks,

Stephen

Re: [HACKERS] Additional psql requirements

Simon Riggs <simon@2ndquadrant.com> wrote:

> * access to version number
> * simple mechanism for conditional execution
> * ability to set substitution variables from command execution
> * conditional execution whether superuser or not

Can we use pgScript for such flow control?
http://pgscript.projects.postgresql.org/INDEX.html

I'm not sure whether pgScript can be used in pgAdmin already, but if we
support it in both psql and pgAdmin, the scripting syntax will become a
de facto standard, because they are the most widely used interfaces to
Postgres. I think it is not good to add another "dialect" that can be
used only in psql.

Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center


Re: [JDBC] GSSAPI/KRB5 and JDBC (again)

* Peter Koczan (pjkoczan@gmail.com) wrote:
> Hello again, I've finally had time to test out Kris' patches for
> GSSAPI support in the Postgres 8.3 JDBC
> (http://ejurka.com/pgsql/jars/gss), but I'm not quite getting it
> right, it seems. The JDBC built fine, and it connects properly via
> MD5, but it can't get a GSSAPI connection, even with Kris' advice in
> http://archives.postgresql.org/pgsql-jdbc/2008-01/msg00154.php

I got it working under Linux based on his comments.

> Caused by: javax.security.auth.login.LoginException: Integrity check
> on decrypted field failed (31)

So you know, that generally means "wrong password". Have you tried
kinit'ing first? Is it prompting you for a password?

> Am I forgetting to set something up? I know that GSSAPI in JDBC is
> unsupported currently but it would be really nice to offer it even in
> a beta capacity. Thanks.

I'm *really* anxious to have GSSAPI support in JDBC, and fully
supported. I've got it working in a test rig, but I need it working
under Linux and Windows for a number of clients, and I haven't had time
to make sure all the issues are worked through. :/

Thanks!

Stephen

Re: [HACKERS] pg_dump vs data-only dumps vs --disable-triggers

* Tom Lane (tgl@sss.pgh.pa.us) wrote:
> Simon's patch to split up --schema-only into two switches has broken
> this logic, but I'm inclined to just rip it out rather than trying
> to fix it. If the user says --disable-triggers, he should get
> trigger disable commands around the data part of the dump, no matter
> what he said or didn't say about schema dumping.

Agreed entirely. I was suspicious of that bit of code but, honestly,
wasn't quite sure what the original intent had been. Sorry I haven't
finished my review of his latest version; he went and changed all the
macro names. ;) If you're taking care of it then I won't spend any
further time on it. I'm pretty comfortable with it now, honestly,
provided the renaming didn't change anything and my last set of
comments were included.

Thanks,

Stephen

[COMMITTERS] clearxlogtail - clearxlogtail: Replace <err.h> and err() with portable code

Log Message:
-----------
Replace <err.h> and err() with portable code.

Modified Files:
--------------
clearxlogtail:
pg_clearxlogtail.c (r1.1 -> r1.2)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/clearxlogtail/clearxlogtail/pg_clearxlogtail.c.diff?r1=1.1&r2=1.2)


Re: [pgsql-es-ayuda] [OT] On implementing a virtual server hosting service

Companies like www.joyent.com do what you want to do: they work with Solaris zones on Dell 2950 machines with up to 32 GB of RAM and massive storage behind them.

They have 4500 servers and don't require a contract, only that you pay for the time you use. (The minimum is one month, I think, but each zone starts with a minimum of 512 MB of RAM for US$ 120, with root password, Java, MySQL, PostgreSQL, Tomcat, etc. already installed and optimized for whatever configuration you have.)

Quite similar to the service Amazon offers, in fact.

gb.-

2008/7/24 Alvaro Herrera <alvherre@alvh.no-ip.org>:
Espartano wrote:

> A good shared-kernel "virtualization" solution is FreeBSD jails:
>
> http://www.freebsd.org/doc/es_ES.ISO8859-1/books/handbook/jails.html
>
> But, as Gunnar says, the hard part is restricting the resources of each
> vserver. Memory and CPU quotas are being developed for jails, but they
> are not mature enough yet, and they don't work with the new ULE
> scheduler that came with FreeBSD 7.0 (at least not by default). Another
> detail, which I don't know whether it still applies, is that jails had
> problems with PostgreSQL because of its use of IPC; I don't know if
> that is still the case, it has been many years since I saw it.

It still has that problem. I think the "solution" for this is to use a
different user to run Postgres in each jail.

--
Alvaro Herrera                          Developer, http://www.PostgreSQL.org/
"The problem with the facetime model is not just that it's demoralizing, but
that the people pretending to work interrupt the ones actually working."
                                                          (Paul Graham)



--
Guido Barosio
-----------------------
http://www.globant.com
guido.barosio@globant.com

[COMMITTERS] clearxlogtail - clearxlogtail: Support PGXS build if USE_PGXS is defined

Log Message:
-----------
Support PGXS build if USE_PGXS is defined.

Modified Files:
--------------
clearxlogtail:
Makefile (r1.1 -> r1.2)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/clearxlogtail/clearxlogtail/Makefile.diff?r1=1.1&r2=1.2)


Re: [GENERAL] [PERL DBI] Insertiing data across multiple tables

JD Wong wrote:
> Does anybody know how to insert data over multiple tables
> transactionally? The relationship is 1:1 with the latter table having
> a foreign key constraint. In order to add data to Table2 I need to
> know the primary key value of the same record in Table1 before it's
> committed, which in this case is a serial integer which of course is
> regulated by an implicit index.

Have a look at INSERT ... RETURNING. It can feed back the new rows to
your application so that you can make your second insert without having
to pre-select or re-select them (assuming you're running 8.2+).
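
For example, a sketch with made-up column names:

INSERT INTO table1 (name)
VALUES ('foo')
RETURNING id;                        -- fetch this value through DBI

INSERT INTO table2 (table1_id, detail)
VALUES (42, 'bar');                  -- 42 being the id returned above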

klint.

--
Klint Gore
Database Manager
Sheep CRC
A.G.B.U.
University of New England
Armidale NSW 2350

Ph: 02 6773 3789
Fax: 02 6773 3266
EMail: kgore4@une.edu.au



Re: [PATCHES] [HACKERS] WITH RECUSIVE patches 0723

> Now, I get a different problem, this time with the following code
> intended to materialize paths on the fly and summarize down to a
> certain depth in a tree:
>
> CREATE TABLE tree(
> id INTEGER PRIMARY KEY,
> parent_id INTEGER REFERENCES tree(id)
> );
>
> INSERT INTO tree
> VALUES (1, NULL), (2, 1), (3,1), (4,2), (5,2), (6,2), (7,3), (8,3),
> (9,4), (10,4), (11,7), (12,7), (13,7), (14, 9), (15,11), (16,11);
>
> WITH RECURSIVE t(id, path) AS (
> VALUES(1,ARRAY[NULL::integer])
> UNION ALL
> SELECT tree.id, t.path || tree.id
> FROM tree JOIN t ON (tree.parent_id = t.id)
> )
> SELECT
> t1.id, count(t2.*)
> FROM
> t t1
> JOIN
> t t2
> ON (
> t1.path[1:2] = t2.path[1:2]
> AND
> array_upper(t1.path,1) = 2
> AND
> array_upper(t2.path,1) > 2
> )
> GROUP BY t1.id;
> ERROR: unrecognized node type: 203

Thanks for the report. We will look into this.

> Please apply the attached patch to help out with tab
> completion in psql.

OK, it will appear in the next patches.
--
Tatsuo Ishii
SRA OSS, Inc. Japan


Re: [PATCHES] [HACKERS] WITH RECUSIVE patches 0723

*** a/src/bin/psql/tab-complete.c
--- b/src/bin/psql/tab-complete.c
***************
*** 613,621 **** psql_completion(char *text, int start, int end)
"COMMENT", "COMMIT", "COPY", "CREATE", "DEALLOCATE", "DECLARE",
"DELETE FROM", "DISCARD", "DROP", "END", "EXECUTE", "EXPLAIN", "FETCH",
"GRANT", "INSERT", "LISTEN", "LOAD", "LOCK", "MOVE", "NOTIFY", "PREPARE",
! "REASSIGN", "REINDEX", "RELEASE", "RESET", "REVOKE", "ROLLBACK",
"SAVEPOINT", "SELECT", "SET", "SHOW", "START", "TRUNCATE", "UNLISTEN",
! "UPDATE", "VACUUM", "VALUES", NULL
};

static const char *const backslash_commands[] = {
--- 613,621 ----
"COMMENT", "COMMIT", "COPY", "CREATE", "DEALLOCATE", "DECLARE",
"DELETE FROM", "DISCARD", "DROP", "END", "EXECUTE", "EXPLAIN", "FETCH",
"GRANT", "INSERT", "LISTEN", "LOAD", "LOCK", "MOVE", "NOTIFY", "PREPARE",
! "REASSIGN", "RECURSIVE", "REINDEX", "RELEASE", "RESET", "REVOKE", "ROLLBACK",
"SAVEPOINT", "SELECT", "SET", "SHOW", "START", "TRUNCATE", "UNLISTEN",
! "UPDATE", "VACUUM", "VALUES", "WITH", NULL
};

static const char *const backslash_commands[] = {
***************
*** 2044,2049 **** psql_completion(char *text, int start, int end)
--- 2044,2058 ----
pg_strcasecmp(prev2_wd, "ANALYZE") == 0))
COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_tables, NULL);

+ /* WITH [RECURSIVE] */
+ else if (pg_strcasecmp(prev_wd, "WITH") == 0)
+ {
+ static const char *const list_WITH[] =
+ {"RECURSIVE", NULL};
+
+ COMPLETE_WITH_LIST(list_WITH);
+ }
+
/* ANALYZE */
/* If the previous word is ANALYZE, produce list of tables */
else if (pg_strcasecmp(prev_wd, "ANALYZE") == 0)
On Thu, Jul 24, 2008 at 01:55:37PM +0900, Tatsuo Ishii wrote:
> > Program received signal SIGSEGV, Segmentation fault.
>
> Thanks for the report. Here are the new patches from Yoshiyuki.

Thanks for the patch :)

Now, I get a different problem, this time with the following code
intended to materialize paths on the fly and summarize down to a
certain depth in a tree:

CREATE TABLE tree(
id INTEGER PRIMARY KEY,
parent_id INTEGER REFERENCES tree(id)
);

INSERT INTO tree
VALUES (1, NULL), (2, 1), (3,1), (4,2), (5,2), (6,2), (7,3), (8,3),
(9,4), (10,4), (11,7), (12,7), (13,7), (14, 9), (15,11), (16,11);

WITH RECURSIVE t(id, path) AS (
VALUES(1,ARRAY[NULL::integer])
UNION ALL
SELECT tree.id, t.path || tree.id
FROM tree JOIN t ON (tree.parent_id = t.id)
)
SELECT
t1.id, count(t2.*)
FROM
t t1
JOIN
t t2
ON (
t1.path[1:2] = t2.path[1:2]
AND
array_upper(t1.path,1) = 2
AND
array_upper(t2.path,1) > 2
)
GROUP BY t1.id;
ERROR: unrecognized node type: 203

Please apply the attached patch to help out with tab
completion in psql.

Cheers,
David.
--
David Fetter <david@fetter.org> http://fetter.org/
Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter
Skype: davidfetter XMPP: david.fetter@gmail.com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate

[GENERAL] PgDBF, Another XBase to PostgreSQL converter

It'd been a couple of years since I wrote a program to convert some
random database to PostgreSQL, and it seemed like time to crank out
another one. The results, PgDBF, are available under the GPLv3 and
downloadable from http://honeypot.net/project/pgdbf .

Why yet another program to convert XBase databases in general (and
FoxPro in particular) to PostgreSQL? Because the other ones I found
were incomplete, complex, slow, or not available on Unix-like
systems. We needed something that could run hourly to keep our legacy
database in sync with our new production system, and the program we'd
been using (XBaseToPg: http://honeypot.net/project/xbasetopg) was a
hack on another program that was never meant to be abused that way.

Seriously, PgDBF is fast and simple. I profiled it, optimized,
profiled, optimized, and profiled again until I couldn't find anything
else to tweak. And yet its speed comes primarily through its
simplicity, so it should be very easy to maintain.

Give it a try. It runs on everything Unixy that I had available for
testing (including Linux x86, FreeBSD x86-64, and OS X PPC). I'm
pretty pleased with how this turned out.
--
Kirk Strauser


[GENERAL] FK check will use index on referring table?

Hi -

My understanding is that PG will use an index on the referring side
of a foreign key for FK checks. How can I tell whether it's doing
that? EXPLAIN ANALYZE just shows something like this:

=> explain analyze delete from segments where segmentid = 24305259;
                                  QUERY PLAN
------------------------------------------------------------------------------
 Index Scan using segments_pkey on segments  (cost=0.00..10.46 rows=1 width=6)
   (actual time=0.243..0.248 rows=1 loops=1)
   Index Cond: (segmentid = 24305259)
 Trigger for constraint $1: time=0.344 calls=1
 Trigger for constraint $2: time=0.180 calls=1
 Trigger for constraint $1: time=0.325 calls=1
 Trigger for constraint tokenizedsegments_segmentid_fkey: time=16910.357 calls=1
 Total runtime: 16911.712 ms

tokenizedSegments.segmentID has an FK reference to
segments.segmentID, and there is an index (not UNIQUE) on the
referring column, but the extreme sloth of that last trigger suggests
it is not using it. Deferring the constraint makes no difference
(perhaps not surprising for a single delete).

How can I tell what the trigger is doing? I'm using 8.2.5 and I've
ANALYZED everything.

Thanks.

- John D. Burger
MITRE


Re: [GENERAL] php + postgresql

On Thursday 24 July 2008 12:41, admin wrote:
> 1.
> I ended up using pg_prepare() and pg_execute() as pg_query() alone just
> didn't seem to work. But SELECT statements seemed to be cached or
> persistent in some way, such that they "lived" beyond the life of the
> PHP script. Is there something I need to know about persistent behaviour
> in PG that doesn't exist in MySQL?

Not sure what causes this on your server, but I always use something
like this, i.e. first connect, then do your stuff, and then close the
connection:

require("dbconnect.inc"); // holds the $conn which is pg_connect("with
passes")

if (!$conn)
{exit("Database connection failed. Please try again." . $conn);}

$sql ="SELECT ...";

$product=pg_exec($conn,$sql);
if (!$product)
{exit("Database connection failed. Please try again.");}

while ($row = pg_fetch_row($product))
{
echo"

$row[1]

";
}

pg_close($conn);

BR,
--
Aarni

Burglars usually come in through your windows.


Re: [HACKERS] Do we really want to migrate plproxy and citext into PG core distribution?

On 7/24/08, Peter Eisentraut <peter_e@gmx.net> wrote:
> Am Wednesday, 23. July 2008 schrieb Marko Kreen:
> > And the idea to turn pgfoundry into CPAN is pointless. A user may
> > be willing to throw random modules into his random Perl script, but
> > not into his whole DB architecture.
>
> Based on what reasoning?

Based on my own behaviour.

--
marko


Re: [PATCHES] pg_dump additional options for performance

Index: doc/src/sgml/ref/pg_dump.sgml
===================================================================
RCS file: /home/sriggs/pg/REPOSITORY/pgsql/doc/src/sgml/ref/pg_dump.sgml,v
retrieving revision 1.103
diff -c -r1.103 pg_dump.sgml
*** doc/src/sgml/ref/pg_dump.sgml 20 Jul 2008 18:43:30 -0000 1.103
--- doc/src/sgml/ref/pg_dump.sgml 24 Jul 2008 07:30:19 -0000
***************
*** 133,139 ****
<para>
Include large objects in the dump. This is the default behavior
except when <option>--schema</>, <option>--table</>, or
! <option>--schema-only</> is specified, so the <option>-b</>
switch is only useful to add large objects to selective dumps.
</para>
</listitem>
--- 133,140 ----
<para>
Include large objects in the dump. This is the default behavior
except when <option>--schema</>, <option>--table</>, or
! <option>--schema-only</> or <option>--schema-before-data</> or
! <option>--schema-after-data</> is specified, so the <option>-b</>
switch is only useful to add large objects to selective dumps.
</para>
</listitem>
***************
*** 426,431 ****
--- 427,452 ----
</varlistentry>

<varlistentry>
+ <term><option>--schema-before-data</option></term>
+ <listitem>
+ <para>
+ Dump object definitions (schema) that occur before table data,
+ using the order produced by a full dump.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>--schema-after-data</option></term>
+ <listitem>
+ <para>
+ Dump object definitions (schema) that occur after table data,
+ using the order produced by a full dump.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
<term><option>-S <replaceable class="parameter">username</replaceable></option></term>
<term><option>--superuser=<replaceable class="parameter">username</replaceable></option></term>
<listitem>
***************
*** 790,795 ****
--- 811,844 ----
</para>

<para>
+ The output of <application>pg_dump</application> can be divided into three parts:
+ <itemizedlist>
+ <listitem>
+ <para>
+ Before Data - objects output before data, which includes
+ <command>CREATE TABLE</command> statements and others.
+ This part can be requested using <option>--schema-before-data</>.
+ </para>
+ </listitem>
+ <listitem>
+ <para>
+ Table Data - data can be requested using <option>--data-only</>.
+ </para>
+ </listitem>
+ <listitem>
+ <para>
+ After Data - objects output after data, which includes
+ <command>CREATE INDEX</command> statements and others.
+ This part can be requested using <option>--schema-after-data</>.
+ </para>
+ </listitem>
+ </itemizedlist>
+ This allows us to work more easily with large data dump files when
+ there is some need to edit commands or resequence their execution for
+ performance.
+ </para>
+
+ <para>
Because <application>pg_dump</application> is used to transfer data
to newer versions of <productname>PostgreSQL</>, the output of
<application>pg_dump</application> can be loaded into
Index: doc/src/sgml/ref/pg_restore.sgml
===================================================================
RCS file: /home/sriggs/pg/REPOSITORY/pgsql/doc/src/sgml/ref/pg_restore.sgml,v
retrieving revision 1.75
diff -c -r1.75 pg_restore.sgml
*** doc/src/sgml/ref/pg_restore.sgml 13 Apr 2008 03:49:21 -0000 1.75
--- doc/src/sgml/ref/pg_restore.sgml 24 Jul 2008 07:30:19 -0000
***************
*** 321,326 ****
--- 321,346 ----
</varlistentry>

<varlistentry>
+ <term><option>--schema-before-data</option></term>
+ <listitem>
+ <para>
+ Restore object definitions (schema) that occur before table data,
+ using the order produced by a full restore.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>--schema-after-data</option></term>
+ <listitem>
+ <para>
+ Restore object definitions (schema) that occur after table data,
+ using the order produced by a full restore.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
<term><option>-S <replaceable class="parameter">username</replaceable></option></term>
<term><option>--superuser=<replaceable class="parameter">username</replaceable></option></term>
<listitem>
***************
*** 572,577 ****
--- 592,626 ----
</para>

<para>
+ The actions of <application>pg_restore</application> can be
+ divided into three parts:
+ <itemizedlist>
+ <listitem>
+ <para>
+ Before Data - objects output before data, which includes
+ <command>CREATE TABLE</command> statements and others.
+ This part can be requested using <option>--schema-before-data</>.
+ </para>
+ </listitem>
+ <listitem>
+ <para>
+ Table Data - data can be requested using <option>--data-only</>.
+ </para>
+ </listitem>
+ <listitem>
+ <para>
+ After Data - objects output after data, which includes
+ <command>CREATE INDEX</command> statements and others.
+ This part can be requested using <option>--schema-after-data</>.
+ </para>
+ </listitem>
+ </itemizedlist>
+ This allows us to work more easily with large data dump files when
+ there is some need to edit commands or resequence their execution for
+ performance.
+ </para>
+
+ <para>
The limitations of <application>pg_restore</application> are detailed below.

<itemizedlist>
Index: src/bin/pg_dump/pg_backup.h
===================================================================
RCS file: /home/sriggs/pg/REPOSITORY/pgsql/src/bin/pg_dump/pg_backup.h,v
retrieving revision 1.47
diff -c -r1.47 pg_backup.h
*** src/bin/pg_dump/pg_backup.h 13 Apr 2008 03:49:21 -0000 1.47
--- src/bin/pg_dump/pg_backup.h 24 Jul 2008 07:30:19 -0000
***************
*** 89,95 ****
int use_setsessauth;/* Use SET SESSION AUTHORIZATION commands
* instead of OWNER TO */
char *superuser; /* Username to use as superuser */
! int dataOnly;
int dropSchema;
char *filename;
int schemaOnly;
--- 89,95 ----
int use_setsessauth;/* Use SET SESSION AUTHORIZATION commands
* instead of OWNER TO */
char *superuser; /* Username to use as superuser */
! int dumpObjFlags; /* which objects types to dump */
int dropSchema;
char *filename;
int schemaOnly;
Index: src/bin/pg_dump/pg_backup_archiver.c
===================================================================
RCS file: /home/sriggs/pg/REPOSITORY/pgsql/src/bin/pg_dump/pg_backup_archiver.c,v
retrieving revision 1.157
diff -c -r1.157 pg_backup_archiver.c
*** src/bin/pg_dump/pg_backup_archiver.c 4 May 2008 08:32:21 -0000 1.157
--- src/bin/pg_dump/pg_backup_archiver.c 24 Jul 2008 07:30:19 -0000
***************
*** 56,62 ****
static void _selectTablespace(ArchiveHandle *AH, const char *tablespace);
static void processEncodingEntry(ArchiveHandle *AH, TocEntry *te);
static void processStdStringsEntry(ArchiveHandle *AH, TocEntry *te);
! static teReqs _tocEntryRequired(TocEntry *te, RestoreOptions *ropt, bool include_acls);
static void _disableTriggersIfNecessary(ArchiveHandle *AH, TocEntry *te, RestoreOptions *ropt);
static void _enableTriggersIfNecessary(ArchiveHandle *AH, TocEntry *te, RestoreOptions *ropt);
static TocEntry *getTocEntryByDumpId(ArchiveHandle *AH, DumpId id);
--- 56,62 ----
static void _selectTablespace(ArchiveHandle *AH, const char *tablespace);
static void processEncodingEntry(ArchiveHandle *AH, TocEntry *te);
static void processStdStringsEntry(ArchiveHandle *AH, TocEntry *te);
! static int _tocEntryRequired(TocEntry *te, RestoreOptions *ropt, bool include_acls);
static void _disableTriggersIfNecessary(ArchiveHandle *AH, TocEntry *te, RestoreOptions *ropt);
static void _enableTriggersIfNecessary(ArchiveHandle *AH, TocEntry *te, RestoreOptions *ropt);
static TocEntry *getTocEntryByDumpId(ArchiveHandle *AH, DumpId id);
***************
*** 129,135 ****
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
TocEntry *te;
! teReqs reqs;
OutputContext sav;
bool defnDumped;

--- 129,135 ----
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
TocEntry *te;
! int reqs;
OutputContext sav;
bool defnDumped;

***************
*** 175,193 ****
* Work out if we have an implied data-only restore. This can happen if
* the dump was data only or if the user has used a toc list to exclude
* all of the schema data. All we do is look for schema entries - if none
! * are found then we set the dataOnly flag.
*
! * We could scan for wanted TABLE entries, but that is not the same as
! * dataOnly. At this stage, it seems unnecessary (6-Mar-2001).
*/
! if (!ropt->dataOnly)
{
int impliedDataOnly = 1;

for (te = AH->toc->next; te != AH->toc; te = te->next)
{
reqs = _tocEntryRequired(te, ropt, true);
! if ((reqs & REQ_SCHEMA) != 0)
{ /* It's schema, and it's wanted */
impliedDataOnly = 0;
break;
--- 175,193 ----
* Work out if we have an implied data-only restore. This can happen if
* the dump was data only or if the user has used a toc list to exclude
* all of the schema data. All we do is look for schema entries - if none
! * are found then say we only want DATA type objects.
*
! * We could scan for wanted TABLE entries, but that is not the same.
! * At this stage, it seems unnecessary (6-Mar-2001).
*/
! if (!WANT_DATA(ropt->dumpObjFlags))
{
int impliedDataOnly = 1;

for (te = AH->toc->next; te != AH->toc; te = te->next)
{
reqs = _tocEntryRequired(te, ropt, true);
! if (WANT_SCHEMA_BEFORE_DATA(reqs) || WANT_SCHEMA_AFTER_DATA(reqs))
{ /* It's schema, and it's wanted */
impliedDataOnly = 0;
break;
***************
*** 195,201 ****
}
if (impliedDataOnly)
{
! ropt->dataOnly = impliedDataOnly;
ahlog(AH, 1, "implied data-only restore\n");
}
}
--- 195,201 ----
}
if (impliedDataOnly)
{
! ropt->dumpObjFlags = REQ_DATA;
ahlog(AH, 1, "implied data-only restore\n");
}
}
***************
*** 236,242 ****
AH->currentTE = te;

reqs = _tocEntryRequired(te, ropt, false /* needn't drop ACLs */ );
! if (((reqs & REQ_SCHEMA) != 0) && te->dropStmt)
{
/* We want the schema */
ahlog(AH, 1, "dropping %s %s\n", te->desc, te->tag);
--- 236,242 ----
AH->currentTE = te;

reqs = _tocEntryRequired(te, ropt, false /* needn't drop ACLs */ );
! if (((reqs & REQ_SCHEMA_BEFORE_DATA) != 0) && te->dropStmt)
{
/* We want the schema */
ahlog(AH, 1, "dropping %s %s\n", te->desc, te->tag);
***************
*** 278,284 ****
/* Dump any relevant dump warnings to stderr */
if (!ropt->suppressDumpWarnings && strcmp(te->desc, "WARNING") == 0)
{
! if (!ropt->dataOnly && te->defn != NULL && strlen(te->defn) != 0)
write_msg(modulename, "warning from original dump file: %s\n", te->defn);
else if (te->copyStmt != NULL && strlen(te->copyStmt) != 0)
write_msg(modulename, "warning from original dump file: %s\n", te->copyStmt);
--- 278,284 ----
/* Dump any relevant dump warnings to stderr */
if (!ropt->suppressDumpWarnings && strcmp(te->desc, "WARNING") == 0)
{
! if (!WANT_DATA(ropt->dumpObjFlags) && te->defn != NULL && strlen(te->defn) != 0)
write_msg(modulename, "warning from original dump file: %s\n", te->defn);
else if (te->copyStmt != NULL && strlen(te->copyStmt) != 0)
write_msg(modulename, "warning from original dump file: %s\n", te->copyStmt);
***************
*** 286,292 ****

defnDumped = false;

! if ((reqs & REQ_SCHEMA) != 0) /* We want the schema */
{
ahlog(AH, 1, "creating %s %s\n", te->desc, te->tag);

--- 286,293 ----

defnDumped = false;

! if ((WANT_SCHEMA_BEFORE_DATA(reqs) && WANT_SCHEMA_BEFORE_DATA(ropt->dumpObjFlags)) ||
! (WANT_SCHEMA_AFTER_DATA(reqs) && WANT_SCHEMA_AFTER_DATA(ropt->dumpObjFlags))) /* We want the schema */
{
ahlog(AH, 1, "creating %s %s\n", te->desc, te->tag);

***************
*** 331,337 ****
/*
* If we have a data component, then process it
*/
! if ((reqs & REQ_DATA) != 0)
{
/*
* hadDumper will be set if there is genuine data component for
--- 332,338 ----
/*
* If we have a data component, then process it
*/
! if (WANT_DATA(reqs))
{
/*
* hadDumper will be set if there is genuine data component for
***************
*** 343,349 ****
/*
* If we can output the data, then restore it.
*/
! if (AH->PrintTocDataPtr !=NULL && (reqs & REQ_DATA) != 0)
{
#ifndef HAVE_LIBZ
if (AH->compression != 0)
--- 344,350 ----
/*
* If we can output the data, then restore it.
*/
! if (AH->PrintTocDataPtr !=NULL && WANT_DATA(reqs))
{
#ifndef HAVE_LIBZ
if (AH->compression != 0)
***************
*** 415,421 ****
/* Work out what, if anything, we want from this entry */
reqs = _tocEntryRequired(te, ropt, true);

! if ((reqs & REQ_SCHEMA) != 0) /* We want the schema */
{
ahlog(AH, 1, "setting owner and privileges for %s %s\n",
te->desc, te->tag);
--- 416,422 ----
/* Work out what, if anything, we want from this entry */
reqs = _tocEntryRequired(te, ropt, true);

! if (WANT_SCHEMA_BEFORE_DATA(reqs)) /* We want the schema */
{
ahlog(AH, 1, "setting owner and privileges for %s %s\n",
te->desc, te->tag);
***************
*** 473,479 ****
_disableTriggersIfNecessary(ArchiveHandle *AH, TocEntry *te, RestoreOptions *ropt)
{
/* This hack is only needed in a data-only restore */
! if (!ropt->dataOnly || !ropt->disable_triggers)
return;

ahlog(AH, 1, "disabling triggers for %s\n", te->tag);
--- 474,480 ----
_disableTriggersIfNecessary(ArchiveHandle *AH, TocEntry *te, RestoreOptions *ropt)
{
/* This hack is only needed in a data-only restore */
! if (!WANT_DATA(ropt->dumpObjFlags) || !ropt->disable_triggers)
return;

ahlog(AH, 1, "disabling triggers for %s\n", te->tag);
***************
*** 499,505 ****
_enableTriggersIfNecessary(ArchiveHandle *AH, TocEntry *te, RestoreOptions *ropt)
{
/* This hack is only needed in a data-only restore */
! if (!ropt->dataOnly || !ropt->disable_triggers)
return;

ahlog(AH, 1, "enabling triggers for %s\n", te->tag);
--- 500,506 ----
_enableTriggersIfNecessary(ArchiveHandle *AH, TocEntry *te, RestoreOptions *ropt)
{
/* This hack is only needed in a data-only restore */
! if (!WANT_DATA(ropt->dumpObjFlags) || !ropt->disable_triggers)
return;

ahlog(AH, 1, "enabling triggers for %s\n", te->tag);
***************
*** 1321,1327 ****
return NULL;
}

! teReqs
TocIDRequired(ArchiveHandle *AH, DumpId id, RestoreOptions *ropt)
{
TocEntry *te = getTocEntryByDumpId(AH, id);
--- 1322,1328 ----
return NULL;
}

! int
TocIDRequired(ArchiveHandle *AH, DumpId id, RestoreOptions *ropt)
{
TocEntry *te = getTocEntryByDumpId(AH, id);
***************
*** 2026,2035 ****
te->defn);
}

! static teReqs
_tocEntryRequired(TocEntry *te, RestoreOptions *ropt, bool include_acls)
{
! teReqs res = REQ_ALL;

/* ENCODING and STDSTRINGS items are dumped specially, so always reject */
if (strcmp(te->desc, "ENCODING") == 0 ||
--- 2027,2036 ----
te->defn);
}

! static int
_tocEntryRequired(TocEntry *te, RestoreOptions *ropt, bool include_acls)
{
! int res = ropt->dumpObjFlags;

/* ENCODING and STDSTRINGS items are dumped specially, so always reject */
if (strcmp(te->desc, "ENCODING") == 0 ||
***************
*** 2109,2125 ****
if ((strcmp(te->desc, "<Init>") == 0) && (strcmp(te->tag, "Max OID") == 0))
return 0;

- /* Mask it if we only want schema */
- if (ropt->schemaOnly)
- res = res & REQ_SCHEMA;
-
- /* Mask it we only want data */
- if (ropt->dataOnly)
- res = res & REQ_DATA;
-
/* Mask it if we don't have a schema contribution */
if (!te->defn || strlen(te->defn) == 0)
! res = res & ~REQ_SCHEMA;

/* Finally, if there's a per-ID filter, limit based on that as well */
if (ropt->idWanted && !ropt->idWanted[te->dumpId - 1])
--- 2110,2118 ----
if ((strcmp(te->desc, "<Init>") == 0) && (strcmp(te->tag, "Max OID") == 0))
return 0;

/* Mask it if we don't have a schema contribution */
if (!te->defn || strlen(te->defn) == 0)
! res = res & ~(REQ_SCHEMA_BEFORE_DATA | REQ_SCHEMA_AFTER_DATA);

/* Finally, if there's a per-ID filter, limit based on that as well */
if (ropt->idWanted && !ropt->idWanted[te->dumpId - 1])
Index: src/bin/pg_dump/pg_backup_archiver.h
===================================================================
RCS file: /home/sriggs/pg/REPOSITORY/pgsql/src/bin/pg_dump/pg_backup_archiver.h,v
retrieving revision 1.76
diff -c -r1.76 pg_backup_archiver.h
*** src/bin/pg_dump/pg_backup_archiver.h 7 Nov 2007 12:24:24 -0000 1.76
--- src/bin/pg_dump/pg_backup_archiver.h 24 Jul 2008 07:30:19 -0000
***************
*** 158,169 ****
STAGE_FINALIZING
} ArchiverStage;

! typedef enum
! {
! REQ_SCHEMA = 1,
! REQ_DATA = 2,
! REQ_ALL = REQ_SCHEMA + REQ_DATA
! } teReqs;

typedef struct _archiveHandle
{
--- 158,173 ----
STAGE_FINALIZING
} ArchiverStage;

! #define REQ_SCHEMA_BEFORE_DATA (1 << 0)
! #define REQ_DATA (1 << 1)
! #define REQ_SCHEMA_AFTER_DATA (1 << 2)
! #define REQ_ALL (REQ_SCHEMA_BEFORE_DATA + REQ_DATA + REQ_SCHEMA_AFTER_DATA)
!
! #define WANT_SCHEMA_BEFORE_DATA(req) ((req & REQ_SCHEMA_BEFORE_DATA) == REQ_SCHEMA_BEFORE_DATA)
! #define WANT_DATA(req) ((req & REQ_DATA) == REQ_DATA)
! #define WANT_SCHEMA_AFTER_DATA(req) ((req & REQ_SCHEMA_AFTER_DATA) == REQ_SCHEMA_AFTER_DATA)
! #define WANT_ALL(req) ((req & REQ_ALL) == REQ_ALL)
!

typedef struct _archiveHandle
{
***************
*** 317,323 ****
extern void ReadToc(ArchiveHandle *AH);
extern void WriteDataChunks(ArchiveHandle *AH);

! extern teReqs TocIDRequired(ArchiveHandle *AH, DumpId id, RestoreOptions *ropt);
extern bool checkSeek(FILE *fp);

#define appendStringLiteralAHX(buf,str,AH) \
--- 321,327 ----
extern void ReadToc(ArchiveHandle *AH);
extern void WriteDataChunks(ArchiveHandle *AH);

! extern int TocIDRequired(ArchiveHandle *AH, DumpId id, RestoreOptions *ropt);
extern bool checkSeek(FILE *fp);

#define appendStringLiteralAHX(buf,str,AH) \
Index: src/bin/pg_dump/pg_dump.c
===================================================================
RCS file: /home/sriggs/pg/REPOSITORY/pgsql/src/bin/pg_dump/pg_dump.c,v
retrieving revision 1.497
diff -c -r1.497 pg_dump.c
*** src/bin/pg_dump/pg_dump.c 20 Jul 2008 18:43:30 -0000 1.497
--- src/bin/pg_dump/pg_dump.c 24 Jul 2008 07:35:28 -0000
***************
*** 73,78 ****
--- 73,82 ----
bool aclsSkip;
const char *lockWaitTimeout;

+ /* groups of objects: default is we dump all groups */
+
+ int dumpObjFlags;
+
/* subquery used to convert user ID (eg, datdba) to user name */
static const char *username_subquery;

***************
*** 227,232 ****
--- 231,238 ----
static int disable_triggers = 0;
static int outputNoTablespaces = 0;
static int use_setsessauth = 0;
+ static int schemaBeforeData;
+ static int schemaAfterData;

static struct option long_options[] = {
{"data-only", no_argument, NULL, 'a'},
***************
*** 267,272 ****
--- 273,280 ----
{"disable-triggers", no_argument, &disable_triggers, 1},
{"lock-wait-timeout", required_argument, NULL, 2},
{"no-tablespaces", no_argument, &outputNoTablespaces, 1},
+ {"schema-before-data", no_argument, &schemaBeforeData, 1},
+ {"schema-after-data", no_argument, &schemaAfterData, 1},
{"use-set-session-authorization", no_argument, &use_setsessauth, 1},

{NULL, 0, NULL, 0}
***************
*** 420,425 ****
--- 428,437 ----
disable_triggers = 1;
else if (strcmp(optarg, "no-tablespaces") == 0)
outputNoTablespaces = 1;
+ else if (strcmp(optarg, "schema-before-data") == 0)
+ schemaBeforeData = 1;
+ else if (strcmp(optarg, "schema-after-data") == 0)
+ schemaAfterData = 1;
else if (strcmp(optarg, "use-set-session-authorization") == 0)
use_setsessauth = 1;
else
***************
*** 464,474 ****
if (optind < argc)
dbname = argv[optind];

! if (dataOnly && schemaOnly)
{
! write_msg(NULL, "options -s/--schema-only and -a/--data-only cannot be used together\n");
exit(1);
}

if (dataOnly && outputClean)
{
--- 476,521 ----
if (optind < argc)
dbname = argv[optind];

! /*
! * Look for conflicting options relating to object groupings
! */
! if (schemaOnly && dataOnly)
! {
! write_msg(NULL, "options %s and %s cannot be used together\n",
! "-s/--schema-only", "-a/--data-only");
! exit(1);
! }
! else if ((schemaOnly || dataOnly) &&
! (schemaBeforeData || schemaAfterData))
{
! write_msg(NULL, "options %s and %s cannot be used together\n",
! schemaOnly ? "-s/--schema-only" : "-a/--data-only",
! schemaBeforeData ? "--schema-before-data" : "--schema-after-data");
exit(1);
}
+ else if (schemaBeforeData && schemaAfterData)
+ {
+ write_msg(NULL, "options %s and %s cannot be used together\n",
+ "--schema-before-data", "--schema-after-data");
+ exit(1);
+ }
+
+ /*
+ * Decide which of the object groups we will dump
+ */
+ dumpObjFlags = REQ_ALL;
+
+ if (dataOnly)
+ dumpObjFlags = REQ_DATA;
+
+ if (schemaBeforeData)
+ dumpObjFlags = REQ_SCHEMA_BEFORE_DATA;
+
+ if (schemaAfterData)
+ dumpObjFlags = REQ_SCHEMA_AFTER_DATA;
+
+ if (schemaOnly)
+ dumpObjFlags = (REQ_SCHEMA_BEFORE_DATA | REQ_SCHEMA_AFTER_DATA);

if (dataOnly && outputClean)
{
***************
*** 646,652 ****
* Dumping blobs is now default unless we saw an inclusion switch or -s
* ... but even if we did see one of these, -b turns it back on.
*/
! if (include_everything && !schemaOnly)
outputBlobs = true;

/*
--- 693,699 ----
* Dumping blobs is now default unless we saw an inclusion switch or -s
* ... but even if we did see one of these, -b turns it back on.
*/
! if (include_everything && WANT_DATA(dumpObjFlags))
outputBlobs = true;

/*
***************
*** 658,664 ****
if (g_fout->remoteVersion < 80400)
guessConstraintInheritance(tblinfo, numTables);

! if (!schemaOnly)
getTableData(tblinfo, numTables, oids);

if (outputBlobs && hasBlobs(g_fout))
--- 705,711 ----
if (g_fout->remoteVersion < 80400)
guessConstraintInheritance(tblinfo, numTables);

! if (WANT_DATA(dumpObjFlags))
getTableData(tblinfo, numTables, oids);

if (outputBlobs && hasBlobs(g_fout))
***************
*** 712,718 ****
dumpStdStrings(g_fout);

/* The database item is always next, unless we don't want it at all */
! if (include_everything && !dataOnly)
dumpDatabase(g_fout);

/* Now the rearrangeable objects. */
--- 759,765 ----
dumpStdStrings(g_fout);

/* The database item is always next, unless we don't want it at all */
! if (include_everything && WANT_SCHEMA_BEFORE_DATA(dumpObjFlags))
dumpDatabase(g_fout);

/* Now the rearrangeable objects. */
***************
*** 734,740 ****
ropt->noTablespace = outputNoTablespaces;
ropt->disable_triggers = disable_triggers;
ropt->use_setsessauth = use_setsessauth;
! ropt->dataOnly = dataOnly;

if (compressLevel == -1)
ropt->compression = 0;
--- 781,787 ----
ropt->noTablespace = outputNoTablespaces;
ropt->disable_triggers = disable_triggers;
ropt->use_setsessauth = use_setsessauth;
! ropt->dumpObjFlags = dumpObjFlags;

if (compressLevel == -1)
ropt->compression = 0;
***************
*** 792,797 ****
--- 839,846 ----
printf(_(" --disable-dollar-quoting disable dollar quoting, use SQL standard quoting\n"));
printf(_(" --disable-triggers disable triggers during data-only restore\n"));
printf(_(" --no-tablespaces do not dump tablespace assignments\n"));
+ printf(_(" --schema-before-data dump only the part of schema before table data\n"));
+ printf(_(" --schema-after-data dump only the part of schema after table data\n"));
printf(_(" --use-set-session-authorization\n"
" use SESSION AUTHORIZATION commands instead of\n"
" ALTER OWNER commands to set ownership\n"));
***************
*** 5165,5171 ****
int ncomments;

/* Comments are SCHEMA not data */
! if (dataOnly)
return;

/* Search for comments associated with catalogId, using table */
--- 5214,5220 ----
int ncomments;

/* Comments are SCHEMA not data */
! if (!WANT_SCHEMA_BEFORE_DATA(dumpObjFlags))
return;

/* Search for comments associated with catalogId, using table */
***************
*** 5216,5222 ****
PQExpBuffer target;

/* Comments are SCHEMA not data */
! if (dataOnly)
return;

/* Search for comments associated with relation, using table */
--- 5265,5271 ----
PQExpBuffer target;

/* Comments are SCHEMA not data */
! if (!WANT_SCHEMA_BEFORE_DATA(dumpObjFlags))
return;

/* Search for comments associated with relation, using table */
***************
*** 5568,5574 ****
char *qnspname;

/* Skip if not to be dumped */
! if (!nspinfo->dobj.dump || dataOnly)
return;

/* don't dump dummy namespace from pre-7.3 source */
--- 5617,5623 ----
char *qnspname;

/* Skip if not to be dumped */
! if (!nspinfo->dobj.dump || !WANT_SCHEMA_BEFORE_DATA(dumpObjFlags))
return;

/* don't dump dummy namespace from pre-7.3 source */
***************
*** 5617,5623 ****
dumpType(Archive *fout, TypeInfo *tinfo)
{
/* Skip if not to be dumped */
! if (!tinfo->dobj.dump || dataOnly)
return;

/* Dump out in proper style */
--- 5666,5672 ----
dumpType(Archive *fout, TypeInfo *tinfo)
{
/* Skip if not to be dumped */
! if (!tinfo->dobj.dump || !WANT_SCHEMA_BEFORE_DATA(dumpObjFlags))
return;

/* Dump out in proper style */
***************
*** 5646,5651 ****
--- 5695,5704 ----
i;
char *label;

+ /* Skip if not to be dumped */
+ if (!tinfo->dobj.dump || !WANT_SCHEMA_BEFORE_DATA(dumpObjFlags))
+ return;
+
/* Set proper schema search path so regproc references list correctly */
selectSourceSchema(tinfo->dobj.namespace->dobj.name);

***************
*** 6262,6268 ****
PQExpBuffer q;

/* Skip if not to be dumped */
! if (!stinfo->dobj.dump || dataOnly)
return;

q = createPQExpBuffer();
--- 6315,6321 ----
PQExpBuffer q;

/* Skip if not to be dumped */
! if (!stinfo->dobj.dump || !WANT_SCHEMA_BEFORE_DATA(dumpObjFlags))
return;

q = createPQExpBuffer();
***************
*** 6309,6315 ****
if (!include_everything)
return false;
/* And they're schema not data */
! if (dataOnly)
return false;
return true;
}
--- 6362,6368 ----
if (!include_everything)
return false;
/* And they're schema not data */
! if (!WANT_SCHEMA_BEFORE_DATA(dumpObjFlags))
return false;
return true;
}
***************
*** 6330,6336 ****
FuncInfo *funcInfo;
FuncInfo *validatorInfo = NULL;

! if (dataOnly)
return;

/*
--- 6383,6389 ----
FuncInfo *funcInfo;
FuncInfo *validatorInfo = NULL;

! if (!WANT_SCHEMA_BEFORE_DATA(dumpObjFlags))
return;

/*
***************
*** 6590,6596 ****
int i;

/* Skip if not to be dumped */
! if (!finfo->dobj.dump || dataOnly)
return;

query = createPQExpBuffer();
--- 6643,6649 ----
int i;

/* Skip if not to be dumped */
! if (!finfo->dobj.dump || !WANT_SCHEMA_BEFORE_DATA(dumpObjFlags))
return;

query = createPQExpBuffer();
***************
*** 6985,6991 ****
TypeInfo *sourceInfo;
TypeInfo *targetInfo;

! if (dataOnly)
return;

if (OidIsValid(cast->castfunc))
--- 7038,7044 ----
TypeInfo *sourceInfo;
TypeInfo *targetInfo;

! if (!WANT_SCHEMA_BEFORE_DATA(dumpObjFlags))
return;

if (OidIsValid(cast->castfunc))
***************
*** 7135,7141 ****
char *oprcanhash;

/* Skip if not to be dumped */
! if (!oprinfo->dobj.dump || dataOnly)
return;

/*
--- 7188,7194 ----
char *oprcanhash;

/* Skip if not to be dumped */
! if (!oprinfo->dobj.dump || !WANT_SCHEMA_BEFORE_DATA(dumpObjFlags))
return;

/*
***************
*** 7519,7525 ****
int i;

/* Skip if not to be dumped */
! if (!opcinfo->dobj.dump || dataOnly)
return;

/*
--- 7572,7578 ----
int i;

/* Skip if not to be dumped */
! if (!opcinfo->dobj.dump || !WANT_SCHEMA_BEFORE_DATA(dumpObjFlags))
return;

/*
***************
*** 7827,7833 ****
int i;

/* Skip if not to be dumped */
! if (!opfinfo->dobj.dump || dataOnly)
return;

/*
--- 7880,7886 ----
int i;

/* Skip if not to be dumped */
! if (!opfinfo->dobj.dump || !WANT_SCHEMA_BEFORE_DATA(dumpObjFlags))
return;

/*
***************
*** 8096,8102 ****
bool condefault;

/* Skip if not to be dumped */
! if (!convinfo->dobj.dump || dataOnly)
return;

query = createPQExpBuffer();
--- 8149,8155 ----
bool condefault;

/* Skip if not to be dumped */
! if (!convinfo->dobj.dump || !WANT_SCHEMA_BEFORE_DATA(dumpObjFlags))
return;

query = createPQExpBuffer();
***************
*** 8250,8256 ****
bool convertok;

/* Skip if not to be dumped */
! if (!agginfo->aggfn.dobj.dump || dataOnly)
return;

query = createPQExpBuffer();
--- 8303,8309 ----
bool convertok;

/* Skip if not to be dumped */
! if (!agginfo->aggfn.dobj.dump || !WANT_SCHEMA_BEFORE_DATA(dumpObjFlags))
return;

query = createPQExpBuffer();
***************
*** 8453,8459 ****
PQExpBuffer delq;

/* Skip if not to be dumped */
! if (!prsinfo->dobj.dump || dataOnly)
return;

q = createPQExpBuffer();
--- 8506,8512 ----
PQExpBuffer delq;

/* Skip if not to be dumped */
! if (!prsinfo->dobj.dump || !WANT_SCHEMA_BEFORE_DATA(dumpObjFlags))
return;

q = createPQExpBuffer();
***************
*** 8522,8528 ****
char *tmplname;

/* Skip if not to be dumped */
! if (!dictinfo->dobj.dump || dataOnly)
return;

q = createPQExpBuffer();
--- 8575,8581 ----
char *tmplname;

/* Skip if not to be dumped */
! if (!dictinfo->dobj.dump || !WANT_SCHEMA_BEFORE_DATA(dumpObjFlags))
return;

q = createPQExpBuffer();
***************
*** 8607,8613 ****
PQExpBuffer delq;

/* Skip if not to be dumped */
! if (!tmplinfo->dobj.dump || dataOnly)
return;

q = createPQExpBuffer();
--- 8660,8666 ----
PQExpBuffer delq;

/* Skip if not to be dumped */
! if (!tmplinfo->dobj.dump || !WANT_SCHEMA_BEFORE_DATA(dumpObjFlags))
return;

q = createPQExpBuffer();
***************
*** 8673,8679 ****
int i_dictname;

/* Skip if not to be dumped */
! if (!cfginfo->dobj.dump || dataOnly)
return;

q = createPQExpBuffer();
--- 8726,8732 ----
int i_dictname;

/* Skip if not to be dumped */
! if (!cfginfo->dobj.dump || !WANT_SCHEMA_BEFORE_DATA(dumpObjFlags))
return;

q = createPQExpBuffer();
***************
*** 8809,8815 ****
PQExpBuffer sql;

/* Do nothing if ACL dump is not enabled */
! if (dataOnly || aclsSkip)
return;

sql = createPQExpBuffer();
--- 8862,8868 ----
PQExpBuffer sql;

/* Do nothing if ACL dump is not enabled */
! if (!WANT_SCHEMA_BEFORE_DATA(dumpObjFlags) || aclsSkip)
return;

sql = createPQExpBuffer();
***************
*** 8846,8852 ****
{
if (tbinfo->relkind == RELKIND_SEQUENCE)
dumpSequence(fout, tbinfo);
! else if (!dataOnly)
dumpTableSchema(fout, tbinfo);

/* Handle the ACL here */
--- 8899,8905 ----
{
if (tbinfo->relkind == RELKIND_SEQUENCE)
dumpSequence(fout, tbinfo);
! else if (WANT_SCHEMA_BEFORE_DATA(dumpObjFlags))
dumpTableSchema(fout, tbinfo);

/* Handle the ACL here */
***************
*** 9153,9159 ****
PQExpBuffer delq;

/* Only print it if "separate" mode is selected */
! if (!tbinfo->dobj.dump || !adinfo->separate || dataOnly)
return;

/* Don't print inherited defaults, either */
--- 9206,9212 ----
PQExpBuffer delq;

/* Only print it if "separate" mode is selected */
! if (!tbinfo->dobj.dump || !adinfo->separate || !WANT_SCHEMA_BEFORE_DATA(dumpObjFlags))
return;

/* Don't print inherited defaults, either */
***************
*** 9238,9244 ****
PQExpBuffer q;
PQExpBuffer delq;

! if (dataOnly)
return;

q = createPQExpBuffer();
--- 9291,9297 ----
PQExpBuffer q;
PQExpBuffer delq;

! if (!WANT_SCHEMA_AFTER_DATA(dumpObjFlags))
return;

q = createPQExpBuffer();
***************
*** 9307,9313 ****
PQExpBuffer delq;

/* Skip if not to be dumped */
! if (!coninfo->dobj.dump || dataOnly)
return;

q = createPQExpBuffer();
--- 9360,9366 ----
PQExpBuffer delq;

/* Skip if not to be dumped */
! if (!coninfo->dobj.dump || !WANT_SCHEMA_AFTER_DATA(dumpObjFlags))
return;

q = createPQExpBuffer();
***************
*** 9700,9706 ****
*
* Add a 'SETVAL(seq, last_val, iscalled)' as part of a "data" dump.
*/
! if (!dataOnly)
{
resetPQExpBuffer(delqry);

--- 9753,9759 ----
*
* Add a 'SETVAL(seq, last_val, iscalled)' as part of a "data" dump.
*/
! if (WANT_SCHEMA_BEFORE_DATA(dumpObjFlags))
{
resetPQExpBuffer(delqry);

***************
*** 9803,9809 ****
tbinfo->dobj.catId, 0, tbinfo->dobj.dumpId);
}

! if (!schemaOnly)
{
resetPQExpBuffer(query);
appendPQExpBuffer(query, "SELECT pg_catalog.setval(");
--- 9856,9862 ----
tbinfo->dobj.catId, 0, tbinfo->dobj.dumpId);
}

! if (WANT_DATA(dumpObjFlags))
{
resetPQExpBuffer(query);
appendPQExpBuffer(query, "SELECT pg_catalog.setval(");
***************
*** 9836,9842 ****
const char *p;
int findx;

! if (dataOnly)
return;

query = createPQExpBuffer();
--- 9889,9895 ----
const char *p;
int findx;

! if (!WANT_SCHEMA_AFTER_DATA(dumpObjFlags))
return;

query = createPQExpBuffer();
***************
*** 10044,10050 ****
PGresult *res;

/* Skip if not to be dumped */
! if (!rinfo->dobj.dump || dataOnly)
return;

/*
--- 10097,10103 ----
PGresult *res;

/* Skip if not to be dumped */
! if (!rinfo->dobj.dump || !WANT_SCHEMA_AFTER_DATA(dumpObjFlags))
return;

/*
Index: src/bin/pg_dump/pg_restore.c
===================================================================
RCS file: /home/sriggs/pg/REPOSITORY/pgsql/src/bin/pg_dump/pg_restore.c,v
retrieving revision 1.88
diff -c -r1.88 pg_restore.c
*** src/bin/pg_dump/pg_restore.c 13 Apr 2008 03:49:22 -0000 1.88
--- src/bin/pg_dump/pg_restore.c 24 Jul 2008 07:30:19 -0000
***************
*** 78,83 ****
--- 78,90 ----
static int no_data_for_failed_tables = 0;
static int outputNoTablespaces = 0;
static int use_setsessauth = 0;
+ bool dataOnly = false;
+ bool schemaOnly = false;
+
+ static int schemaBeforeData;
+ static int schemaAfterData;
+
+ int dumpObjFlags;

struct option cmdopts[] = {
{"clean", 0, NULL, 'c'},
***************
*** 114,119 ****
--- 121,128 ----
{"disable-triggers", no_argument, &disable_triggers, 1},
{"no-data-for-failed-tables", no_argument, &no_data_for_failed_tables, 1},
{"no-tablespaces", no_argument, &outputNoTablespaces, 1},
+ {"schema-before-data", no_argument, &schemaBeforeData, 1},
+ {"schema-after-data", no_argument, &schemaAfterData, 1},
{"use-set-session-authorization", no_argument, &use_setsessauth, 1},

{NULL, 0, NULL, 0}
***************
*** 145,151 ****
switch (c)
{
case 'a': /* Dump data only */
! opts->dataOnly = 1;
break;
case 'c': /* clean (i.e., drop) schema prior to create */
opts->dropSchema = 1;
--- 154,160 ----
switch (c)
{
case 'a': /* Dump data only */
! dataOnly = true;
break;
case 'c': /* clean (i.e., drop) schema prior to create */
opts->dropSchema = 1;
***************
*** 213,219 ****
opts->triggerNames = strdup(optarg);
break;
case 's': /* dump schema only */
! opts->schemaOnly = 1;
break;
case 'S': /* Superuser username */
if (strlen(optarg) != 0)
--- 222,228 ----
opts->triggerNames = strdup(optarg);
break;
case 's': /* dump schema only */
! schemaOnly = true;
break;
case 'S': /* Superuser username */
if (strlen(optarg) != 0)
***************
*** 249,254 ****
--- 258,267 ----
no_data_for_failed_tables = 1;
else if (strcmp(optarg, "no-tablespaces") == 0)
outputNoTablespaces = 1;
+ else if (strcmp(optarg, "schema-before-data") == 0)
+ schemaBeforeData = 1;
+ else if (strcmp(optarg, "schema-after-data") == 0)
+ schemaAfterData = 1;
else if (strcmp(optarg, "use-set-session-authorization") == 0)
use_setsessauth = 1;
else
***************
*** 295,300 ****
--- 308,354 ----
opts->useDB = 1;
}

+ /*
+ * Look for conflicting options relating to object groupings
+ */
+ if (schemaOnly && dataOnly)
+ {
+ write_msg(NULL, "options %s and %s cannot be used together\n",
+ "-s/--schema-only", "-a/--data-only");
+ exit(1);
+ }
+ else if ((schemaOnly || dataOnly) &&
+ (schemaBeforeData || schemaAfterData))
+ {
+ write_msg(NULL, "options %s and %s cannot be used together\n",
+ schemaOnly ? "-s/--schema-only" : "-a/--data-only",
+ schemaBeforeData ? "--schema-before-data" : "--schema-after-data");
+ exit(1);
+ }
+ else if (schemaBeforeData && schemaAfterData)
+ {
+ write_msg(NULL, "options %s and %s cannot be used together\n",
+ "--schema-before-data", "--schema-after-data");
+ exit(1);
+ }
+
+ /*
+ * Decide which of the object groups we will dump
+ */
+ dumpObjFlags = REQ_ALL;
+
+ if (dataOnly)
+ dumpObjFlags = REQ_DATA;
+
+ if (schemaBeforeData)
+ dumpObjFlags = REQ_SCHEMA_BEFORE_DATA;
+
+ if (schemaAfterData)
+ dumpObjFlags = REQ_SCHEMA_AFTER_DATA;
+
+ if (schemaOnly)
+ dumpObjFlags = (REQ_SCHEMA_BEFORE_DATA | REQ_SCHEMA_AFTER_DATA);
+
opts->disable_triggers = disable_triggers;
opts->noDataForFailedTables = no_data_for_failed_tables;
opts->noTablespace = outputNoTablespaces;
***************
*** 405,410 ****
--- 459,466 ----
" do not restore data of tables that could not be\n"
" created\n"));
printf(_(" --no-tablespaces do not dump tablespace assignments\n"));
+ printf(_(" --schema-before-data dump only the part of schema before table data\n"));
+ printf(_(" --schema-after-data dump only the part of schema after table data\n"));
printf(_(" --use-set-session-authorization\n"
" use SESSION AUTHORIZATION commands instead of\n"
" OWNER TO commands\n"));
On Thu, 2008-07-24 at 03:54 -0400, Tom Lane wrote:
> Simon Riggs <simon@2ndquadrant.com> writes:
> > [80k patch]
>
> Surely there is a whole lot of unintended noise in this patch?
> I certainly don't believe that you meant to change keywords.c
> for instance.

Removed, thanks.

Unrelated to this patch, it seems I have some issues with my repository,
judging by this and another issue reported by Martin Zaun.

--
Simon Riggs www.2ndQuadrant.com
PostgreSQL Training, Services and Support
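
For anyone trying the patch, here is a minimal sketch of how the new
switches are meant to be combined into a three-step dump and reload
(assuming the patch applies as posted; the database and file names are
only illustrative):

# schema objects that must exist before the data (tables, types, functions, ...)
pg_dump --schema-before-data mydb > before.sql

# the table data itself
pg_dump --data-only mydb > data.sql

# schema objects that must follow the data (indexes, constraints, triggers, rules)
pg_dump --schema-after-data mydb > after.sql

# reload in the same order
psql targetdb -f before.sql
psql targetdb -f data.sql
psql targetdb -f after.sql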

Re: [GENERAL] Full text index without accents

Finally, I created a function like this:

CREATE OR REPLACE FUNCTION nonsensible (text) RETURNS text AS $$
DECLARE
    var1 varchar;
BEGIN
    var1 := replace($1, 'á', 'a');
    var1 := replace(var1, 'é', 'e');
    var1 := replace(var1, 'í', 'i');
    var1 := replace(var1, 'ó', 'o');
    var1 := replace(var1, 'ú', 'u');
    var1 := replace(var1, 'Á', 'A');
    var1 := replace(var1, 'É', 'E');
    var1 := replace(var1, 'Í', 'I');
    var1 := replace(var1, 'Ó', 'O');
    var1 := replace(var1, 'Ú', 'U');
    RETURN var1;
END;
$$ LANGUAGE plpgsql IMMUTABLE;

Then create two text-search indexes, one for accent-sensitive queries and
another for accent-insensitive ones:

CREATE INDEX textindex ON document USING
gin(to_tsvector('spanish',text));

CREATE INDEX textindexn ON document USING
gin(to_tsvector('spanish',nonsensible(text)));

And then make a query accent-sensitive or accent-insensitive by doing:

SELECT id FROM document WHERE to_tsvector('spanish',text) @@
to_tsquery('spanish','word_with_accent');

or:

SELECT id FROM document WHERE to_tsvector('spanish',nonsensible(text))
@@ to_tsquery('spanish',nonsensible('word_with_accent'));
respectively.
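
As an aside, the same character mapping could be written more compactly
with translate(); this is only a sketch (the name nonsensible2 is made up
here), but it is IMMUTABLE in the same way, so it could back an index too:

CREATE OR REPLACE FUNCTION nonsensible2(text) RETURNS text AS $$
    SELECT translate($1, 'áéíóúÁÉÍÓÚ', 'aeiouAEIOU');
$$ LANGUAGE sql IMMUTABLE;

-- used just like the function above, e.g.:
-- CREATE INDEX textindexn2 ON document USING
--     gin(to_tsvector('spanish', nonsensible2(text)));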

I think PostgreSQL uses whichever index matches the expression in the
query, so both indexes get used as necessary. I believe I remember reading
something about this in the documentation.
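
If in doubt, EXPLAIN shows which of the two indexes the planner actually
picks for a given form of the query, for example (output omitted here, and
assuming the indexes above are in place):

EXPLAIN
SELECT id FROM document
 WHERE to_tsvector('spanish', nonsensible(text))
       @@ to_tsquery('spanish', nonsensible('word_with_accent'));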

Thank you very much,

Mario Barcala


> Here is an example
>
> CREATE FUNCTION dropatsymbol(text) RETURNS text
> AS 'select replace($1, ''@'', '' '');'
> LANGUAGE SQL;
>
> arxiv=# select to_tsvector('english',dropatsymbol('oleg@sai.msu.su'));
> to_tsvector
> -------------------------
> 'oleg':1 'sai.msu.su':2

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [PERFORM] how to fix problem then when two queries run at the same time, it takes longer to complete then if run in sequence

Miernik wrote:
> Scott Marlowe <scott.marlowe@gmail.com> wrote:
>> won't ever run into. Why such an incredibly limited virtual machine?
>> Even my cell phone came with 256 meg built in two years ago.
>
> Because I don't want to spend too much money on the machine rent, and a
> 48 MB RAM Xen is about all I can get with a budget of 100$ per year.
[snip]
> My DB has several tables with like 100000 to 1 million rows each,
> running sorts, joins, updates etc on them several times per hour.
> About 10000 inserts and selects each hour, the whole DB takes 1.5 GB on
> disk now, 500 MB dumped.
>
> If I could shorten the time it takes to run each query by a factor of 3
> that's something worth going for.

Firstly, congratulations on providing quite a large database on such a
limited system. I think most people on such plans have tables with a few
hundred to a thousand rows in them, not a million. Many of the people
here are used to budgets a hundred or a thousand times yours, so bear
in mind you're as much an expert as they are :-)

If you're going to get the most out of this, you'll want to set up your
own Xen virtual machine on a local system so you can test changes.
You'll be trading your time against the budget, so bear that in mind.

If you know other small organisations locally in a similar position
perhaps consider sharing a physical machine and managing Xen yourselves
- that can be cheaper.

Changes

First step is to make sure you're running version 8.3 - there are some
useful improvements there that reduce the size of shorter text fields,
as well as the synchronised scans Albert mentions below.

Second step is to turn off any other processes you don't need. Tune
down the number of consoles, Apache processes, mail processes etc.
Normally this isn't worth the trouble, but getting back another couple of
MB is worthwhile in your case. It might also be worth turning off
autovacuum and running a manual VACUUM FULL overnight if your database is
mostly reads.
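
For example, something along these lines would do it (the schedule and
the autovacuum setting are only a sketch; adjust them to your workload):

# postgresql.conf: disable the background autovacuum worker
autovacuum = off

# crontab: full vacuum and analyze of all databases at 03:00 every night
0 3 * * * vacuumdb --all --full --analyze --quiet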

Finally, I think it's worth looking at pgpool or pgbouncer (as Alvaro
said) and setting them to allow only one connection in the pool. I know
that pgbouncer offers per-transaction connection sharing, which will make
this more practical. Even so, it will help if your application can
co-operate by closing the connection as soon as possible.
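
A rough pgbouncer configuration sketch for that setup (names, the port and
the auth settings are only illustrative; check the pgbouncer docs for the
version you install):

; pgbouncer.ini (sketch)
[databases]
; route "mydb" to the local PostgreSQL server
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = trust
; per-transaction connection sharing
pool_mode = transaction
; only one server connection in the pool
default_pool_size = 1
max_client_conn = 20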

--
Richard Huxton
Archonet Ltd

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

Re: [HACKERS] issues/experience with building postgres on Windows

Tom Lane wrote:
> Andrew Dunstan <andrew@dunslane.net> writes:
>> Martin Zaun wrote:
>>> - issues locating the 14 required software packages:
>>> - no luck getting Bison 1.875 or 2.2 Windows binaries
>
>> bison 1.875 is available here:
>> http://sourceforge.net/project/showfiles.php?group_id=23617&package_id=22822
>
> To the best of my knowledge, any bison version >= 1.875 works
> fine; you need not insist on finding exactly those two releases.

On my machine, using GnuWin32's Bison 2.1 resulted in compile errors,
while Cygwin's 2.2 works fine. I saw the details being discussed on
-hackers some time ago.

Thanks,
Martin

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers