Sunday, June 1, 2008

Re: [PATCHES] Feature: give pg_dump a WHERE clause expression

daveg wrote:

>> ISTM this would be better off waiting until we turn large parts of
>> pg_dump into a library, as has been often discussed, at which point it
>> should be relatively simple to write a custom client to do what the OP
>> wants. I agree that it does not at all belong in pg_dump.
>>
>
> I can't imagine many of my clients ever writing another C program or even
> being willing to pay me to do so. While modularizing pg_dump is a fine idea,
> I don't think it addresses the same set of use cases and users as this
> proposal.
>
>
>

It's not clear to me that your use case is very compelling. Does your
foreign database not support import via CSV or XML? Postgres can now
produce both of these for any arbitrary query.
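
For instance, that CSV route can be scripted in a few lines. A minimal
sketch using psycopg2's copy_expert (the connection string, table and
query are placeholders; query_to_xml() would be the analogous XML route
in 8.3):

import psycopg2

conn = psycopg2.connect('host=localhost dbname=dbname user=user')
cur = conn.cursor()
f = open('events.csv', 'w')
# COPY (SELECT ...) TO STDOUT accepts any query, not just whole tables
cur.copy_expert(
    "COPY (SELECT id, ts FROM events WHERE ts > '2008-05-01') "
    "TO STDOUT WITH CSV HEADER", f)
f.close()
cur.close()
conn.close()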

cheers

andrew

--
Sent via pgsql-patches mailing list (pgsql-patches@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-patches

Re: [GENERAL] Multiple result sets

> >> I need 5 queries, fired from the same AJAX request to a Python web
> >> application, to see the same database snapshot. The driver is
> >> psycopg2.
> >>
> >> Since PostgreSQL 8.2 functions can't return multiple result sets,
> >> what would be the best approach?
> >>
> > You want to set your transaction isolation to "Serializable".
> > Then execute your 5 queries via the same connection, and the same
> > transaction.
> >
> > You can do that with this command:
> >
> > BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
>
> I'm not sure I got it. You mean like this?:
>
> import psycopg2 as db
> dsn = 'host=localhost dbname=dbname user=user password=passwd'
> connection = db.connect(dsn)
> cursor = connection.cursor()
>
> cursor.execute('BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;');
> cursor.execute(query_1, (param1,))
> rs1 = cursor.fetchall() # execute() returns None; fetch the rows explicitly
> cursor.execute(query_2, (param2,))
> rs2 = cursor.fetchall()
> cursor.execute('commit;');
>
> cursor.close()
> connection.close()
>
> I tested it and it raises no exception. I just don't understand if a
> transaction persists between execute() calls.
>

I am not familiar with the Python library, but that looks correct to me.
You can always test it by adding a sleep between your two queries and
modifying the database from a console connection during the sleep.

Note that I'm assuming your 5 queries are all read-only selects.
If you're modifying data during your queries, and another concurrent
database connection modifies the same data during your transaction,
the later modifications will fail under serializable isolation.
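
A minimal sketch of that sleep test (the table name and pause length
are placeholders):

import time
import psycopg2

conn = psycopg2.connect('host=localhost dbname=dbname user=user')
cur = conn.cursor()
cur.execute('BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;')
cur.execute('SELECT count(*) FROM some_table;')
before = cur.fetchone()
time.sleep(30)          # modify some_table from a psql console now
cur.execute('SELECT count(*) FROM some_table;')
after = cur.fetchone()  # should equal `before` if the snapshot held
cur.execute('COMMIT;')
conn.close()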


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [PATCHES] Feature: give pg_dump a WHERE clause expression

On Sun, 2008-06-01 at 15:47 -0400, Tom Lane wrote:
> Davy Durham <pubaddr5@davyandbeth.com> writes:
> > So, if this patch is not acceptable as-is, what would you feel about
> > this:
> > I could enhance the -t/--table=NAME option to accept more than a
> > simple NAME. Rather it could accept something in the form:
>
> > --table=<table_name>:<where-clause expression>
>
> Well, that would at least address the complaint that it doesn't scale
> to multiple tables, but the whole thing still seems like a frammish
> that will never see enough use to justify maintaining it.
>

The code is not all that much to maintain as it is, and making it an
addition to an existing parameter instead of a new one may barely
increase the code size at all.

BTW- I looked it up, and mysqldump supports an option like mine, but it
too is global for all tables and cannot be specified per table, AFAICT.

> (BTW, what will you do with a table whose name contains a colon?)
>

I thought about that, but didn't know whether table names could contain
a colon; I see now that it is possible by enclosing the table name in
double quotes. I suppose the user could escape the colon, as I believe
they already must do if a table name contains '*' or '?'. Is there
another character that would be more appropriate?


Another option I just thought about was to leave the -w/--where flag in
place, but have it apply to all subsequent -t/--table flags.

So you could do something like:
pg_dump -w 'expr1' -t tab1 -t tab2 -w 'expr2' -t tab3

So expr1 filters tab1 and tab2, and expr2 filters tab3.

This should be a simple modification to the existing patch to make the
where clause tracked per table rather than merely a global variable.

However, this makes the order of the arguments given to pg_dump
significant, which you may object to. But then again, if the order of
the tables in the dump file matters, the -t/--tables flags already
determine what order the tables are dumped in. (In pointing this out, I
mean that argument order may already matter in some circumstances.)

This also solves the ':' syntax problem you mentioned above.


> regards, tom lane
>


--
Sent via pgsql-patches mailing list (pgsql-patches@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-patches

Re: [PATCHES] Feature: give pg_dump a WHERE clause expression

On Sun, Jun 01, 2008 at 04:13:34PM -0400, Andrew Dunstan wrote:
>
>
> Tom Lane wrote:
> >Davy Durham <pubaddr5@davyandbeth.com> writes:
> >
> >>So, if this patch is not acceptable as-is, what would you feel about
> >>this:
> >> I could enhance the -t/--table=NAME option to accept more than a
> >> simple NAME. Rather it could accept something in the form:
> >>
> >
> >
> >> --table=<table_name>:<where-clause expression>
> >>
> >
> >Well, that would at least address the complaint that it doesn't scale
> >to multiple tables, but the whole thing still seems like a frammish
> >that will never see enough use to justify maintaining it.
> >
> >(BTW, what will you do with a table whose name contains a colon?)
> >
> >
> >
>
> ISTM this would be better off waiting until we turn large parts of
> pg_dump into a library, as has been often discussed, at which point it
> should be relatively simple to write a custom client to do what the OP
> wants. I agree that it does not at all belong in pg_dump.

I can't imagine many of my clients ever writing another C program or even
being willing to pay me to do so. While modularizing pg_dump is a fine idea,
I don't think it addresses the same set of use cases and users as this
proposal.

-dg

--
David Gould daveg@sonic.net 510 536 1443 510 282 0869
If simplicity worked, the world would be overrun with insects.

--
Sent via pgsql-patches mailing list (pgsql-patches@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-patches

Re: [HACKERS] Core team statement on replication in PostgreSQL

Aidan Van Dyk wrote:
> The whole single-threaded WAL replay problem is going to rear its ugly
> head here too, and mean that a slave *won't* be able to keep up with a
> busy master if it's actually trying to apply all the changes in
> real-time.
Is there a reason to commit at the same points that the master
committed? Wouldn't relaxing that mean that at least you would get
'big' commits and some economy of scale? It might not be too bad. All I
can say is that Sybase warm standby is useful, even though the rep for
an update that changes a hundred rows is a hundred updates keyed on
primary key, which is pretty sucky in terms of T-SQL performance.


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [PATCHES] Feature: give pg_dump a WHERE clause expression

Tom Lane wrote:
> Davy Durham <pubaddr5@davyandbeth.com> writes:
>
>> So, if this patch is not acceptable as-is, what would you feel about
>> this:
>> I could enhance the -t/--table=NAME option to accept more than a
>> simple NAME. Rather it could accept something in the form:
>>
>
>
>> --table=<table_name>:<where-clause expression>
>>
>
> Well, that would at least address the complaint that it doesn't scale
> to multiple tables, but the whole thing still seems like a frammish
> that will never see enough use to justify maintaining it.
>
> (BTW, what will you do with a table whose name contains a colon?)
>
>
>

ISTM this would be better off waiting until we turn large parts of
pg_dump into a library, as has been often discussed, at which point it
should be relatively simple to write a custom client to do what the OP
wants. I agree that it does not at all belong in pg_dump.

cheers

andrew

--
Sent via pgsql-patches mailing list (pgsql-patches@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-patches

Re: [GENERAL] Multiple result sets

2008/6/1 Adam Rich <adam.r@sbcglobal.net>:
>
>> I need 5 queries, fired from the same AJAX request to a Python web
>> application, to see the same database snapshot. The driver is
>> psycopg2.
>>
>> Since PostgreSQL 8.2 functions can't return multiple result sets,
>> what would be the best approach?
>>
>
> You want to set your transaction isolation to "Serializable".
> Then execute your 5 queries via the same connection, and the same
> transaction.
>
> You can do that with this command:
>
> BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;

I'm not sure I got it. You mean like this?:

import psycopg2 as db
dsn = 'host=localhost dbname=dbname user=user password=passwd'
connection = db.connect(dsn)
cursor = connection.cursor()

cursor.execute('BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;');
cursor.execute(query_1, (param1,))
rs1 = cursor.fetchall() # execute() returns None; fetch the rows explicitly
cursor.execute(query_2, (param2,))
rs2 = cursor.fetchall()
cursor.execute('commit;');

cursor.close()
connection.close()

I tested it and it raises no exception. I just don't understand if a
transaction persists between execute() calls.

Regards, Clodoaldo

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [HACKERS] Core team statement on replication in PostgreSQL

David Fetter wrote:
> This part is a deal-killer. It's a giant up-hill slog to sell warm
> standby to those in charge of making resources available because the
> warm standby machine consumes SA time, bandwidth, power, rack space,
> etc., but provides no tangible benefit, and this feature would have
> exactly the same problem.
>
> IMHO, without the ability to do read-only queries on slaves, it's not
> worth doing this feature at all.
>
That's not something that squares with my experience *at all*, which
admittedly is entirely in investment banks. Business continuity is
king, and in some places the warm standby rep from the database vendor
is trusted more than block-level rep from the SAN vendor (though that
may be changing to some extent in favour of the SAN).

James


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] replication hooks

Marko Kreen wrote:
> There is this tiny matter of replicating schema changes asynchronously,
> but I suspect nobody actually cares. Few random points about that:
>
I'm not sure I follow you - the Sybase 'warm standby' replication of
everything is really useful for business continuity. The per-table rep
is more effective for publishing reference data, but is painful to
maintain.

Not having something that automagically reps a complete copy, including
DDL (except for temp tables), is a major weakness IMO.

James


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [PATCHES] Feature: give pg_dump a WHERE clause expression

Davy Durham <pubaddr5@davyandbeth.com> writes:
> So, if this patch is not acceptable as-is, what would you feel about
> this:
> I could enhance the -t/--table=NAME option to accept more than a
> simple NAME. Rather it could accept something in the form:

> --table=<table_name>:<where-clause expression>

Well, that would at least address the complaint that it doesn't scale
to multiple tables, but the whole thing still seems like a frammish
that will never see enough use to justify maintaining it.

(BTW, what will you do with a table whose name contains a colon?)

regards, tom lane

--
Sent via pgsql-patches mailing list (pgsql-patches@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-patches

Re: [PATCHES] [BUGS] BUG #4203: perform dblink() in begin/exception returns wrong SQLSTATE code

Joe Conway <mail@joeconway.com> writes:
> Here is my proposed patch -- as suggested for cvs tip only.

A few comments:

Don't use errstart/errfinish directly. A reasonable way to deal with
the type of situation you have here is

ereport(ERROR,
(errcode(...),
errmsg(...),
det_msg ? errdetail("%s", det_msg) : 0,
hint_msg ? errhint("%s", hint_msg) : 0,
...));

You can't expect the result of PQresultErrorField to still be valid
after you've PQclear'd the PGresult. I think you'll need to pstrdup
the non-null results first. Or maybe use a PG_TRY block to free the
PGresult on the way out after the error escape ... but pstrdup is
probably cleaner.

This code doesn't cope with the possibility that no SQLSTATE
is available (a distinct possibility for libpq-detected errors).
You'll need to use some generic error code in that case. I'm tempted
to suggest ERRCODE_CONNECTION_FAILURE, on the assumption that if it's
libpq-detected then it's a connection problem.

It would probably be useful to show the name of the dblink connection
in the context.

I'm thinking that you are getting well past what is reasonable to
put in a macro. Time to use an out-of-line function.

Don't use "unable to..." --- this is against the message style guide.
"could not" is approved style. Also note the expectation that context
entries be complete sentences.

> I haven't been around enough lately to be sure I understand the process
> these days. Should I be posting this to the wiki and waiting for the
> next commit fest, or just apply myself in a day or two assuming no
> objections?

No, you can apply it yourself when you feel it's ready. The wiki queue
is just to keep track of stuff that is submitted by non-committers or
that a committer wants extra review of.

regards, tom lane

--
Sent via pgsql-patches mailing list (pgsql-patches@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-patches

[pgsql-advocacy] BoF at OSCON 2008: Wednesday, July 23, 7:30pm

BoF was just accepted and is scheduled for 7:30pm on Wednesday, July 23.

-selena

--
Selena Deckelmann
United States PostgreSQL Association - http://www.postgresql.us
PDXPUG - http://pugs.postgresql.org/pdx
Me - http://www.chesnok.com/daily

--
Sent via pgsql-advocacy mailing list (pgsql-advocacy@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-advocacy

Re: [PATCHES] Feature: give pg_dump a WHERE clause expression

On Sun, 2008-06-01 at 10:43 -0400, Tom Lane wrote:
> Davy Durham <pubaddr5@davyandbeth.com> writes:
> > I have added a new parameter, -w/--where=EXPR
> > This lets you specify an expression that will be used in a WHERE
> > clause when the data is dumped.
>
> This seems pretty poorly thought out. It can hardly work in a dump
> of more than one table, which means that there's not any real reason
> to use pg_dump at all. Just do a COPY (SELECT ...) TO somefile.
>
> regards, tom lane
>

Well, my primary reason for writing the patch was to produce a standard
SQL file using INSERT statements, in order to load some of a table's
data into a database other than PostgreSQL that does not support the
COPY statement.

I'll admit that a single where clause would often not be applicable
across all tables in a database, but when pg_dump is told specific
tables to dump (a nice existing feature of pg_dump for doing something
specialized other than a simple entire-database backup), it can be
useful.

My particular case is that I have several tables that are simple event
logs. Each table has a timestamp column. I'm periodically bringing
these tables into sync on another database, and I only want to pull
rows newer than the last sync. So, a where-clause of 'ts > $last_sync'
works for me. However, I'm sure there are other uses too.
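
(Without the patch, the nearest workaround is to generate the INSERTs
by hand. A rough sketch using psycopg2's mogrify, with the table,
columns and stored sync timestamp as placeholders:)

import psycopg2

conn = psycopg2.connect('host=localhost dbname=dbname user=user')
cur = conn.cursor()
last_sync = '2008-06-01 00:00:00'  # load from wherever you track it
cur.execute('SELECT id, ts, msg FROM event_log WHERE ts > %s',
            (last_sync,))
out = open('sync.sql', 'w')
for row in cur.fetchall():
    # mogrify binds the values as properly quoted SQL literals
    out.write(cur.mogrify(
        'INSERT INTO event_log (id, ts, msg) VALUES (%s, %s, %s);\n',
        row))
out.close()
conn.close()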


== Thinking Further ==

Beyond serving my own needs, I'm trying to generically extend the
general idea that pg_dump already supports:
1) pg_dump can be made to dump an entire database
2) pg_dump can be made to dump only requested tables
3) [my addition] pg_dump can be made to dump only requested rows
from requested tables

However, it's not SO generic, in that the where clause applies to all
tables.

So, if this patch is not acceptable as-is, what would you feel about
this:
I could enhance the -t/--table=NAME option to accept more than a
simple NAME. Rather it could accept something in the form:

--table=<table_name>:<where-clause expression>

For example, pg_dump --table='foo:col1 > 10 AND f2 < 14'

Currently, the user can specify -t/--table multiple times to have more
than one table dumped, or pass a pattern to a single -t option to
request multiple tables.

This way, a user could specify a WHERE clause per table he has
requested to dump. Granted, the WHERE clause may then apply to
multiple tables if a pattern was used, but that may very well be
desirable to the user.

Unless you disagree, this is a more generic solution (than what my patch
contains) to allowing the user of pg_dump to further refine what they
wish to dump.


Thoughts?

Thanks for the feedback


--
Sent via pgsql-patches mailing list (pgsql-patches@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-patches

Re: [HACKERS] Where can I find the doxyfile?

Xin Wang wrote:
> Hi,
> I don't know where I can find the doxyfile that generates the
> "doxygen.postgresql.org" web site. I have found the doxygen-generated
> source quite helpful when reading the code. However, I want to
> generate an off-line copy of the doxygen docs myself, and I can't find
> the doxyfile in the latest source release.

I think it is a good idea. Stefan, what about putting it on the wiki?

Zdenek

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[PATCHES] Re: [BUGS] BUG #4203: perform dblink() in begin/exception returns wrong SQLSTATE code

Index: dblink.c
===================================================================
RCS file: /opt/src/cvs/pgsql/contrib/dblink/dblink.c,v
retrieving revision 1.73
diff -c -r1.73 dblink.c
*** dblink.c 4 Apr 2008 17:02:56 -0000 1.73
--- dblink.c 1 Jun 2008 18:50:04 -0000
***************
*** 135,158 ****

#define DBLINK_RES_ERROR(p2) \
do { \
! msg = pstrdup(PQerrorMessage(conn)); \
if (res) \
PQclear(res); \
! ereport(ERROR, \
! (errcode(ERRCODE_SYNTAX_ERROR), \
! errmsg("%s", p2), \
! errdetail("%s", msg))); \
} while (0)

#define DBLINK_RES_ERROR_AS_NOTICE(p2) \
do { \
! msg = pstrdup(PQerrorMessage(conn)); \
if (res) \
PQclear(res); \
! ereport(NOTICE, \
! (errcode(ERRCODE_SYNTAX_ERROR), \
! errmsg("%s", p2), \
! errdetail("%s", msg))); \
} while (0)

#define DBLINK_CONN_NOT_AVAIL \
--- 135,182 ----

#define DBLINK_RES_ERROR(p2) \
do { \
! char *sqlstate = PQresultErrorField(res, PG_DIAG_SQLSTATE); \
! char *pg_diag_message_primary = PQresultErrorField(res, PG_DIAG_MESSAGE_PRIMARY); \
! char *pg_diag_message_detail = PQresultErrorField(res, PG_DIAG_MESSAGE_DETAIL); \
! char *pg_diag_message_hint = PQresultErrorField(res, PG_DIAG_MESSAGE_HINT); \
! char *pg_diag_context = PQresultErrorField(res, PG_DIAG_CONTEXT); \
if (res) \
PQclear(res); \
! errstart(ERROR, __FILE__, __LINE__, PG_FUNCNAME_MACRO); \
! errcode(MAKE_SQLSTATE(sqlstate[0],sqlstate[1],sqlstate[2],sqlstate[3],sqlstate[4])); \
! errmsg("%s", pg_diag_message_primary); \
! if (pg_diag_message_detail) \
! errdetail("%s", pg_diag_message_detail); \
! if (pg_diag_message_hint) \
! errhint("%s", pg_diag_message_hint); \
! if (pg_diag_context) \
! errcontext("%s", pg_diag_context); \
! if (p2) \
! errcontext("error on dblink connection: %s", p2); \
! errfinish(0); \
} while (0)

#define DBLINK_RES_ERROR_AS_NOTICE(p2) \
do { \
! char *sqlstate = PQresultErrorField(res, PG_DIAG_SQLSTATE); \
! char *pg_diag_message_primary = PQresultErrorField(res, PG_DIAG_MESSAGE_PRIMARY); \
! char *pg_diag_message_detail = PQresultErrorField(res, PG_DIAG_MESSAGE_DETAIL); \
! char *pg_diag_message_hint = PQresultErrorField(res, PG_DIAG_MESSAGE_HINT); \
! char *pg_diag_context = PQresultErrorField(res, PG_DIAG_CONTEXT); \
if (res) \
PQclear(res); \
! errstart(NOTICE, __FILE__, __LINE__, PG_FUNCNAME_MACRO); \
! errcode(MAKE_SQLSTATE(sqlstate[0],sqlstate[1],sqlstate[2],sqlstate[3],sqlstate[4])); \
! errmsg("%s", pg_diag_message_primary); \
! if (pg_diag_message_detail) \
! errdetail("%s", pg_diag_message_detail); \
! if (pg_diag_message_hint) \
! errhint("%s", pg_diag_message_hint); \
! if (pg_diag_context) \
! errcontext("%s", pg_diag_context); \
! if (p2) \
! errcontext("error on dblink connection: %s", p2); \
! errfinish(0); \
} while (0)

#define DBLINK_CONN_NOT_AVAIL \
***************
*** 397,406 ****
if (!res || PQresultStatus(res) != PGRES_COMMAND_OK)
{
if (fail)
! DBLINK_RES_ERROR("sql error");
else
{
! DBLINK_RES_ERROR_AS_NOTICE("sql error");
PG_RETURN_TEXT_P(cstring_to_text("ERROR"));
}
}
--- 421,430 ----
if (!res || PQresultStatus(res) != PGRES_COMMAND_OK)
{
if (fail)
! DBLINK_RES_ERROR("unable to open cursor");
else
{
! DBLINK_RES_ERROR_AS_NOTICE("unable to open cursor");
PG_RETURN_TEXT_P(cstring_to_text("ERROR"));
}
}
***************
*** 471,480 ****
if (!res || PQresultStatus(res) != PGRES_COMMAND_OK)
{
if (fail)
! DBLINK_RES_ERROR("sql error");
else
{
! DBLINK_RES_ERROR_AS_NOTICE("sql error");
PG_RETURN_TEXT_P(cstring_to_text("ERROR"));
}
}
--- 495,504 ----
if (!res || PQresultStatus(res) != PGRES_COMMAND_OK)
{
if (fail)
! DBLINK_RES_ERROR("unable to close cursor");
else
{
! DBLINK_RES_ERROR_AS_NOTICE("unable to close cursor");
PG_RETURN_TEXT_P(cstring_to_text("ERROR"));
}
}
***************
*** 513,519 ****
int call_cntr;
int max_calls;
AttInMetadata *attinmeta;
- char *msg;
PGresult *res = NULL;
MemoryContext oldcontext;
char *conname = NULL;
--- 537,542 ----
***************
*** 591,600 ****
PQresultStatus(res) != PGRES_TUPLES_OK))
{
if (fail)
! DBLINK_RES_ERROR("sql error");
else
{
! DBLINK_RES_ERROR_AS_NOTICE("sql error");
SRF_RETURN_DONE(funcctx);
}
}
--- 614,623 ----
PQresultStatus(res) != PGRES_TUPLES_OK))
{
if (fail)
! DBLINK_RES_ERROR("unable to fetch from cursor");
else
{
! DBLINK_RES_ERROR_AS_NOTICE("unable to fetch from cursor");
SRF_RETURN_DONE(funcctx);
}
}
***************
*** 847,856 ****
PQresultStatus(res) != PGRES_TUPLES_OK))
{
if (fail)
! DBLINK_RES_ERROR("sql error");
else
{
! DBLINK_RES_ERROR_AS_NOTICE("sql error");
if (freeconn)
PQfinish(conn);
SRF_RETURN_DONE(funcctx);
--- 870,879 ----
PQresultStatus(res) != PGRES_TUPLES_OK))
{
if (fail)
! DBLINK_RES_ERROR("unable to execute query");
else
{
! DBLINK_RES_ERROR_AS_NOTICE("unable to execute query");
if (freeconn)
PQfinish(conn);
SRF_RETURN_DONE(funcctx);
***************
*** 1181,1189 ****
PQresultStatus(res) != PGRES_TUPLES_OK))
{
if (fail)
! DBLINK_RES_ERROR("sql error");
else
! DBLINK_RES_ERROR_AS_NOTICE("sql error");

/* need a tuple descriptor representing one TEXT column */
tupdesc = CreateTemplateTupleDesc(1, false);
--- 1204,1212 ----
PQresultStatus(res) != PGRES_TUPLES_OK))
{
if (fail)
! DBLINK_RES_ERROR("unable to execute command");
else
! DBLINK_RES_ERROR_AS_NOTICE("unable to execute command");

/* need a tuple descriptor representing one TEXT column */
tupdesc = CreateTemplateTupleDesc(1, false);
Index: expected/dblink.out
===================================================================
RCS file: /opt/src/cvs/pgsql/contrib/dblink/expected/dblink.out,v
retrieving revision 1.23
diff -c -r1.23 dblink.out
*** expected/dblink.out 6 Apr 2008 16:54:48 -0000 1.23
--- expected/dblink.out 1 Jun 2008 18:53:06 -0000
***************
*** 125,133 ****

-- open a cursor with bad SQL and fail_on_error set to false
SELECT dblink_open('rmt_foo_cursor','SELECT * FROM foobar',false);
! NOTICE: sql error
! DETAIL: ERROR: relation "foobar" does not exist
!
dblink_open
-------------
ERROR
--- 125,132 ----

-- open a cursor with bad SQL and fail_on_error set to false
SELECT dblink_open('rmt_foo_cursor','SELECT * FROM foobar',false);
! NOTICE: relation "foobar" does not exist
! CONTEXT: error on dblink connection: unable to open cursor
dblink_open
-------------
ERROR
***************
*** 194,202 ****
-- intentionally botch a fetch
SELECT *
FROM dblink_fetch('rmt_foobar_cursor',4,false) AS t(a int, b text, c text[]);
! NOTICE: sql error
! DETAIL: ERROR: cursor "rmt_foobar_cursor" does not exist
!
a | b | c
---+---+---
(0 rows)
--- 193,200 ----
-- intentionally botch a fetch
SELECT *
FROM dblink_fetch('rmt_foobar_cursor',4,false) AS t(a int, b text, c text[]);
! NOTICE: cursor "rmt_foobar_cursor" does not exist
! CONTEXT: error on dblink connection: unable to fetch from cursor
a | b | c
---+---+---
(0 rows)
***************
*** 210,218 ****

-- close the wrong cursor
SELECT dblink_close('rmt_foobar_cursor',false);
! NOTICE: sql error
! DETAIL: ERROR: cursor "rmt_foobar_cursor" does not exist
!
dblink_close
--------------
ERROR
--- 208,215 ----

-- close the wrong cursor
SELECT dblink_close('rmt_foobar_cursor',false);
! NOTICE: cursor "rmt_foobar_cursor" does not exist
! CONTEXT: error on dblink connection: unable to close cursor
dblink_close
--------------
ERROR
***************
*** 221,235 ****
-- should generate 'cursor "rmt_foo_cursor" not found' error
SELECT *
FROM dblink_fetch('rmt_foo_cursor',4) AS t(a int, b text, c text[]);
! ERROR: sql error
! DETAIL: ERROR: cursor "rmt_foo_cursor" does not exist
!
-- this time, 'cursor "rmt_foo_cursor" not found' as a notice
SELECT *
FROM dblink_fetch('rmt_foo_cursor',4,false) AS t(a int, b text, c text[]);
! NOTICE: sql error
! DETAIL: ERROR: cursor "rmt_foo_cursor" does not exist
!
a | b | c
---+---+---
(0 rows)
--- 218,230 ----
-- should generate 'cursor "rmt_foo_cursor" not found' error
SELECT *
FROM dblink_fetch('rmt_foo_cursor',4) AS t(a int, b text, c text[]);
! ERROR: cursor "rmt_foo_cursor" does not exist
! CONTEXT: error on dblink connection: unable to fetch from cursor
-- this time, 'cursor "rmt_foo_cursor" not found' as a notice
SELECT *
FROM dblink_fetch('rmt_foo_cursor',4,false) AS t(a int, b text, c text[]);
! NOTICE: cursor "rmt_foo_cursor" does not exist
! CONTEXT: error on dblink connection: unable to fetch from cursor
a | b | c
---+---+---
(0 rows)
***************
*** 291,299 ****
-- bad remote select
SELECT *
FROM dblink('SELECT * FROM foobar',false) AS t(a int, b text, c text[]);
! NOTICE: sql error
! DETAIL: ERROR: relation "foobar" does not exist
!
a | b | c
---+---+---
(0 rows)
--- 286,293 ----
-- bad remote select
SELECT *
FROM dblink('SELECT * FROM foobar',false) AS t(a int, b text, c text[]);
! NOTICE: relation "foobar" does not exist
! CONTEXT: error on dblink connection: unable to execute query
a | b | c
---+---+---
(0 rows)
***************
*** 316,324 ****

-- botch a change to some other data
SELECT dblink_exec('UPDATE foobar SET f3[2] = ''b99'' WHERE f1 = 11',false);
! NOTICE: sql error
! DETAIL: ERROR: relation "foobar" does not exist
!
dblink_exec
-------------
ERROR
--- 310,317 ----

-- botch a change to some other data
SELECT dblink_exec('UPDATE foobar SET f3[2] = ''b99'' WHERE f1 = 11',false);
! NOTICE: relation "foobar" does not exist
! CONTEXT: error on dblink connection: unable to execute command
dblink_exec
-------------
ERROR
***************
*** 378,386 ****
SELECT *
FROM dblink('myconn','SELECT * FROM foobar',false) AS t(a int, b text, c text[])
WHERE t.a > 7;
! NOTICE: sql error
! DETAIL: ERROR: relation "foobar" does not exist
!
a | b | c
---+---+---
(0 rows)
--- 371,378 ----
SELECT *
FROM dblink('myconn','SELECT * FROM foobar',false) AS t(a int, b text, c text[])
WHERE t.a > 7;
! NOTICE: relation "foobar" does not exist
! CONTEXT: error on dblink connection: unable to execute query
a | b | c
---+---+---
(0 rows)
***************
*** 416,424 ****

-- open a cursor incorrectly
SELECT dblink_open('myconn','rmt_foo_cursor','SELECT * FROM foobar',false);
! NOTICE: sql error
! DETAIL: ERROR: relation "foobar" does not exist
!
dblink_open
-------------
ERROR
--- 408,415 ----

-- open a cursor incorrectly
SELECT dblink_open('myconn','rmt_foo_cursor','SELECT * FROM foobar',false);
! NOTICE: relation "foobar" does not exist
! CONTEXT: error on dblink connection: unable to open cursor
dblink_open
-------------
ERROR
***************
*** 503,511 ****

-- this should fail because there is no open transaction
SELECT dblink_exec('myconn','DECLARE xact_test CURSOR FOR SELECT * FROM foo');
! ERROR: sql error
! DETAIL: ERROR: DECLARE CURSOR can only be used in transaction blocks
!
-- reset remote transaction state
SELECT dblink_exec('myconn','ABORT');
dblink_exec
--- 494,501 ----

-- this should fail because there is no open transaction
SELECT dblink_exec('myconn','DECLARE xact_test CURSOR FOR SELECT * FROM foo');
! ERROR: DECLARE CURSOR can only be used in transaction blocks
! CONTEXT: error on dblink connection: unable to execute command
-- reset remote transaction state
SELECT dblink_exec('myconn','ABORT');
dblink_exec
***************
*** 554,562 ****
-- fetch some data incorrectly
SELECT *
FROM dblink_fetch('myconn','rmt_foobar_cursor',4,false) AS t(a int, b text, c text[]);
! NOTICE: sql error
! DETAIL: ERROR: cursor "rmt_foobar_cursor" does not exist
!
a | b | c
---+---+---
(0 rows)
--- 544,551 ----
-- fetch some data incorrectly
SELECT *
FROM dblink_fetch('myconn','rmt_foobar_cursor',4,false) AS t(a int, b text, c text[]);
! NOTICE: cursor "rmt_foobar_cursor" does not exist
! CONTEXT: error on dblink connection: unable to fetch from cursor
a | b | c
---+---+---
(0 rows)
***************
*** 571,579 ****
-- should generate 'cursor "rmt_foo_cursor" not found' error
SELECT *
FROM dblink_fetch('myconn','rmt_foo_cursor',4) AS t(a int, b text, c text[]);
! ERROR: sql error
! DETAIL: ERROR: cursor "rmt_foo_cursor" does not exist
!
-- close the named persistent connection
SELECT dblink_disconnect('myconn');
dblink_disconnect
--- 560,567 ----
-- should generate 'cursor "rmt_foo_cursor" not found' error
SELECT *
FROM dblink_fetch('myconn','rmt_foo_cursor',4) AS t(a int, b text, c text[]);
! ERROR: cursor "rmt_foo_cursor" does not exist
! CONTEXT: error on dblink connection: unable to fetch from cursor
-- close the named persistent connection
SELECT dblink_disconnect('myconn');
dblink_disconnect
Tom Lane wrote:
> Joe Conway <mail@joeconway.com> writes:
>> Tom Lane wrote:
>>> Yeah, the dblink code should probably try a bit harder to propagate the
>>> original error fields.
>
>> Sounds reasonable. Do you think this is a bug fix or an 8.4 enhancement?
>
> 8.4 enhancement I think, since a behavioral change here could pose a
> compatibility issue for applications.
>

Here is my proposed patch -- as suggested for cvs tip only.

I haven't been around enough lately to be sure I understand the process
these days. Should I be posting this to the wiki and waiting for the
next commit fest, or just apply myself in a day or two assuming no
objections?

Thanks,

Joe

Re: [PATCHES] partial header cleanup

Alvaro Herrera wrote:
> Zdenek Kotala wrote:
>> This replaces xlog.h with xlogdefs.h in bufpage.h. All the other
>> changes are forgotten includes somewhere. It cuts the bloat in itup.h
>> roughly in half, for example. But there are still unresolved problems:
>> htup.h should include bufpage.h, because it needs the PageHeader size,
>> but there is still an unnecessary bufmgr.h include in bufpage.h which
>> generates bloat.
>
> I agree with this patch -- in fact I had done the same before PGCon and
> then neglected it for some reason. (I think I was distracted trying to
> get the struct RelationData definition out of rel.h, but that did not
> turn out too well).

I think rel.h is atomic. There is no way to split it.

> I was thinking maybe we need a third buffer manager header file. One
> would have the current bufmgr.h, another would have the page stuff that
> does not know about bufmgr.h (so most of current bufpage.h), and the
> third one would be both plus the #define that needs both (which is
> currently in bufpage.h). I am not sure what kind of fallout that
> causes. Maybe that would help you too. We need to come up with a good
> name for that file however ... bufmgrpage.h seems ugly.

I don't think we need a third header file. It seems to me that only the
BufferGetPage macro requires both headers (the Page and Buffer
datatypes). The other content seems independent to me.

I'm thinking about renaming bufpage.h to page.h, but that is a cosmetic
change, and IIRC CVS does not handle file renaming well.

Zdenek


--
Sent via pgsql-patches mailing list (pgsql-patches@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-patches

Re: [HACKERS] explain doesn't work with execute using

2008/6/1 Tom Lane <tgl@sss.pgh.pa.us>:
> "Pavel Stehule" <pavel.stehule@gmail.com> writes:
>> 2008/6/1 Tom Lane <tgl@sss.pgh.pa.us>:
>>> This argument seems entirely bogus. How are they any more constant
>>> than in the other case? The value isn't going to change for the life
>>> of the portal in either case.
>
>> this is true, Tom, but the problem is in EXPLAIN. I think both my
>> solution and yours are a little incorrect: we fix the result, not the
>> cause. The problem is that the plan doesn't carry the parameters'
>> flags, and with EXPLAIN the planner is called two times with
>> different parameter flags.
>
> [ shrug... ] Well, I'm willing to change the code as you suggest,
> but if you're thinking that this will make EXPLAIN exactly reproduce
> the plan that would be generated for a plain SELECT invoked in the
> same context, you're still mistaken. It doesn't account for the
> effects of the fast-start-cursor option. And for what you seem to
> want EXPLAIN to do here, it probably shouldn't. The whole thing
> seems pretty unprincipled to me ...
>

It's not ideal, and it was a surprise to me that EXPLAIN can differ
from the real plan. It is the basic tool for identifying performance
problems in plpgsql procedures. So could this be a short-term fix, plus
a ToDo item?

Regards
Pavel Stehule

> regards, tom lane
>

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] proposal: table functions and plpgsql

Hello

After some days I think the idea of local types is wrong. Maybe we can
register output types for our SRF functions (maybe only for table
functions), but this mechanism is redundant with explicit custom types.
Local function types are nice, since they allow better compile-time
checks, but they are unnecessary.
Sample:
CREATE OR REPLACE FUNCTION foo(a integer)
RETURNS TABLE(a integer, b integer) AS $$
DECLARE r record;
BEGIN
  FOR i IN 1..a LOOP
    r := ROW(i, i+1);
    RETURN NEXT r;
  END LOOP;
  RETURN;
END;
$$ LANGUAGE plpgsql;

or
-- more in SQL/PSM character
CREATE OR REPLACE FUNCTION foo(a integer)
RETURNS TABLE(a integer, b integer) AS $$
BEGIN
  RETURN TABLE SELECT i, i+1
               FROM generate_series(1,a) g(i);
  RETURN;
END;
$$ LANGUAGE plpgsql;
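
(If the proposed syntax were accepted, calling the function would look
like any other set-returning function; e.g. from psycopg2, with
placeholder connection details:)

import psycopg2

conn = psycopg2.connect('host=localhost dbname=dbname user=user')
cur = conn.cursor()
cur.execute('SELECT a, b FROM foo(%s)', (3,))
rows = cur.fetchall()  # expected: [(1, 2), (2, 3), (3, 4)]
conn.close()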

Any comments?

Regards
Pavel Stehule
2008/5/21 Pavel Stehule <pavel.stehule@gmail.com>:
> Hello
>
> I am returning to my patch and older proposal
> http://archives.postgresql.org/pgsql-hackers/2007-02/msg00318.php .
>
> Neil Conway did some work on this
> (http://archives.postgresql.org/pgsql-hackers/2007-07/msg00501.php) and
> he committed half of the patch, the RETURN QUERY part.
>
> The problematic part of my patch is the implementation. Tom Lane
> proposed implementing RETURNS TABLE only as syntactic sugar for
> RETURNS SETOF RECORD. This is not compatible with a potential full
> implementation, because it adds some default variables. My solution
> was a special argmode, so that I did not have to create default
> variables for the output. My solution wasn't the best either: it was
> ugly for current plpgsql, where the RETURN NEXT statement is often
> used (PSM has no similar statement). I dislike default variables,
> since they are an easy way to get collisions between variable and
> column names.
>
> I propose the following syntax for plpgsql:
>
> CREATE OR REPLACE FUNCTION foo(m integer)
> RETURNS TABLE (a integer, b integer) AS $$
> DECLARE r foo; -- same name as function, this type has local visibility
> BEGIN
>   FOR i IN 1..m LOOP
>     r.a := i; r.b := i + 1;
>     RETURN NEXT r;
>   END LOOP;
>   RETURN;
> END;
> $$ LANGUAGE plpgsql;
>
> In my proposal I don't create any default variables. The result type
> is only virtual - I don't need to write it to the system catalogs. I
> think it's better than using some specific predeclared type such as
> RESULTTYPE or RESULTSET.
>
> What do you think about it?
>
> Regards
> Pavel Stehule
>

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [GENERAL] Multiple result sets

> I need 5 queries, fired from the same AJAX request to a Python web
> application, to see the same database snapshot. The driver is
> psycopg2.
>
> Since PostgreSQL 8.2 functions can't return multiple result sets,
> what would be the best approach?
>

You want to set your transaction isolation to "Serializable".
Then execute your 5 queries via the same connection, and the same
transaction.

You can do that with this command:

BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
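
psycopg2 can also request this at the API level instead of issuing the
BEGIN by hand. A sketch, assuming psycopg2's extensions module
(connection details are placeholders):

import psycopg2
import psycopg2.extensions

conn = psycopg2.connect('host=localhost dbname=dbname user=user')
# every transaction on this connection now begins SERIALIZABLE
conn.set_isolation_level(
    psycopg2.extensions.ISOLATION_LEVEL_SERIALIZABLE)
cur = conn.cursor()
cur.execute('SELECT 1')  # the first statement opens the transaction
# ... run the remaining queries on the same connection ...
conn.commit()
conn.close()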


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [GENERAL] Multiple result sets

Clodoaldo <clodoaldo.pinto.neto@gmail.com> writes:
> I need 5 queries, fired from the same AJAX request to a Python web
> application, to see the same database snapshot.

serializable transaction?

regards, tom lane

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

[GENERAL] Multiple result sets

I need 5 queries, fired from the same AJAX request to a Python web
application, to see the same database snapshot. The driver is
psycopg2.

Since PostgreSQL 8.2 functions can't return multiple result sets,
what would be the best approach?

All 5 queries return 2 columns, but the types differ in each query. I
could just do a union, casting everything to text, adding one column to
identify each query, and then recast when receiving the result set. But
that could be costly (I guess the union is) and not elegant at all.
Those queries will fill select boxes, and the AJAX refresh should be
very fast.

Regards, Clodoaldo Pinto Neto

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [pgsql-es-ayuda] money data type: no longer obsolete?

On Sun, Jun 1, 2008 at 10:47 AM, Cesar A. <carbonararivas@yahoo.es> wrote:
> Greetings...
>
> Browsing the documentation[1], I found that in version 8.3 the money
> data type no longer carries the note saying that the type is
> obsolete... was it rewritten?
>

The Release Notes[1] say:

* Widen the MONEY data type to 64 bits (D'Arcy Cain)
This greatly increases the range of supported MONEY values.

As of that release, the obsolete marker was removed.


[1] http://www.postgresql.org/docs/8.3/static/release-8-3.html

--
Regards,
Jaime Casanova
PostgreSQL support and training
Guayaquil - Ecuador
Cel. (593) 087171157
--
TIP 6: Have you searched our mailing list archives?

http://archives.postgresql.org/pgsql-es-ayuda

Re: [HACKERS] Core team statement on replication in PostgreSQL

On Thu, May 29, 2008 at 4:12 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> The Postgres core team met at PGCon to discuss a few issues, the largest
> of which is the need for simple, built-in replication for PostgreSQL.
[...]
> We believe that the most appropriate base technology for this is
> probably real-time WAL log shipping, as was demoed by NTT OSS at PGCon.
> We hope that such a feature can be completed for 8.4. Ideally this
> would be coupled with the ability to execute read-only queries on the
> slave servers, but we see technical difficulties that might prevent that
> from being completed before 8.5 or even further out. (The big problem
> is that long-running slave-side queries might still need tuples that are
> vacuumable on the master, and so replication of vacuuming actions would
> cause the slave's queries to deliver wrong answers.)
>
> Again, this will not replace Slony, pgPool, Continuent, Londiste, or
> other systems for many users, as it will be not be highly scalable nor
> support long-distance replication nor replicating less than an entire
> installation. But it is time to include a simple, reliable basic
> replication feature in the core system.

Hello!

I thought I would share a few thoughts of my own about the issue.
I have hands-on experience with Oracle and MySQL apart from
PostgreSQL, so I hope it will be a bit interesting.

The former has a feature called "physical standby", which looks
quite like our WAL-shipping based replication. Archived logs are
simply replayed on the standby database. The primary and standby
databases are connected, and can stream the logs directly. They
either copy a log when it is finished (as we do now) or do it in a
continuous manner (as I hope we will be able to).

It is possible to have synchronous replication (where "COMMIT" on
the primary database succeeds only when the data is safely stored
on the standby database). I think such a feature would be a great
advantage for PostgreSQL (where you cannot afford to lose any
transactions).

Their standby database is not accessible. It can be opened read-only,
but during that time replication stops. So a PostgreSQL standby that
is readable and still replicating would be great.

The other method is "logical standby", which works by dissecting the
WAL logs and recreating the DDL/DML from them. I have never seen
anyone use it. ;-)

Then we have MySQL replication -- done by replaying the actual
DDL/DML on the slaves. This approach has issues, most notably when
slaves are highly loaded and lag behind the master -- so you end up
with infrastructure to monitor lag and turn off slaves which lag too
much. It is also painful to set up -- you have to stop, copy,
configure and run.

* Back to PostgreSQL world

As for PostgreSQL solutions, we have Slony-I, which is great as long
as you don't have too many people managing the database and/or your
schema doesn't change too frequently. Perhaps it would be more easily
maintainable if there were a way to get at the DDL (via DDL triggers
or similar). Its main advantages for me are the ability to prepare
complex setups and to add new slaves easily. The pgpool solution is
quite nice, but then again adding a new slave is not so easy. And
being a filtering layer between client and server, it feels a bit
"fragile" (I know it is not, but then again it is harder to convince
someone that "yes, it will work 100% right all the time").

* How I would like PostgreSQL WAL-replication to evolve:

First of all, it would be great if a slave/standby contacted the
master and maintained its state with it (told it its xmin, requested
a log to stream, went online-streaming). In particular, I hope that
it will be possible to make a switchover (where the two databases
exchange roles), and the direct connection between the two should
help with this.

In detail, I think it should go like this:
* A slave database starts up and checks that it is working as a
  "replica" (hopefully this would not be a postgresql.conf constant,
  but rather some file maintained by the database).
* It connects to the master database, tells it where in the WAL it is
  now, and requests log N.
* If log N is not available, it requests the log from an externally
  supplied script (so that the log can be fetched from a log archive
  repository somewhere, recovered from a backup tape, etc).
* It continues asking until it gets to the logs which are available
  at the master database.
* It continues replaying until it gets within
  max_allowed_replication_lag time, and opens the slave for read-only
  queries.
* If it starts lagging too much, perhaps close the read-only access
  to the database (perhaps configurable?).

I think replication should be easy to set up. I think our
archive_command approach is quite easy, but many people come to it
with a lot of misconceptions about how it works (and it takes time to
explain to them how it actually works, especially what
archive_command is for, and that pg_start_backup() doesn't actually
_do_ the backup, but just tells PostgreSQL that a backup is being
done).
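
To illustrate that last point, a bare-bones sketch of a base backup:
pg_start_backup()/pg_stop_backup() merely bracket a file-level copy
that you still perform yourself (the paths and the rsync step are
placeholders):

import subprocess
import psycopg2

conn = psycopg2.connect('host=localhost dbname=postgres user=postgres')
cur = conn.cursor()
cur.execute("SELECT pg_start_backup('nightly')")  # superuser required
conn.commit()
# pg_start_backup() copied nothing; the file-level copy is our job
subprocess.check_call(['rsync', '-a',
                       '/var/lib/pgsql/data/', '/backup/data/'])
cur.execute('SELECT pg_stop_backup()')
conn.commit()
conn.close()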

It should be easy to set up, and easy to switch over (change the
direction of replication) and fail over.

Also, it would be interesting to have a shared-storage solution where
a single database modifies the data and multiple read-only slaves
access it. The WAL shipping would then be used to "push" modified
pages into the read-only slaves' memory, and each checkpoint would
allow the read-only slaves to release those pages from memory. A kind
of replication without actually doing any writes.

* highly available libpq?

Another thing worth considering is adding an HA capability to libpq.
Currently I use a setup where I have a machine with the database and
pg_bouncer (for connection pooling at the database level), and on the
client machines I have pgpool on localhost (for transparent
failover). I think some level of replication awareness could be built
into libpq. It would allow simpler, but still HA, setups.

Perhaps the ability to specify a standby in the connect string,
something like:

"host=foo1.bar;db=pgdb;failover=yes;stbyhost=foo2.bar"

...a failover="yes" would allow libpq to try connecting to stbyhost
if host is not available, AND to succeed if stbyhost announces itself
as the new master.

This would make switchover easier (clients would be told that we are
closing, and that they should connect to the new master).

Also, some way of supplying multiple hosts, where the client picks a
host at random, would be quite useful, say:

"host=foo1.bar,foo2.bar,foo3.bar;db=pgdb;loadbalance=random"

Why in libpq and not in [language/framework/application]? Well, it
would help with not reinventing the wheel. And if we want a painless
switchover, the client should be aware that there is a standby ready.
Also, being optional, it wouldn't change anything for those who don't
want it.

Regards,
Dawid

PS: As for pgpool/pg_bouncer -- I wish there were a way for them to
tell the database that they are proxying for someone else (so that I
could check in pg_stat_activity who is on the far end of the database
connection, not "localhost" ;)).
--
Solving [site load issues] with [more database replication] is a lot
like solving your own personal problems with heroin - at first it
sorta works, but after a while things just get out of hand.

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[COMMITTERS] pgsql: Update time zone data files to tzdata release 2008c (DST law

Log Message:
-----------
Update time zone data files to tzdata release 2008c (DST law changes in
Morocco, Iraq, Choibalsan, Pakistan, Syria, Cuba, Argentina/San_Luis).

Tags:
----
REL8_3_STABLE

Modified Files:
--------------
pgsql/src/timezone/data:
africa (r1.7 -> r1.7.2.1)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/timezone/data/africa?r1=1.7&r2=1.7.2.1)
asia (r1.10 -> r1.10.2.1)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/timezone/data/asia?r1=1.10&r2=1.10.2.1)
backward (r1.6 -> r1.6.2.1)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/timezone/data/backward?r1=1.6&r2=1.6.2.1)
northamerica (r1.10 -> r1.10.2.1)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/timezone/data/northamerica?r1=1.10&r2=1.10.2.1)
southamerica (r1.9.2.1 -> r1.9.2.2)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/timezone/data/southamerica?r1=1.9.2.1&r2=1.9.2.2)
zone.tab (r1.9 -> r1.9.2.1)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/timezone/data/zone.tab?r1=1.9&r2=1.9.2.1)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

[COMMITTERS] pgsql: Update time zone data files to tzdata release 2008c (DST law

Log Message:
-----------
Update time zone data files to tzdata release 2008c (DST law changes in
Morocco, Iraq, Choibalsan, Pakistan, Syria, Cuba, Argentina/San_Luis).

Tags:
----
REL8_1_STABLE

Modified Files:
--------------
pgsql/src/timezone/data:
africa (r1.2.2.4 -> r1.2.2.5)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/timezone/data/africa?r1=1.2.2.4&r2=1.2.2.5)
asia (r1.3.2.5 -> r1.3.2.6)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/timezone/data/asia?r1=1.3.2.5&r2=1.3.2.6)
backward (r1.3.2.3 -> r1.3.2.4)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/timezone/data/backward?r1=1.3.2.3&r2=1.3.2.4)
northamerica (r1.3.2.5 -> r1.3.2.6)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/timezone/data/northamerica?r1=1.3.2.5&r2=1.3.2.6)
southamerica (r1.3.2.5 -> r1.3.2.6)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/timezone/data/southamerica?r1=1.3.2.5&r2=1.3.2.6)
zone.tab (r1.3.2.4 -> r1.3.2.5)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/timezone/data/zone.tab?r1=1.3.2.4&r2=1.3.2.5)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

[COMMITTERS] pgsql: Update time zone data files to tzdata release 2008c (DST law

Log Message:
-----------
Update time zone data files to tzdata release 2008c (DST law changes in
Morocco, Iraq, Choibalsan, Pakistan, Syria, Cuba, Argentina/San_Luis).

Tags:
----
REL8_0_STABLE

Modified Files:
--------------
pgsql/src/timezone/data:
africa (r1.1.4.5 -> r1.1.4.6)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/timezone/data/africa?r1=1.1.4.5&r2=1.1.4.6)
asia (r1.2.4.6 -> r1.2.4.7)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/timezone/data/asia?r1=1.2.4.6&r2=1.2.4.7)
backward (r1.2.4.4 -> r1.2.4.5)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/timezone/data/backward?r1=1.2.4.4&r2=1.2.4.5)
northamerica (r1.2.4.6 -> r1.2.4.7)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/timezone/data/northamerica?r1=1.2.4.6&r2=1.2.4.7)
southamerica (r1.2.4.6 -> r1.2.4.7)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/timezone/data/southamerica?r1=1.2.4.6&r2=1.2.4.7)
zone.tab (r1.2.4.5 -> r1.2.4.6)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/timezone/data/zone.tab?r1=1.2.4.5&r2=1.2.4.6)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

[COMMITTERS] pgsql: Update time zone data files to tzdata release 2008c (DST law

Log Message:
-----------
Update time zone data files to tzdata release 2008c (DST law changes in
Morocco, Iraq, Choibalsan, Pakistan, Syria, Cuba, Argentina/San_Luis).

Tags:
----
REL8_2_STABLE

Modified Files:
--------------
pgsql/src/timezone/data:
africa (r1.4.2.3 -> r1.4.2.4)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/timezone/data/africa?r1=1.4.2.3&r2=1.4.2.4)
asia (r1.6.2.4 -> r1.6.2.5)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/timezone/data/asia?r1=1.6.2.4&r2=1.6.2.5)
backward (r1.4.2.2 -> r1.4.2.3)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/timezone/data/backward?r1=1.4.2.2&r2=1.4.2.3)
northamerica (r1.6.2.4 -> r1.6.2.5)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/timezone/data/northamerica?r1=1.6.2.4&r2=1.6.2.5)
southamerica (r1.6.2.4 -> r1.6.2.5)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/timezone/data/southamerica?r1=1.6.2.4&r2=1.6.2.5)
zone.tab (r1.6.2.3 -> r1.6.2.4)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/timezone/data/zone.tab?r1=1.6.2.3&r2=1.6.2.4)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

[COMMITTERS] pgsql: Update time zone data files to tzdata release 2008c (DST law

Log Message:
-----------
Update time zone data files to tzdata release 2008c (DST law changes in
Morocco, Iraq, Choibalsan, Pakistan, Syria, Cuba, Argentina/San_Luis).

Modified Files:
--------------
pgsql/src/timezone/data:
africa (r1.7 -> r1.8)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/timezone/data/africa?r1=1.7&r2=1.8)
asia (r1.10 -> r1.11)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/timezone/data/asia?r1=1.10&r2=1.11)
backward (r1.6 -> r1.7)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/timezone/data/backward?r1=1.6&r2=1.7)
northamerica (r1.10 -> r1.11)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/timezone/data/northamerica?r1=1.10&r2=1.11)
southamerica (r1.10 -> r1.11)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/timezone/data/southamerica?r1=1.10&r2=1.11)
zone.tab (r1.9 -> r1.10)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/timezone/data/zone.tab?r1=1.9&r2=1.10)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

Re: [HACKERS] Core team statement on replication in PostgreSQL

"Merlin Moncure" <mmoncure@gmail.com> writes:
> On Sun, Jun 1, 2008 at 11:58 AM, Robert Hodges
> <robert.hodges@continuent.com> wrote:
>> My point here is that with reasonably small extensions to the core you can
>> build products that are a lot better than SLONY.

> These issues are much discussed and well understood.

Well, what we know is that previous attempts to define replication hooks
to be added to the core have died for lack of interest. Maybe Robert
can start a new discussion that will actually get somewhere; if so, more
power to him. (Is the replica-hooks-discuss list still working?) But
that is entirely orthogonal to what is proposed in this thread, which
is to upgrade the existing PITR support into a reasonably useful
replication feature.

regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[COMMITTERS] pgsql: Refactor SPI_cursor_open/SPI_cursor_open_with_args so that the

Log Message:
-----------
Refactor SPI_cursor_open/SPI_cursor_open_with_args so that the latter sets
the PARAM_FLAG_CONST flag on the parameters that are passed into the portal,
while the former's behavior is unchanged. This should only affect the case
where the portal is executing an EXPLAIN; it will cause the generated plan to
look more like what would be generated if the portal were actually executing
the command being explained. Per gripe from Pavel.

Modified Files:
--------------
pgsql/src/backend/executor:
spi.c (r1.195 -> r1.196)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/backend/executor/spi.c?r1=1.195&r2=1.196)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

Re: [HACKERS] explain doesn't work with execute using

"Pavel Stehule" <pavel.stehule@gmail.com> writes:
> 2008/6/1 Tom Lane <tgl@sss.pgh.pa.us>:
>> This argument seems entirely bogus. How are they any more constant
>> than in the other case? The value isn't going to change for the life
>> of the portal in either case.

> this is true, Tom, but the problem is in EXPLAIN. I think both my
> solution and yours are a little incorrect: we fix the result, not the
> cause. The problem is that the plan doesn't carry the parameters'
> flags, and with EXPLAIN the planner is called two times with
> different parameter flags.

[ shrug... ] Well, I'm willing to change the code as you suggest,
but if you're thinking that this will make EXPLAIN exactly reproduce
the plan that would be generated for a plain SELECT invoked in the
same context, you're still mistaken. It doesn't account for the
effects of the fast-start-cursor option. And for what you seem to
want EXPLAIN to do here, it probably shouldn't. The whole thing
seems pretty unprincipled to me ...

regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] Core team statement on replication in PostgreSQL

On Sun, Jun 1, 2008 at 11:58 AM, Robert Hodges
<robert.hodges@continuent.com> wrote:
> Hi Merlin,
>
> My point here is that with reasonably small extensions to the core you can
> build products that are a lot better than SLONY. Triggers do not cover
> DDL, among other issues, and it's debatable whether they are the best way to
> implement quorum policies like Google's semi-synchronous replication. As I
> mentioned separately this topic deserves another thread which I promise to
> start.

These issues are much discussed and well understood. At this point,
the outstanding points of discussion are technical... how to make this
thing work.

merlin

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] explain doesn't work with execute using

2008/6/1 Tom Lane <tgl@sss.pgh.pa.us>:
> "Pavel Stehule" <pavel.stehule@gmail.com> writes:
>> 2008/6/1 Tom Lane <tgl@sss.pgh.pa.us>:
>>> What do you think a "less invasive" patch would be, anyway? I don't
>>> buy that, say, having SPI_cursor_open_with_args set the flag but
>>> SPI_cursor_open not do so is any safer. There is no difference between
>>> the two as to what might get executed, so if there's a problem then
>>> both would be at risk.
>
>> SPI_cursor_open_with_args is a new function; it's used only in the
>> FOR EXECUTE statement, and in that context the variables really are
>> constants.
>
> This argument seems entirely bogus. How are they any more constant
> than in the other case? The value isn't going to change for the life
> of the portal in either case.

That is true, Tom, but the problem is in EXPLAIN. I think both my
solution and yours are a little incorrect: we are treating the
symptom, not the cause. The problem exists because the plan doesn't
carry the parameters' flags, so with EXPLAIN the planner is called
twice with different parameter flags.


>
> ISTM you're expecting EXPLAIN to behave in some magic way that has
> got little to do with "correctness".
>

This is the first time I have done anything with EXPLAIN, so I don't
understand it well, but I would like to correct the EXPLAIN output:
when the original plan uses variables I want to see variables in the
plan, and when the plan uses constants I want to see constants.


> regards, tom lane
>

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[COMMITTERS] pgsql: Marginal improvements to the documentation for PGOPTIONS.

Log Message:
-----------
Marginal improvements to the documentation for PGOPTIONS.

Modified Files:
--------------
pgsql/doc/src/sgml:
libpq.sgml (r1.257 -> r1.258)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/doc/src/sgml/libpq.sgml?r1=1.257&r2=1.258)
pgsql/doc/src/sgml/ref:
psql-ref.sgml (r1.206 -> r1.207)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/doc/src/sgml/ref/psql-ref.sgml?r1=1.206&r2=1.207)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

Re: [HACKERS] explain doesn't work with execute using

"Pavel Stehule" <pavel.stehule@gmail.com> writes:
> 2008/6/1 Tom Lane <tgl@sss.pgh.pa.us>:
>> What do you think a "less invasive" patch would be, anyway? I don't
>> buy that, say, having SPI_cursor_open_with_args set the flag but
>> SPI_cursor_open not do so is any safer. There is no difference between
>> the two as to what might get executed, so if there's a problem then
>> both would be at risk.

> SPI_cursor_open_with_args is a new function; it's used only in the
> FOR EXECUTE statement, and in that context the variables really are
> constants.

This argument seems entirely bogus. How are they any more constant
than in the other case? The value isn't going to change for the life
of the portal in either case.

ISTM you're expecting EXPLAIN to behave in some magic way that has
got little to do with "correctness".

regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[ADMIN] Securing Postgres

I am looking into installing the new version of Postgres. Is there any documentation on how to secure it following DISA's guidelines to pass the SRR/STIGs?

 

Thank you,

Tim


Re: [HACKERS] Overhauling GUCS

Gregory Stark wrote:
>
> I think we do a pretty good job of this already. Witness things like
> effective_cache_size -- imagine if this were "nested_loop_cache_hit_rate" for
> example, good luck figuring out what to set it to.

I think either of these is fine if we describe how to measure it.
Ideally, if we had a GUC like "log_nested_loop_cache_hit_rate" that
enabled some timing code (understandably with lots of overhead) to
attempt to measure the hit rate, wouldn't that be easier to figure out
than effective_cache_size?

> The vacuum cost delay factors are probably ripe for such a recast though. I
> think we need just one parameter "vacuum_io_bandwidth" or something like that.

+1, though perhaps the inverse of that is more useful. When my
machines are idle I'd be happy if they vacuumed more. Wouldn't we be
better served by specifying the I/O bandwidth of each
device/tablespace and letting vacuum use whatever portion would
otherwise be idle?
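
For reference, these are the knobs under discussion (8.3-era settings;
the values below are only examples, and the single setting at the end
is the hypothetical replacement, not an existing GUC):

    vacuum_cost_delay = 20ms     # sleep taken once the cost limit is hit
    vacuum_cost_limit = 200      # accumulated cost that triggers the sleep
    vacuum_cost_page_hit = 1     # cost per buffer found in shared buffers
    vacuum_cost_page_miss = 10   # cost per buffer read from disk
    vacuum_cost_page_dirty = 20  # cost per buffer dirtied
    # proposed replacement:  vacuum_io_bandwidth = 10MB/s  (hypothetical)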

> The bgwriter parameters might also be a candidate but I'm less certain.


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] explain doesn't work with execute using

2008/6/1 Tom Lane <tgl@sss.pgh.pa.us>:
> "Pavel Stehule" <pavel.stehule@gmail.com> writes:
>> 2008/6/1 Tom Lane <tgl@sss.pgh.pa.us>:
>>> This seems to be correctable with a one-line patch: make SPI_cursor_open
>>> set the CONST flag on parameters it puts into the portal (attached).
>>> I'm not entirely sure if it's a good idea or not --- comments?
>
>> We can do less invasive patch - it's much more ugly, but don't change
>> any other behave. I am afraid, so one-line patch can change behave of
>> explain statements in some cases where using variables is correct.
>
> If you can name a case where that is correct, then I'll worry about
> this, but offhand I don't see one.

here is such a case, where showing the variables would be correct:

postgres=# create or replace function foo(_a integer) returns void as $$
declare s varchar;
begin
  for s in explain select * from o where a = _a loop raise notice '%', s; end loop;
end; $$ language plpgsql;
CREATE FUNCTION
Time: 43,138 ms
postgres=# select foo(20);
NOTICE: Index Scan using o_pkey on o (cost=0.00..8.27 rows=1 width=4)
NOTICE: Index Cond: (a = 20) -- wrong :(
foo
-----

(1 row)


>
> What do you think a "less invasive" patch would be, anyway? I don't
> buy that, say, having SPI_cursor_open_with_args set the flag but
> SPI_cursor_open not do so is any safer. There is no difference between
> the two as to what might get executed, so if there's a problem then
> both would be at risk.

SPI_cursor_open_with_args is a new function; it's used only in the
FOR EXECUTE statement, and in that context the variables really are
constants.

Pavel

>
> regards, tom lane
>

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] Core team statement on replication in PostgreSQL

Hi Merlin,

My point here is that with reasonably small extensions to the core you can build products that are a lot better than SLONY.   Triggers do not cover DDL, among other issues, and it’s debatable whether they are the best way to implement quorum policies like Google’s semi-synchronous replication.  As I mentioned separately this topic deserves another thread which I promise to start.  

It is of course possible to meet some of these needs with an appropriate client interface to WAL shipping.  There’s no a-priori reason why built-in PostgreSQL slaves need to be the only client.  I would put a vote in for covering this possibility in the initial replication design.  We are using a very similar approach in our own master/slave replication product.  

Thanks, Robert

P.S., No offense intended to Jan Wieck et al.  There are some pretty cool things in SLONY.  

On 5/29/08 8:16 PM, "Merlin Moncure" <mmoncure@gmail.com> wrote:

On Thu, May 29, 2008 at 3:05 PM, Robert Hodges
<robert.hodges@continuent.com> wrote:
> Third, you can't stop with just this feature.  (This is the BUT part of the
> post.)  The use cases not covered by this feature area actually pretty
> large.  Here are a few that concern me:
>
> 1.) Partial replication.
> 2.) WAN replication.
> 3.) Bi-directional replication.  (Yes, this is evil but there are problems
> where it is indispensable.)
> 4.) Upgrade support.  Aside from database upgrade (how would this ever
> really work between versions?), it would not support zero-downtime app
> upgrades, which depend on bi-directional replication tricks.
> 5.) Heterogeneous replication.
> 6.) Finally, performance scaling using scale-out over large numbers of
> replicas.  I think it's possible to get tunnel vision on this—it's not a big
> requirement in the PG community because people don't use PG in the first
> place when they want to do this.  They use MySQL, which has very good
> replication for performance scaling, though it's rather weak for
> availability.

These types of things are what Slony is for. Slony is trigger-based.
This makes it more complex than log-shipping-style replication, but
provides lots of functionality.

WAL-shipping-based replication is maybe the fastest possible
solution... you are already paying the overhead, so it comes virtually
for free from the point of view of the master.
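
For context, WAL shipping in 8.3 is configured roughly like this (a
minimal sketch based on the stock PITR settings; the paths and copy
commands are placeholders):

    # master, postgresql.conf:
    archive_mode = on
    archive_command = 'cp %p /mnt/archive/%f'  # runs once per completed WAL segment

    # standby, recovery.conf:
    restore_command = 'cp /mnt/archive/%f %p'  # fetches each segment for replay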

MySQL replication is IMO nearly worthless from a backup standpoint.

merlin



--
Robert Hodges, CTO, Continuent, Inc.
Email:  robert.hodges@continuent.com
Mobile:  +1-510-501-3728  Skype:  hodgesrm

Re: [HACKERS] explain doesn't work with execute using

"Pavel Stehule" <pavel.stehule@gmail.com> writes:
> 2008/6/1 Tom Lane <tgl@sss.pgh.pa.us>:
>> This seems to be correctable with a one-line patch: make SPI_cursor_open
>> set the CONST flag on parameters it puts into the portal (attached).
>> I'm not entirely sure if it's a good idea or not --- comments?

> We can do a less invasive patch - it's much uglier, but it doesn't
> change any other behavior. I am afraid the one-line patch could
> change the behavior of EXPLAIN statements in some cases where showing
> variables is correct.

If you can name a case where that is correct, then I'll worry about
this, but offhand I don't see one.

What do you think a "less invasive" patch would be, anyway? I don't
buy that, say, having SPI_cursor_open_with_args set the flag but
SPI_cursor_open not do so is any safer. There is no difference between
the two as to what might get executed, so if there's a problem then
both would be at risk.

regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[pgsql-es-ayuda] money data type, no longer deprecated?

Greetings...

Browsing the documentation[1], I found that for the money data type in version 8.3 the note saying that the type is deprecated is no longer there... was it rewritten?

[1]http://www.postgresql.org/docs/8.3/static/datatype-money.html#DATATYPE-MONEY-TABLE

Cesar A. Carbonara R.
Linux user 377997
Debian
www.gulmer.org.ve





Re: [BUGS] BUG #4218: PGOPTIONS not documented for psql

"Netzach" <psql-webform-submitbug@netzach.co.il> writes:
> The psql man page and HTML documentation make no mention of the PGOPTIONS
> environment variable. It is only documented with regard to the 'postgres'
> command.

Hm? The psql man page's Environment section says

    This utility, like most other PostgreSQL utilities, also uses the
    environment variables supported by libpq (see Section 30.12).

and if you see that section, you'll find PGOPTIONS listed.

Now if your complaint were that the description of PGOPTIONS in 30.12
is uselessly scanty, I'd have to agree with that ...
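
For reference, PGOPTIONS carries backend command-line options that
libpq passes along at connection time, so anything settable via
"-c name=value" can be supplied there. A minimal example (the values
are just placeholders):

    PGOPTIONS='-c work_mem=64MB -c search_path=myschema' psql mydb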

regards, tom lane

--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs

Re: [HACKERS] explain doesn't work with execute using

*** ./src/backend/executor/spi.c.orig 2008-06-01 17:26:19.000000000 +0200
--- ./src/backend/executor/spi.c 2008-06-01 17:35:01.000000000 +0200
***************
*** 63,68 ****
--- 63,71 ----
static MemoryContext _SPI_execmem(void);
static MemoryContext _SPI_procmem(void);
static bool _SPI_checktuples(void);
+ static Portal _SPI_cursor_open(const char *name, SPIPlanPtr plan,
+ Datum *Values, const char *Nulls,
+ bool read_only, int pflags);


/* =================== interface functions =================== */
***************
*** 908,921 ****


/*
! * SPI_cursor_open()
! *
! * Open a prepared SPI plan as a portal
*/
! Portal
! SPI_cursor_open(const char *name, SPIPlanPtr plan,
Datum *Values, const char *Nulls,
! bool read_only)
{
CachedPlanSource *plansource;
CachedPlan *cplan;
--- 911,923 ----


/*
! * _SPI_cursor_open()
! * Open a prepared SPI plan as a portal; allows setting the parameters' pflags
*/
! static Portal
! _SPI_cursor_open(const char *name, SPIPlanPtr plan,
Datum *Values, const char *Nulls,
! bool read_only, int pflags)
{
CachedPlanSource *plansource;
CachedPlan *cplan;
***************
*** 997,1003 ****
ParamExternData *prm = &paramLI->params[k];

prm->ptype = plan->argtypes[k];
! prm->pflags = 0;
prm->isnull = (Nulls && Nulls[k] == 'n');
if (prm->isnull)
{
--- 999,1005 ----
ParamExternData *prm = &paramLI->params[k];

prm->ptype = plan->argtypes[k];
! prm->pflags = pflags;
prm->isnull = (Nulls && Nulls[k] == 'n');
if (prm->isnull)
{
***************
*** 1130,1135 ****
--- 1132,1154 ----


/*
+ * SPI_cursor_open()
+ *
+ * Open a prepared SPI plan as a portal
+ */
+ Portal
+ SPI_cursor_open(const char *name, SPIPlanPtr plan,
+ Datum *Values, const char *Nulls,
+ bool read_only)
+ {
+ return _SPI_cursor_open(name, plan,
+ Values, Nulls,
+ read_only, 0);
+
+ }
+
+
+ /*
* SPI_cursor_open_with_args()
*
* Parse and plan a query and open it as a portal. Like SPI_execute_with_args,
***************
*** 1177,1183 ****
/* SPI_cursor_open expects to be called in procedure memory context */
_SPI_procmem();

! result = SPI_cursor_open(name, &plan, Values, Nulls, read_only);

/* And clean up */
_SPI_curid++;
--- 1196,1203 ----
/* SPI_cursor_open expects to be called in procedure memory context */
_SPI_procmem();

! /* all params have the PARAM_FLAG_CONST flag */
! result = _SPI_cursor_open(name, &plan, Values, Nulls, read_only, PARAM_FLAG_CONST);

/* And clean up */
_SPI_curid++;

hello

2008/6/1 Tom Lane <tgl@sss.pgh.pa.us>:
> "Pavel Stehule" <pavel.stehule@gmail.com> writes:
>> I found the following bug - using EXPLAIN in stored procedures like:
>> ...
>> produces a wrong result. The real plan is correct, i.e. the variables
>> are substituted, but this EXPLAIN shows the variables.
>
> This seems to be correctable with a one-line patch: make SPI_cursor_open
> set the CONST flag on parameters it puts into the portal (attached).
> I'm not entirely sure if it's a good idea or not --- comments?

We can do a less invasive patch - it's much uglier, but it doesn't
change any other behavior. I am afraid the one-line patch could change
the behavior of EXPLAIN statements in some cases where showing
variables is correct.

Regards
Pavel Stehule

>
> regards, tom lane
>
> Index: src/backend/executor/spi.c
> ===================================================================
> RCS file: /cvsroot/pgsql/src/backend/executor/spi.c,v
> retrieving revision 1.195
> diff -c -r1.195 spi.c
> *** src/backend/executor/spi.c 12 May 2008 20:02:00 -0000 1.195
> --- src/backend/executor/spi.c 1 Jun 2008 15:33:13 -0000
> ***************
> *** 997,1003 ****
> ParamExternData *prm = &paramLI->params[k];
>
> prm->ptype = plan->argtypes[k];
> ! prm->pflags = 0;
> prm->isnull = (Nulls && Nulls[k] == 'n');
> if (prm->isnull)
> {
> --- 997,1010 ----
> ParamExternData *prm = &paramLI->params[k];
>
> prm->ptype = plan->argtypes[k];
> ! /*
> ! * We mark the parameters as const. This has no effect for simple
> ! * execution of a plan, but if more planning happens within the
> ! * portal (eg via EXPLAIN), the effect will be to treat the
> ! * parameters as constants. This is good and correct as long as
> ! * no plan generated inside the portal is used outside it.
> ! */
> ! prm->pflags = PARAM_FLAG_CONST;
> prm->isnull = (Nulls && Nulls[k] == 'n');
> if (prm->isnull)
> {
>

Re: [HACKERS] explain doesn't work with execute using

"Pavel Stehule" <pavel.stehule@gmail.com> writes:
> I found the following bug - using EXPLAIN in stored procedures like:
> ...
> produces a wrong result. The real plan is correct, i.e. the variables
> are substituted, but this EXPLAIN shows the variables.

This seems to be correctable with a one-line patch: make SPI_cursor_open
set the CONST flag on parameters it puts into the portal (attached).
I'm not entirely sure if it's a good idea or not --- comments?

regards, tom lane

Index: src/backend/executor/spi.c
===================================================================
RCS file: /cvsroot/pgsql/src/backend/executor/spi.c,v
retrieving revision 1.195
diff -c -r1.195 spi.c
*** src/backend/executor/spi.c 12 May 2008 20:02:00 -0000 1.195
--- src/backend/executor/spi.c 1 Jun 2008 15:33:13 -0000
***************
*** 997,1003 ****
ParamExternData *prm = &paramLI->params[k];

prm->ptype = plan->argtypes[k];
! prm->pflags = 0;
prm->isnull = (Nulls && Nulls[k] == 'n');
if (prm->isnull)
{
--- 997,1010 ----
ParamExternData *prm = &paramLI->params[k];

prm->ptype = plan->argtypes[k];
! /*
! * We mark the parameters as const. This has no effect for simple
! * execution of a plan, but if more planning happens within the
! * portal (eg via EXPLAIN), the effect will be to treat the
! * parameters as constants. This is good and correct as long as
! * no plan generated inside the portal is used outside it.
! */
! prm->pflags = PARAM_FLAG_CONST;
prm->isnull = (Nulls && Nulls[k] == 'n');
if (prm->isnull)
{

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [pgsql-advocacy] Feedback on blog post about Replication Feature decision and its impact

On Fri, May 30, 2008 at 6:21 AM, Dirk Riehle <dirk@riehle.org> wrote:
> EnterpriseDB (http://www.enterprisedb.com/) is a well-funded
> database startup whose product builds on PostgreSQL. EnterpriseDB adds many
> "enterprise-readiness" features to the basic PostgreSQL product, including
> database replication, and much more. One might argue that it is not in the
> interest of EnterpriseDB to have replication added to PostgreSQL as it
> reduces the differentiation between the free community product and the more
> advanced commercial offering. Why pay for EnterpriseDB if you already get
> what you need from the free version? Won't adding replication to the core
> product reduce EnterpriseDB's sales? This tension seems only to get worse
> when you realize that EnterpriseDB employs several of the core developers of
> PostgreSQL, suggesting a direct conflict of interest when making decisions
> like whether to add replication or not.

Having worked for EnterpriseDB for about three years now, and
understanding that there are several facets of our business model, I
can say that adding these types of features wouldn't hurt us at all.
In fact, they would make some of our work easier, and would be
welcomed.

--
Jonah H. Harris, Sr. Software Architect | phone: 732.331.1324
EnterpriseDB Corporation | fax: 732.331.1301
499 Thornall Street, 2nd Floor | jonah.harris@enterprisedb.com
Edison, NJ 08837 | http://www.enterprisedb.com/

--
Sent via pgsql-advocacy mailing list (pgsql-advocacy@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-advocacy

[BUGS] BUG #4218: PGOPTIONS not documented for psql

The following bug has been logged online:

Bug reference: 4218
Logged by: Netzach
Email address: psql-webform-submitbug@netzach.co.il
PostgreSQL version: <=8.3
Operating system: Documentation
Description: PGOPTIONS not documented for psql
Details:

The psql man page and HTML documentation make no mention of the PGOPTIONS
environment variable. It is only documented with regard to the 'postgres'
command.

I fear other environment variables may also lie hidden and undocumented.

--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs

Re: [NOVICE] Running .psqlrc from a custom location

I believe you can create a user with the same name as the schema you
want in the search path, so that when you log in to the database as
that user and run commands, the search path defaults to the schema
named after the user.

I have a system with several schemas, and I use a specific USER for
each schema with the same name as the schema.

I.e. schema "demo", user "demo": log in as "demo" and the schema
should default to "demo". This may also require setting the
search_path parameter in your configuration file; that is where you
tell the system to look in the USER schema, then the PUBLIC schema (in
that order), so it knows which schema to point a newly logged-in user
to.
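
For what it's worth, here is a minimal sketch of that setup (the names
are only examples; it works because the default search_path is
"$user", public, so a schema named after the user is searched first):

    CREATE USER demo PASSWORD 'secret';
    CREATE SCHEMA AUTHORIZATION demo;  -- creates schema "demo" owned by demo
    -- Connected as demo, unqualified names now resolve in schema demo
    -- first, falling back to public:
    SHOW search_path;                  -- "$user",public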

Keep in mind, I am a novice as well, but this is what I did to get it
working.

Derrick

Netzach wrote:
> Hi folks,
>
> I use a hack to make psql automatically set search_path to the relevant
> schema for the dump directory I am currently working in. I achieved this
> by running a wrapper script that changed the HOME environment variable,
> setting a variable on the command line, and having the .psqlrc in the
> custom location use that variable to set the search path. I just
> upgraded to 8.1 from 7.4 and my hack stopped working - apparently psql
> does not use the $HOME variable any more.
>
> My scripts are included below. I do not wish to run these commands every
> time I use psql, only when I run it from the wrapper script. As far as I
> am aware there is no way of setting search_path/schema from the command
> line.
>
> If somebody could suggest an alternative way of causing psql to run a
> .psqlrc file from an arbitrary location, or even better, a way of
> manually specifying search_path when starting psql, I will be eternally
> indebted.
>
> Thanks,
>
> Netzach
>
> --- BEGIN specialdirectory/spsql.sh
> #!/bin/bash
>
> [ -f dbname.txt ] && PGDATABASE=`cat dbname.txt`
> [ -f schemaname.txt ] && SCHEMANAME=`(echo '-v schema='\`cat schemaname.txt\`)`
> [ -f encoding.txt ] && PGENCODING=`(echo '-v encoding='"\`cat encoding.txt\`")`
> [ -f user.txt ] && PGUSER=`(echo -U; cat user.txt; echo -h localhost)`
> PGHOST=`[ -f host.txt ] && (echo '-h'; cat host.txt) || \
> [ -f user.txt ] && echo ' -h localhost'`
>
> HOME=`dirname \`readlink $0 || echo $0\`` \
> psql $SCHEMANAME $PGENCODING $PGHOST $PGUSER $PGDATABASE
> --- END spsql.sh
>
> --- BEGIN specialdirectory/.psqlrc
> SET search_path TO :schema;
> SET client_encoding TO :encoding;
> --- END .psqlrc
>
>

--
Sent via pgsql-novice mailing list (pgsql-novice@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-novice

Re: [PATCHES] partial header cleanup

Zdenek Kotala wrote:
> This replaces xlog.h with xlogdefs.h in bufpage.h. All the other
> changes are forgotten includes somewhere. It cuts the bloat in itup.h
> roughly in half, for example. But there are still unresolved
> problems: htup should include bufpage.h, because it needs the
> PageHeader size, but there is still an unnecessary bufmgr.h include
> in bufpage.h, which generates bloat.

I agree with this patch -- in fact I had done the same before PGCon and
then neglected it for some reason. (I think I was distracted trying to
get the struct RelationData definition out of rel.h, but that did not
turn out too well).

I was thinking maybe we need a third buffer manager header file. One
would have the current bufmgr.h, another would have the page stuff that
does not know about bufmgr.h (so most of current bufpage.h), and the
third one would be both plus the #define that needs both (which is
currently in bufpage.h). I am not sure what kind of fallout that
causes. Maybe that would help you too. We need to come up with a good
name for that file however ... bufmgrpage.h seems ugly.

--
Alvaro Herrera

http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

--
Sent via pgsql-patches mailing list (pgsql-patches@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-patches