Thursday, August 14, 2008

Re: [pgadmin-hackers] pgScript patch based on pgScript-1.0-beta-3

On Thu, Aug 14, 2008 at 2:26 PM, Mickael Deloison <mdeloison@gmail.com> wrote:

> I'm not sure I understand all the possibilities.
> By DLL, what do you mean? Are you saying that the pgScript code would
> stay on pgFoundry (as it is right now) or move to the pgAdmin SVN? If on
> pgAdmin SVN, would it be compiled and included in pgAdmin when you
> compile pgAdmin? And with a DLL, isn't there a problem if you want to
> distribute the pgAdmin binary with the PostgreSQL Windows distribution?
> Because the DLL would have to be distributed as well...

Integrated:

The source code is added as an integral part of the pgAdmin source
tree, fully integrated.

DLL:

The source code is added as a shared library to the pgAdmin source
tree. The main pgAdmin project then utilises that DLL. This allows the
pgScript DLL to be used by other applications, and may be
incrementally updated in branches within the SVN repository (we would
obviously give you commit rights to do that). This would require some
synchronisation of effort to ensure that you didn't change APIs in a
way that would cause us problems (we'd manage that through some
version numbering agreement).

Separate:

pgScript is maintained entirely outside of the pgAdmin source tree,
either on pgFoundry, pgadmin.org, Google Code or wherever. You
maintain it as you see fit, and we bundle your third party executable.


--
Dave Page
EnterpriseDB UK: http://www.enterprisedb.com

--
Sent via pgadmin-hackers mailing list (pgadmin-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgadmin-hackers

Re: [GENERAL] Newbie [CentOS 5.2] service postgresql initdb

On Thu, Aug 14, 2008 at 4:18 AM, Daneel <daneel-usenet@owoce.cz> wrote:
> Martin Marques wrote:
>>
>> Daneel escribió:
>>>
>>> Daneel wrote:
>>>>
>>>> While going through
>>>> http://wiki.postgresql.org/wiki/Detailed_installation_guides
>>>> and typing
>>>> service postgresql start
>>>> as root I got
>>>> "/var/lib/pgsql/data is missing. Use "service postgresql initdb" to
>>>> initialize the cluster first."
>>>>
>>>> When I run
>>>> service postgresql initdb
>>>> I get
>>>> "se: [FAILED]".
>>>> However, /var/lib/pgsql/data is created and user postgres owns it.
>>>>
>>>> But then I run
>>>> service postgresql start
>>>> and the very same error occurs..
>>>>
>>>> Daneel
>>>
>>> Shoud add that version is 8.3.1 and I've installed it using RPM
>>> packages... Thanks in advance for any tip...
>>
>> Where did you get the rpm packages?
>>
>
> I downloaded them from rpmfind.net They were Fedora 9 i386 version.

You need to use the version for RHEL, not Fedora. There are versions
of 8.3 for RHEL 3, 4 and 5 on the PostgreSQL FTP sites, and that's what
I use.

> I reinstalled CentOS yesterday and during installation I checked to include
> PostgreSQL 8.1.11. Now it seems to work properly.

You should really explore running 8.3.3. It's much faster than 8.1
and has a few features that are really nice to have.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [HACKERS] Join Removal/ Vertical Partitioning

I'm guessing it's this... looks pretty interesting even if not.

http://optimizermagic.blogspot.com/2008/06/why-are-some-of-tables-in-my-query.html

...Robert

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [pgadmin-hackers] pgScript patch based on pgScript-1.0-beta-3

2008/8/14 Dave Page <dpage@pgadmin.org>:
> Well a lot of the argument around separating it was based on the idea
> that you would continue to work on it without being bound by the
> pgAdmin release cycle. I think that makes you one of the major
> decision makers - so what do you think we should do?
>
> FWIW, I don't have any major objection to any of the integration
> methods, though having it as a completely separate executable is
> probably my least favourite option.
>

I'm not sure I understand all the possibilities.
By DLL, what do you mean? Are you saying that the pgScript code would
stay on pgFoundry (as it is right now) or move to the pgAdmin SVN? If on
pgAdmin SVN, would it be compiled and included in pgAdmin when you
compile pgAdmin? And with a DLL, isn't there a problem if you want to
distribute the pgAdmin binary with the PostgreSQL Windows distribution?
Because the DLL would have to be distributed as well...

Mickael


Re: [HACKERS] compilig libpq with borland 5.5

On Thu, Aug 14, 2008 at 9:02 AM, claudio lezcano <claudiogmi@gmail.com> wrote:
> Thank you so much for the comments. I managed to make progress by
> reconfiguring the Borland include directory for the compilation;
> however, another problem has emerged, which produces the following
> message:
>
> Error: Unresolved external '_pgwin32_safestat' referenced from C:\SOURCE
> POSTGRES 8.3\SRC\INTERFACES\LIBPQ\RELEASE\BLIBPQ.LIB|fe-connect
>
> Obs.: Static or dynamic libraries generated by MSVC cannot be used to
> compile sources with bcc32, though they work for MinGW and others.
> While trying to compile, the following problem arises with bcc32:

You are correct about static libraries. If you have a 'COFF' (usually
Microsoft in this context) static library, the only tool I know of to
get it working with the Borland stack is the Digital Mars COFF->OMF
converter, which works, and is the only way to go if you have a fully
static library (one that is not a stub for a DLL) and don't have, or
can't compile, the source.

Dynamic libraries, however, can be shared between the compilers. You
can either load all the symbols with LoadLibrary, etc., or generate an
import library. Borland's implib.exe utility generates an import
library from any DLL, so you don't have to use LoadLibrary; you just
need the header for the symbols from the library which you want to use.

Format problems between the compilers are one of the many reasons why
static libraries (except for statically loaded .dll) have fallen out
of favor...you hardly ever see them anymore.
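[Editor's note: the runtime-loading route described above (LoadLibrary plus taking function pointers by name on Windows) can be sketched portably. The following Python ctypes snippet is an illustrative analogue only, not Borland- or libpq-specific code; it assumes a POSIX-like system where the C runtime can be located at run time.]

```python
import ctypes
import ctypes.util

# Locate and open the C runtime at run time -- the analogue of LoadLibrary().
libc = ctypes.CDLL(ctypes.util.find_library("c"))

# Declare the signature of the symbol we want -- the analogue of taking
# a typed function pointer obtained from GetProcAddress().
libc.abs.argtypes = [ctypes.c_int]
libc.abs.restype = ctypes.c_int

print(libc.abs(-42))  # prints 42
```

The import-library route mentioned above avoids this per-symbol ceremony: implib.exe produces a stub library from the DLL, so calls link normally against the header's declarations.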

merlin


Re: [HACKERS] compilig libpq with borland 5.5

Thank you so much for the comments. I managed to make progress by reconfiguring the Borland include directory for the compilation; however, another problem has emerged, which produces the following message:

Error: Unresolved external '_pgwin32_safestat' referenced from C:\SOURCE POSTGRES 8.3\SRC\INTERFACES\LIBPQ\RELEASE\BLIBPQ.LIB|fe-connect

Obs.: Static or dynamic libraries generated by MSVC cannot be used to compile sources with bcc32, though they work for MinGW and others. While trying to compile, the following problem arises with bcc32:

Error: 'C:\examples\LIBPQ.LIB' contains invalid OMF record, type 0x21 (possibly COFF)
** error 2 ** deleting ".\Release\blibpq.dll"

Thanks in advance
Claudio Lezcano

[COMMITTERS] pgsql: pg_buffercache needs to be taught about relation forks, as Greg

Log Message:
-----------
pg_buffercache needs to be taught about relation forks, as Greg Stark
pointed out.

Modified Files:
--------------
pgsql/contrib/pg_buffercache:
pg_buffercache.sql.in (r1.7 -> r1.8)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/contrib/pg_buffercache/pg_buffercache.sql.in?r1=1.7&r2=1.8)
pg_buffercache_pages.c (r1.14 -> r1.15)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/contrib/pg_buffercache/pg_buffercache_pages.c?r1=1.14&r2=1.15)
pgsql/doc/src/sgml:
pgbuffercache.sgml (r2.2 -> r2.3)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/doc/src/sgml/pgbuffercache.sgml?r1=2.2&r2=2.3)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

Re: [BUGS] BUG #4354: Text Type converted to Memo

On Thu, Aug 14, 2008 at 12:31:34PM +0000, Abuzar wrote:
>
> The following bug has been logged online:
>
> Bug reference: 4354
> Logged by: Abuzar
> Email address: Abuzer755@hotmail.com
> PostgreSQL version: 8.3.1
> Operating system: Windows xp SP2
> Description: Text Type converted to Memo
> Details:
>
> I used the text and character varying types in my database; then, when
> I look at them on a form in Delphi, I see that these types are shown on
> the form as memo. What is happening with it?

If this is a bug, it is on the Delphi side, but I don't think it is.

Delphi's memo type corresponds (roughly) to PostgreSQL's TEXT/VARCHAR
one, so it's your expectations that were off.

Cheers,
David.
--
David Fetter <david@fetter.org> http://fetter.org/
Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter
Skype: davidfetter XMPP: david.fetter@gmail.com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate

--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs

Re: [BUGS] BUG #4354: Text Type converted to Memo

In response to "Abuzar" <Abuzer755@hotmail.com>:
>
> The following bug has been logged online:
>
> Bug reference: 4354
> Logged by: Abuzar
> Email address: Abuzer755@hotmail.com
> PostgreSQL version: 8.3.1
> Operating system: Windows xp SP2
> Description: Text Type converted to Memo
> Details:
>
> I used the text and character varying types in my database; then, when
> I look at them on a form in Delphi, I see that these types are shown on
> the form as memo. What is happening with it?
> Please answer

I remember seeing something similar with FoxPro some time back.

If it's the same issue, then it stems from the fact that Delphi doesn't
have the exact same data types as PostgreSQL, and thus it must do some
mapping to get as close as possible. Memo is a pretty close match for
TEXT (at least it was in Fox).

This isn't a bug in PostgreSQL. I don't believe it's a bug at all, it's
just an interoperability quirk.

--
Bill Moran
Collaborative Fusion Inc.
http://people.collaborativefusion.com/~wmoran/

wmoran@collaborativefusion.com
Phone: 412-422-3463x4023


[BUGS] BUG #4354: Text Type converted to Memo

The following bug has been logged online:

Bug reference: 4354
Logged by: Abuzar
Email address: Abuzer755@hotmail.com
PostgreSQL version: 8.3.1
Operating system: Windows xp SP2
Description: Text Type converted to Memo
Details:

I used the text and character varying types in my database; then, when I
look at them on a form in Delphi, I see that these types are shown on the
form as memo. What is happening with it?
Please answer


Re: [GENERAL] pg_restore fails on Windows

Tom Tom wrote:
> Magnus Hagander wrote:
>> Tom Tom wrote:
>>>> Tom Tom wrote:
>>>>> Hello,
>>>>>
>>>>> We have a very strange problem when restoring a database on Windows XP.
>>>>> The PG version is 8.1.10
>>>>> The backup was made with the pg_dump on the same machine.
>>>>>
>>>>> pg_restore -F c -h localhost -p 5432 -U postgres -d "configV3" -v
>>>> "c:\Share\POSTGRES.backup"
>>>>> pg_restore: connecting to database for restore
>>>>> Password:
>>>>> pg_restore: creating SCHEMA public
>>>>> pg_restore: creating COMMENT SCHEMA public
>>>>> pg_restore: creating PROCEDURAL LANGUAGE plpgsql
>>>>> pg_restore: creating SEQUENCE hi_value
>>>>> pg_restore: executing SEQUENCE SET hi_value
>>>>> pg_restore: creating TABLE hibconfigelement
>>>>> pg_restore: creating TABLE hibrefconfigbase
>>>>> pg_restore: creating TABLE hibrefconfigreference
>>>>> pg_restore: creating TABLE hibtableattachment
>>>>> pg_restore: creating TABLE hibtableattachmentxmldata
>>>>> pg_restore: creating TABLE hibtableelementversion
>>>>> pg_restore: creating TABLE hibtableelementversionxmldata
>>>>> pg_restore: creating TABLE hibtablerootelement
>>>>> pg_restore: creating TABLE hibtablerootelementxmldata
>>>>> pg_restore: creating TABLE hibtableunversionedelement
>>>>> pg_restore: creating TABLE hibtableunversionedelementxmldata
>>>>> pg_restore: creating TABLE hibtableversionedelement
>>>>> pg_restore: creating TABLE hibtableversionedelementxmldata
>>>>> pg_restore: creating TABLE versionedelement_history
>>>>> pg_restore: creating TABLE versionedelement_refs
>>>>> pg_restore: restoring data for table "hibconfigelement"
>>>>> pg_restore: restoring data for table "hibrefconfigbase"
>>>>> pg_restore: restoring data for table "hibrefconfigreference"
>>>>> pg_restore: restoring data for table "hibtableattachment"
>>>>> pg_restore: restoring data for table "hibtableattachmentxmldata"
>>>>> pg_restore: [archiver (db)] could not execute query: no result from server
>>>>> pg_restore: *** aborted because of error
>>>>>
>>>>> The restore unexpectedly fails on hibtableattachmentxmldata table, which is
>> as
>>>> follows:
>>>>> CREATE TABLE hibtablerootelementxmldata
>>>>> (
>>>>> xmldata_id varchar(255) NOT NULL,
>>>>> xmldata text
>>>>> )
>>>>> WITHOUT OIDS;
>>>>>
>>>>> and contains thousands of rows with text field having even 40MB, encoded in
>>>> UTF8.
>>>>> The database is created as follows:
>>>>>
>>>>> CREATE DATABASE "configV3"
>>>>> WITH OWNER = postgres
>>>>> ENCODING = 'UTF8'
>>>>> TABLESPACE = pg_default;
>>>>>
>>>>>
>>>>> The really strange is that the db restore runs OK on linux (tested on
>> RHEL4,
>>>> PG version 8.1.9).
>>>>> The pg_restore output is _not_ very descriptive but I suspect some
>> dependency
>>>> on OS system libraries (encoding), or maybe it is also related to the size
>> of
>>>> the CLOB field. Anyway we are now effectively without any possibility to
>> backup
>>>> our database, which is VERY serious.
>>>>> Have you ever come across something similar to this?
>>>> Check what you have in your server logs (pg_log directory) and the
>>>> eventlog around this time. There is probably a better error message
>>>> available there.
>>>>
>>>> //Magnus
>>>>
>>> Thank you for your hint.
>>> The server logs do not display any errors, except for
>>>
>>> 2008-08-08 11:14:16 CEST LOG: checkpoints are occurring too frequently (14
>> seconds apart)
>>> 2008-08-08 11:14:16 CEST HINT: Consider increasing the configuration
>> parameter "checkpoint_segments".
>>> 2008-08-08 11:14:38 CEST LOG: checkpoints are occurring too frequently (22
>> seconds apart)
>>> 2008-08-08 11:14:38 CEST HINT: Consider increasing the configuration
>> parameter "checkpoint_segments".
>>> 2008-08-08 11:14:57 CEST LOG: checkpoints are occurring too frequently (19
>> seconds apart)
>>> 2008-08-08 11:14:57 CEST HINT: Consider increasing the configuration
>> parameter "checkpoint_segments".
>>> 2008-08-08 11:15:14 CEST LOG: checkpoints are occurring too frequently (17
>> seconds apart)
>>> 2008-08-08 11:15:14 CEST HINT: Consider increasing the configuration
>> parameter "checkpoint_segments".
>>> 2008-08-08 11:15:36 CEST LOG: checkpoints are occurring too frequently (22
>> seconds apart)
>>> 2008-08-08 11:15:36 CEST HINT: Consider increasing the configuration
>> parameter "checkpoint_segments".
>>> 2008-08-08 11:15:56 CEST LOG: checkpoints are occurring too frequently (20
>> seconds apart)
>>> 2008-08-08 11:15:56 CEST HINT: Consider increasing the configuration
>> parameter "checkpoint_segments".
>>> 2008-08-08 11:16:16 CEST LOG: checkpoints are occurring too frequently (20
>> seconds apart)
>>> 2008-08-08 11:16:16 CEST HINT: Consider increasing the configuration
>> parameter "checkpoint_segments".
>>> The warnings disappeared when the "checkpoint_segments" value was increased to
>> 10. The restore still failed however :(
>>> The Windows eventlogs show no errors, just informational messages about
>> starting/stopping the pg service.
>>
>> That's rather strange. There really should be *something* in the logs
>> there. Hmm.
>>
>> Does this happen for just this one dump, or does it happen for all dumps
>> you create on this machine (for example, can you dump single tables and
>> get those to come through - thus isolating the issue to one table or so)?
>>
>
> So after all I was able to isolate the issue to one table/one row. Now I have one small dump that (when I try to restore it) consistently fails on Windows systems (tested on 3 machines with WinXP, PG 8.1.10) and goes through on Linux (tested on RHEL4, PG 8.1.9). Logs on the db side show no relevant information, and neither does pg_restore.
> This seems like the basis for a bug report.

Yup.
Can you set up a reproducible test-case that doesn't involve your data,
just the specific table definitions and test data?

If not, can you send me a copy of the dump (off-list) and I'll see if I
can find something out from it.

//Magnus


Re: [GENERAL] Strange query plan

The columns referenced in the predicate need to be covered by indexes to avert a full table scan (FTS).
Anyone else?
Martin


> Date: Thu, 14 Aug 2008 14:57:09 +0400
> From: dteslenko@gmail.com
> To: pgsql-general@postgresql.org
> Subject: [GENERAL] Strange query plan
>
> Hello!
>
> I have following table:
>
> CREATE TABLE table1 (
> field1 INTEGER NOT NULL,
> field2 INTEGER NOT NULL,
> field3 CHARACTER(30),
> ... some more numeric fields)
>
> I have also those indexes:
>
> CREATE UNIQUE INDEX idx1 ON table1 USING btree (field3, field2, field1)
> CREATE INDEX idx2 ON table1 USING btree (field1, field3)
>
> Then I query this table with something like this:
>
> SELECT SUM(...) FROM table1 WHERE field3 = 'ABC' AND field1 <> 1
> GROUP BY field2
>
> And planner picks up a sequential scan of a table. Why does he?
>
> --
> A: Because it messes up the order in which people normally read text.
> Q: Why is top-posting such a bad thing?
> A: Top-posting.
> Q: What is the most annoying thing in e-mail?
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general




Re: [BUGS] BUG #4350: 'select' acess given to views containing "union all" even though user has no grants

Index: src/backend/optimizer/prep/prepjointree.c
===================================================================
RCS file: /home/hlinnaka/pgcvsrepository/pgsql/src/backend/optimizer/prep/prepjointree.c,v
retrieving revision 1.44
diff -c -r1.44 prepjointree.c
*** src/backend/optimizer/prep/prepjointree.c 4 Oct 2006 00:29:54 -0000 1.44
--- src/backend/optimizer/prep/prepjointree.c 14 Aug 2008 11:50:22 -0000
***************
*** 46,52 ****
static Node *pull_up_simple_union_all(PlannerInfo *root, Node *jtnode,
RangeTblEntry *rte);
static void pull_up_union_leaf_queries(Node *setOp, PlannerInfo *root,
! int parentRTindex, Query *setOpQuery);
static void make_setop_translation_lists(Query *query,
Index newvarno,
List **col_mappings, List **translated_vars);
--- 46,53 ----
static Node *pull_up_simple_union_all(PlannerInfo *root, Node *jtnode,
RangeTblEntry *rte);
static void pull_up_union_leaf_queries(Node *setOp, PlannerInfo *root,
! int parentRTindex, Query *setOpQuery,
! int childRToffset);
static void make_setop_translation_lists(Query *query,
Index newvarno,
List **col_mappings, List **translated_vars);
***************
*** 477,490 ****
{
int varno = ((RangeTblRef *) jtnode)->rtindex;
Query *subquery = rte->subquery;

/*
! * Recursively scan the subquery's setOperations tree and copy the leaf
! * subqueries into the parent rangetable. Add AppendRelInfo nodes for
! * them to the parent's append_rel_list, too.
*/
Assert(subquery->setOperations);
! pull_up_union_leaf_queries(subquery->setOperations, root, varno, subquery);

/*
* Mark the parent as an append relation.
--- 478,511 ----
{
int varno = ((RangeTblRef *) jtnode)->rtindex;
Query *subquery = rte->subquery;
+ int rtoffset;
+ List *rtable;

/*
! * Append the subquery rtable entries to upper query.
! */
! rtoffset = list_length(root->parse->rtable);
!
! /*
! * Append child RTEs to parent rtable.
! *
! * Upper-level vars in subquery are now one level closer to their
! * parent than before. We don't have to worry about offsetting
! * varnos, though, because any such vars must refer to stuff above the
! * level of the query we are pulling into.
! */
! rtable = copyObject(subquery->rtable);
! IncrementVarSublevelsUp_rtable(rtable, -1, 1);
! root->parse->rtable = list_concat(root->parse->rtable, rtable);
!
! /*
! * Recursively scan the subquery's setOperations tree and add
! * AppendRelInfo nodes for leaf subqueries to the parent's
! * append_rel_list.
*/
Assert(subquery->setOperations);
! pull_up_union_leaf_queries(subquery->setOperations, root, varno, subquery,
! rtoffset);

/*
* Mark the parent as an append relation.
***************
*** 500,540 ****
* Note that setOpQuery is the Query containing the setOp node, whose rtable
* is where to look up the RTE if setOp is a RangeTblRef. This is *not* the
* same as root->parse, which is the top-level Query we are pulling up into.
* parentRTindex is the appendrel parent's index in root->parse->rtable.
*/
static void
pull_up_union_leaf_queries(Node *setOp, PlannerInfo *root, int parentRTindex,
! Query *setOpQuery)
{
if (IsA(setOp, RangeTblRef))
{
RangeTblRef *rtr = (RangeTblRef *) setOp;
- RangeTblEntry *rte = rt_fetch(rtr->rtindex, setOpQuery->rtable);
- Query *subquery;
int childRTindex;
AppendRelInfo *appinfo;
- Query *parse = root->parse;
-
- /*
- * Make a modifiable copy of the child RTE and contained query.
- */
- rte = copyObject(rte);
- subquery = rte->subquery;
- Assert(subquery != NULL);
-
- /*
- * Upper-level vars in subquery are now one level closer to their
- * parent than before. We don't have to worry about offsetting
- * varnos, though, because any such vars must refer to stuff above the
- * level of the query we are pulling into.
- */
- IncrementVarSublevelsUp((Node *) subquery, -1, 1);

/*
! * Attach child RTE to parent rtable.
*/
! parse->rtable = lappend(parse->rtable, rte);
! childRTindex = list_length(parse->rtable);

/*
* Build a suitable AppendRelInfo, and attach to parent's list.
--- 521,546 ----
* Note that setOpQuery is the Query containing the setOp node, whose rtable
* is where to look up the RTE if setOp is a RangeTblRef. This is *not* the
* same as root->parse, which is the top-level Query we are pulling up into.
+ *
* parentRTindex is the appendrel parent's index in root->parse->rtable.
+ *
+ * The child RTEs have already been copied to the parent. childRToffset
+ * tells us where in the parent's range table they were copied.
*/
static void
pull_up_union_leaf_queries(Node *setOp, PlannerInfo *root, int parentRTindex,
! Query *setOpQuery, int childRToffset)
{
if (IsA(setOp, RangeTblRef))
{
RangeTblRef *rtr = (RangeTblRef *) setOp;
int childRTindex;
AppendRelInfo *appinfo;

/*
! * Calculate the index in the parent's range table
*/
! childRTindex = childRToffset + rtr->rtindex;

/*
* Build a suitable AppendRelInfo, and attach to parent's list.
***************
*** 566,573 ****
SetOperationStmt *op = (SetOperationStmt *) setOp;

/* Recurse to reach leaf queries */
! pull_up_union_leaf_queries(op->larg, root, parentRTindex, setOpQuery);
! pull_up_union_leaf_queries(op->rarg, root, parentRTindex, setOpQuery);
}
else
{
--- 572,581 ----
SetOperationStmt *op = (SetOperationStmt *) setOp;

/* Recurse to reach leaf queries */
! pull_up_union_leaf_queries(op->larg, root, parentRTindex, setOpQuery,
! childRToffset);
! pull_up_union_leaf_queries(op->rarg, root, parentRTindex, setOpQuery,
! childRToffset);
}
else
{
Index: src/backend/rewrite/rewriteManip.c
===================================================================
RCS file: /home/hlinnaka/pgcvsrepository/pgsql/src/backend/rewrite/rewriteManip.c,v
retrieving revision 1.102
diff -c -r1.102 rewriteManip.c
*** src/backend/rewrite/rewriteManip.c 4 Oct 2006 00:29:56 -0000 1.102
--- src/backend/rewrite/rewriteManip.c 14 Aug 2008 11:50:36 -0000
***************
*** 509,514 ****
--- 509,529 ----
0);
}

+ void
+ IncrementVarSublevelsUp_rtable(List *rtable, int delta_sublevels_up,
+ int min_sublevels_up)
+ {
+ IncrementVarSublevelsUp_context context;
+
+ context.delta_sublevels_up = delta_sublevels_up;
+ context.min_sublevels_up = min_sublevels_up;
+
+ range_table_walker(rtable,
+ IncrementVarSublevelsUp_walker,
+ (void *) &context,
+ 0);
+ }
+

/*
* rangeTableEntry_used - detect whether an RTE is referenced somewhere
Index: src/include/rewrite/rewriteManip.h
===================================================================
RCS file: /home/hlinnaka/pgcvsrepository/pgsql/src/include/rewrite/rewriteManip.h,v
retrieving revision 1.42
diff -c -r1.42 rewriteManip.h
*** src/include/rewrite/rewriteManip.h 5 Mar 2006 15:58:58 -0000 1.42
--- src/include/rewrite/rewriteManip.h 14 Aug 2008 11:38:08 -0000
***************
*** 22,27 ****
--- 22,29 ----
int sublevels_up);
extern void IncrementVarSublevelsUp(Node *node, int delta_sublevels_up,
int min_sublevels_up);
+ extern void IncrementVarSublevelsUp_rtable(List *rtable,
+ int delta_sublevels_up, int min_sublevels_up);

extern bool rangeTableEntry_used(Node *node, int rt_index,
int sublevels_up);
Tom Lane wrote:
> Probably not. But it strikes me that there's another sin of omission
> here: function and values RTEs need to be tweaked too, because they
> contain expressions thst could have uplevel Vars in them. I'm not
> certain such RTEs could appear at top level in a UNION query, but I'm
> not sure they couldn't either.

Hmm. Maybe through a rewrite or something?

We should use range_table_walker, which knows how to descend into all
kinds of RTEs...

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com

Re: [GENERAL] Strange query plan

On Thu, Aug 14, 2008 at 15:30, Peter Eisentraut <peter_e@gmx.net> wrote:
> On Thursday, 14 August 2008, Dmitry Teslenko wrote:
>> SELECT SUM(...) FROM table1 WHERE field3 = 'ABC' AND field1 <> 1
>> GROUP BY field2
>>
>> And planner picks up a sequential scan of a table. Why does he?
>
> Presumably because it thinks it is the best plan, and I see no reason to doubt
> that outright. You might get better performance with an index on field3.
>

Why then doesn't idx2 on field1 and field3 help here?

--
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing in e-mail?


Re: [HACKERS] benchmark farm

Michael Holzman wrote:
> On Wed, Aug 13, 2008 at 7:09 PM, Jaime Casanova wrote:
>
>> any move in this?
>>
>
> I did some changes to pgbench in February and sent them to Andrew. I
> have received no reaction so far.
>
>

Oops. This completely got by me. I'll try to take a look at it RSN.

cheers

andrew


Re: [GENERAL] Postgres 8.3 is not using indexes

"Clemens Schwaighofer" <cs@tequila.co.jp> writes:

> Any tips why this is so?

They don't appear to contain the same data.
If they do, have you run ANALYZE recently?

--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's PostGIS support!


Re: [GENERAL] Strange query plan

On Thursday, 14 August 2008, Dmitry Teslenko wrote:
> SELECT SUM(...) FROM table1 WHERE field3 = 'ABC' AND field1 <> 1
> GROUP BY field2
>
> And planner picks up a sequential scan of a table. Why does he?

Presumably because it thinks it is the best plan, and I see no reason to doubt
that outright. You might get better performance with an index on field3.
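[Editor's note: the point about indexable predicates can be illustrated outside PostgreSQL. The sketch below uses SQLite, whose planner differs from PostgreSQL's, so this is only an analogy: an equality predicate on an index's leading column (field3 in idx1) gives a usable access path, while the inequality `field1 <> 1` on idx2's leading column does not. The `val` column stands in for the elided numeric fields.]

```python
import sqlite3

# Recreate the thread's table and indexes (val stands in for the
# unspecified numeric fields) and inspect the chosen plan.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table1 (field1 INTEGER NOT NULL,
                         field2 INTEGER NOT NULL,
                         field3 CHARACTER(30),
                         val INTEGER);
    CREATE UNIQUE INDEX idx1 ON table1 (field3, field2, field1);
    CREATE INDEX idx2 ON table1 (field1, field3);
""")
plan = conn.execute("""
    EXPLAIN QUERY PLAN
    SELECT SUM(val) FROM table1
    WHERE field3 = 'ABC' AND field1 <> 1
    GROUP BY field2
""").fetchall()
for row in plan:
    print(row[-1])   # e.g. "SEARCH table1 USING INDEX idx1 (field3=?)"
```

Whether PostgreSQL actually uses such an index still depends on its cost estimates; with a highly selective field3 value, an index whose leading column is field3 becomes attractive.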


Re: [GENERAL] Postgres 8.3 is not using indexes

On Thursday, 14 August 2008, Clemens Schwaighofer wrote:
> Why is Postgres not using the indexes in the 8.3 installation.

Might have something to do with the removal of some implicit casts. You
should show us your table definitions.


Re: [pgsql-es-ayuda] Seleccionar último registro entre un grupo

2008/8/14 Raúl Andrés Duque Murillo <ra_duque@yahoo.com.mx>:
> Cordial greetings, colleagues. I have the following problem, and
> although I have solved it, my solution is quite heavy for the number
> of rows I have, so I would like to know whether anyone can think of a
> better alternative or a useful trick:
>
> I have a table more or less like this:
>
> id_parte anno mes valor
> 1 2005 1 5
> 1 2005 2 10
> 2 2008 5 20
> 2 2008 6 30
> 3 2008 4 40
>
> What I want is to get the latest (anno/mes) value for each part. For
> the example, the output would be:
>
> id_parte anno mes valor
> 1 2005 2 10
> 2 2008 6 30
> 3 2008 4 40
>
> For now, what I do is something along these lines:
>
> SELECT tabla.id_parte, tabla.anno, tabla.mes, tabla.valor
> FROM (
> SELECT tabla.id_parte, MAX(tabla.anno*100 + tabla.mes)
> AS AnnoMes
> FROM tabla
> GROUP BY tabla.id_parte
> ) maxtabla INNER JOIN tabla ON tabla.id_parte =
> maxtabla.id_parte AND AnnoMes = (tabla.anno*100 + tabla.mes)
>
> Regards,
>
> RAUL DUQUE
> Bogotá, Colombia


Raul:

Last week someone had a similar problem; if you look through the list
archive it may give you a lead...

Regards,
jchavez
linux User #397972 on http://counter.li.org/
--
TIP 5: Have you read our extensive FAQ?
http://www.postgresql.org/docs/faqs.FAQ.html

Re: [HACKERS] Parsing of pg_hba.conf and authentication inconsistencies

Magnus Hagander wrote:

[about the ability to use different maps for ident auth, gss and krb
auth for example]

>>>> It wouldn't be very easy/clean to do that w/o breaking the existing
>>>> structure of pg_ident though, which makes me feel like using seperate
>>>> files is probably the way to go.

Actually, I may have to take that back. We already have support for
multiple maps in the ident file, I'm not really sure anymore of the case
where this wouldn't be enough :-)

That said, I still think we want to parse pg_hba in the postmaster,
because it allows us to not load known broken files, and show errors
when you actually change the file etc. ;-)

I did code up a POC patch for it, and it's not particularly hard to do.
Mostly it's just moving the codepath from the backend to the postmaster.
I'll clean it up a bit and post it, just so people can see what it looks
like...
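[Editor's note: the validate-before-load behaviour described above can be sketched generically. This is illustrative Python, not the actual postmaster code; the three-field line format is an invented stand-in for pg_hba.conf.]

```python
# Sketch of "parse the whole config up front; refuse to load a broken
# file and keep the previous, known-good configuration in effect".

def parse_config(text):
    """Parse every line first; raise on the first malformed one."""
    entries = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        line = line.split("#", 1)[0].strip()   # strip comments/blanks
        if not line:
            continue
        fields = line.split()
        if len(fields) != 3:
            raise ValueError(f"line {lineno}: expected 3 fields, got {len(fields)}")
        entries.append(tuple(fields))
    return entries

class Server:
    def __init__(self, text):
        self.config = parse_config(text)       # startup must parse cleanly

    def reload(self, text):
        """Apply a new config only if it parses cleanly; else keep the old."""
        try:
            new = parse_config(text)
        except ValueError as err:
            print(f"reload rejected, keeping old config: {err}")
            return False
        self.config = new
        return True

srv = Server("local all trust\n")
srv.reload("local all\n")                      # broken: old config stays
assert srv.config == [("local", "all", "trust")]
```

The benefit is exactly the one named in the mail: errors surface when the file is changed, not later when a backend first needs it.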

//Magnus


[pgsql-es-ayuda] Selecting the latest record within a group

Cordial greetings, colleagues. I have the following problem. Although I have solved it, my solution is quite heavy for the number of records I have, so I would like to know whether anyone can think of a better alternative or a useful trick:

I have a table more or less like this:

id_parte    anno    mes    valor
1              2005    1        5
1              2005    2        10
2              2008    5        20
2              2008    6        30
3              2008    4        40

What I want is to obtain the latest value (Anno/Mes) for each part. For the example, the output would be:

id_parte    anno    mes    valor
1              2005    2        10
2              2008    6        30
3              2008    4        40

For now, what I do is something along these lines:

SELECT tabla.id_parte, tabla.anno, tabla.mes, tabla.valor
FROM    (
                    SELECT tabla.id_parte, MAX(tabla.anno*100 + tabla.mes) AS AnnoMes
                    FROM tabla
                    GROUP BY tabla.id_parte
            ) maxtabla INNER JOIN tabla ON tabla.id_parte = maxtabla.id_parte AND AnnoMes = (tabla.anno*100 + tabla.mes)

Sincerely,

RAUL DUQUE
Bogotá, Colombia
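The same latest-row-per-group logic can be sketched in Python (purely illustrative, using the sample data from the message; in PostgreSQL a `SELECT DISTINCT ON (id_parte) ... ORDER BY id_parte, anno DESC, mes DESC` query is a commonly suggested alternative to the self-join):

```python
rows = [
    # (id_parte, anno, mes, valor)
    (1, 2005, 1, 5),
    (1, 2005, 2, 10),
    (2, 2008, 5, 20),
    (2, 2008, 6, 30),
    (3, 2008, 4, 40),
]

latest = {}
for id_parte, anno, mes, valor in rows:
    key = anno * 100 + mes  # same anno*100 + mes trick as the query
    if id_parte not in latest or key > latest[id_parte][0]:
        latest[id_parte] = (key, anno, mes, valor)

result = sorted((p, a, m, v) for p, (_, a, m, v) in latest.items())
```

A single pass keeping the maximum key per group avoids re-scanning the table for the join, which is what makes the original query heavy on large row counts.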

Re: [HACKERS] gsoc, oprrest function for text search take 2

Heikki Linnakangas wrote:
> Jan Urbański wrote:
>> So right now the idea is to:
>> (1) pre-sort STATISTIC_KIND_MCELEM values
>> (2) build an array of pointers to detoasted values in tssel()
>> (3) use binary search when looking for MCELEMs during tsquery analysis
>
> Sounds like a plan. In (2), it's even better to detoast the values
> lazily. For a typical one-word tsquery, the binary search will only look
> at a small portion of the elements.

Hm, how can I do that? TOAST is still a bit of black magic to me... Do you
mean I should stick to having Datums in TextFreq? And use DatumGetTextP
in bsearch() (assuming I'll get rid of qsort())? I wanted to avoid that,
so as not to detoast the same value multiple times, but it's true: a
binary search won't touch most elements.
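The lazy approach being discussed can be sketched in Python (a stand-in for the C code, with zlib compression playing the role of TOAST): the binary search decompresses only the elements it actually probes, caching each one so nothing is detoasted twice:

```python
import zlib

# Pretend these are toasted (compressed) lexemes, pre-sorted by their
# decompressed value, as the pre-sorting plan requires.
words = sorted(["apple", "banana", "cherry", "grape", "melon"])
toasted = [zlib.compress(w.encode()) for w in words]

cache = {}
def detoast(i):
    """Decompress element i on first access only."""
    if i not in cache:
        cache[i] = zlib.decompress(toasted[i]).decode()
    return cache[i]

def lookup(target):
    lo, hi = 0, len(toasted)
    while lo < hi:
        mid = (lo + hi) // 2
        if detoast(mid) < target:
            lo = mid + 1
        else:
            hi = mid
    return lo < len(toasted) and detoast(lo) == target

found = lookup("cherry")
touched = len(cache)  # how many elements were ever decompressed
```

For a single-word lookup only O(log n) elements are touched, which is why detoasting everything up front would waste most of the work.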

> Another thing is, how significant is the time spent in tssel() anyway,
> compared to actually running the query? You ran pgbench on EXPLAIN,
> which is good to see where in tssel() the time is spent, but if the time
> spent in tssel() is say 1% of the total execution time, there's no point
> optimizing it further.

Changed the pgbench script to
select * from manual where tsvector @@ to_tsquery('foo');
and the parameters to
pgbench -n -f tssel-bench.sql -t 1000 postgres

and got

number of clients: 1
number of transactions per client: 1000
number of transactions actually processed: 1000/1000
tps = 12.238282 (including connections establishing)
tps = 12.238606 (excluding connections establishing)

samples % symbol name
174731 31.6200 pglz_decompress
88105 15.9438 tsvectorout
17280 3.1271 pg_mblen
13623 2.4653 AllocSetAlloc
13059 2.3632 hash_search_with_hash_value
10845 1.9626 pg_utf_mblen
10335 1.8703 internal_text_pattern_compare
9196 1.6641 index_getnext
9102 1.6471 bttext_pattern_cmp
8075 1.4613 pg_detoast_datum_packed
7437 1.3458 LWLockAcquire
7066 1.2787 hash_any
6811 1.2325 AllocSetFree
6623 1.1985 pg_qsort
6439 1.1652 LWLockRelease
5793 1.0483 DirectFunctionCall2
5322 0.9631 _bt_compare
4664 0.8440 tsCompareString
4636 0.8389 .plt
4539 0.8214 compare_two_textfreqs

But I think I'll go with pre-sorting anyway; it feels cleaner and neater.
--
Jan Urbanski
GPG key ID: E583D7D2

ouden estin


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[GENERAL] Strange query plan

Hello!

I have following table:

CREATE TABLE table1 (
field1 INTEGER NOT NULL,
field2 INTEGER NOT NULL,
field3 CHARACTER(30),
... some more numeric fields)

I have also those indexes:

CREATE UNIQUE INDEX idx1 ON table1 USING btree (field3, field2, field1)
CREATE INDEX idx2 ON table1 USING btree (field1, field3)

Then I query this table with something like this:

SELECT SUM(...) FROM table1 WHERE field3 = 'ABC' AND field1 <> 1
GROUP BY field2

And the planner picks a sequential scan of the table. Why does it?

--
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing in e-mail?

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

[COMMITTERS] stackbuilder - wizard: Count download stats using the existing

Log Message:
-----------
Count download stats using the existing postgresql.org infrastructure.

Modified Files:
--------------
wizard:
App.cpp (r1.27 -> r1.28)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/stackbuilder/wizard/App.cpp.diff?r1=1.27&r2=1.28)
StackBuilder.cpp (r1.6 -> r1.7)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/stackbuilder/wizard/StackBuilder.cpp.diff?r1=1.6&r2=1.7)
wizard/include:
Config.h (r1.4 -> r1.5)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/stackbuilder/wizard/include/Config.h.diff?r1=1.4&r2=1.5)
StackBuilder.h (r1.6 -> r1.7)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/stackbuilder/wizard/include/StackBuilder.h.diff?r1=1.6&r2=1.7)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

Re: [GENERAL] Newbie [CentOS 5.2] service postgresql initdb

Scott Marlowe wrote:
> On Tue, Aug 12, 2008 at 9:25 AM, Daneel <dan@dan.dan> wrote:
>> While going through
>> http://wiki.postgresql.org/wiki/Detailed_installation_guides
>> and typing
>> service postgresql start
>> as root I got
>> "/var/lib/pgsql/data is missing. Use "service postgresql initdb" to
>> initialize the cluster first."
>>
>> When I run
>> service postgresql initdb
>> I get
>> "se: [FAILED]".
>> However, /var/lib/pqsql/data is created and user postgres owns it.
>>
>> But then I run
>> service postgresql start
>> and the very same error occurs..
>
> Is /var/lib/pgsql/data a sym link to some other drive?

No, it isn't.

> It's likely
> you're being bitten by SELinux. either disable it (google is your
> friend) or reconfigure it to allow postgres to access the other drive
> as a service.
>

I'm going to learn more about SELinux later, after I learn more Linux
basics and get comfortable with them.

Thank you anyway

Daneel

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [HACKERS] Join Removal/ Vertical Partitioning

"Simon Riggs" <simon@2ndquadrant.com> writes:

> On Thu, 2008-06-26 at 13:42 -0400, Tom Lane wrote:
>> Simon Riggs <simon@2ndquadrant.com> writes:
>> > We can check for removal of a rel by...
>
> OT comment: I just found a blog about Oracle's optimizer magic, which is
> quite interesting. I notice there is a post there about join removal,
> posted about 12 hours later than my original post. Seems to validate the
> theory anyway. Our posts have a wider audience than may be apparent :-)

Well turnabout's fair play... what's the URL?

--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Get trained by Bruce Momjian - ask me about EnterpriseDB's PostgreSQL training!

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] gsoc, oprrest function for text search take 2

Jan Urbański <j.urbanski@students.mimuw.edu.pl> writes:

> Heikki Linnakangas wrote:
>> Speaking of which, a lot of time seems to be spent on detoasting. I'd like to
>> understand that better. Where is the detoasting coming from?
>
> Hmm, maybe bttext_pattern_cmp does some detoasting? It calls
> PG_GETARG_TEXT_PP(), which in turn calls pg_detoast_datum_packed(). Oh, and
> also I think that compare_lexeme_textfreq() uses DatumGetTextP() and that also
> does detoasting.

DatumGetTextP() will detoast packed data (ie, 1-byte length headers) whereas
DatumGetTextPP will only detoast compressed or externally stored data. I
suspect you're seeing the former.

> The root of all evil could be keeping a Datum in the TextFreq array, and not
> a "text *", which is something you pointed out earlier and I apparently
> didn't understand.

Well, it doesn't really matter which type. If you store Datums which are
already detoasted, then DatumGetTextP and DatumGetTextPP will just be no-ops
anyway. If you store packed data (from DatumGetTextPP), then it's probably
safer to store it as Datums, so that if you need to pass it to any functions
which don't expect packed data, they'll untoast it.

--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's Slony Replication support!

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] WIP: patch to create explicit support for semi and anti joins

On Wed, 2008-08-13 at 23:12 -0400, Tom Lane wrote:

> We're just trying to provide better performance for certain common SQL
> idioms.

Sounds good, but can you explain how this will help? Not questioning it,
just after more information about it.

I'm halfway through the join-removal patch, so this work might extend
join elimination to semi/anti joins as well (hopefully), or it might
(hopefully not) prevent join elimination altogether. I'll let you
know how I get on.

--
Simon Riggs www.2ndQuadrant.com
PostgreSQL Training, Services and Support


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [GENERAL] Newbie [CentOS 5.2] service postgresql initdb

Martin Marques wrote:
> Daneel escribió:
>> Daneel wrote:
>>> While going through
>>> http://wiki.postgresql.org/wiki/Detailed_installation_guides
>>> and typing
>>> service postgresql start
>>> as root I got
>>> "/var/lib/pgsql/data is missing. Use "service postgresql initdb" to
>>> initialize the cluster first."
>>>
>>> When I run
>>> service postgresql initdb
>>> I get
>>> "se: [FAILED]".
>>> However, /var/lib/pqsql/data is created and user postgres owns it.
>>>
>>> But then I run
>>> service postgresql start
>>> and the very same error occurs..
>>>
>>> Daneel
>>
>> Should add that the version is 8.3.1 and I've installed it using RPM
>> packages... Thanks in advance for any tip...
>
> Where did you get the rpm packages?
>

I downloaded them from rpmfind.net. They were the Fedora 9 i386 version.

I reinstalled CentOS yesterday and during installation I checked to
include PostgreSQL 8.1.11. Now it seems to work properly.

Daneel

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [GENERAL] Newbie [CentOS 5.2] service postgresql initdb

Scott Marlowe wrote:
> PLEASE DON'T WRITE TO THIS LIST WITH A FAKE EMAIL ADDRESS.
>
> It's been discussed before, but it's rude and counterproductive. Just
> set up a filter / account that drops everything coming in, but don't
> stick the rest of us with your broken email behaviour

I'm sorry, I just followed a guide on setting up newsgroups. I've put it
right. I didn't realize it could cause any difficulties for others.

Daneel

>
> On Tue, Aug 12, 2008 at 9:25 AM, Daneel <dan@dan.dan> wrote:
>> While going through
>> http://wiki.postgresql.org/wiki/Detailed_installation_guides
>> and typing
>> service postgresql start
>> as root I got
>> "/var/lib/pgsql/data is missing. Use "service postgresql initdb" to
>> initialize the cluster first."
>>
>> When I run
>> service postgresql initdb
>> I get
>> "se: [FAILED]".
>> However, /var/lib/pqsql/data is created and user postgres owns it.
>>
>> But then I run
>> service postgresql start
>> and the very same error occurs..
>>
>> Daneel
>>
>> --
>> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
>> To make changes to your subscription:
>> http://www.postgresql.org/mailpref/pgsql-general
>>
>

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [PERFORM] Incorrect estimates on correlated filters

"Craig Ringer" <craig@postnewspapers.com.au> writes:

> It strikes me that there are really two types of query hint possible here.
>
> One tells the planner (eg) "prefer a merge join here".
>
> The other gives the planner more information that it might not otherwise
> have to work with, so it can improve its decisions. "The values used in
> this join condition are highly correlated".

This sounds familiar:

http://article.gmane.org/gmane.comp.db.postgresql.devel.general/55730/match=hints

Plus ça change...

--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's On-Demand Production Tuning

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

Re: [HACKERS] Join Removal/ Vertical Partitioning

On Thu, 2008-06-26 at 13:42 -0400, Tom Lane wrote:
> Simon Riggs <simon@2ndquadrant.com> writes:
> > We can check for removal of a rel by...

OT comment: I just found a blog about Oracle's optimizer magic, which is
quite interesting. I notice there is a post there about join removal,
posted about 12 hours later than my original post. Seems to validate the
theory anyway. Our posts have a wider audience than may be apparent :-)

--
Simon Riggs www.2ndQuadrant.com
PostgreSQL Training, Services and Support


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] gsoc, oprrest function for text search take 2

Jan Urbański wrote:
> So right now the idea is to:
> (1) pre-sort STATISTIC_KIND_MCELEM values
> (2) build an array of pointers to detoasted values in tssel()
> (3) use binary search when looking for MCELEMs during tsquery analysis

Sounds like a plan. In (2), it's even better to detoast the values
lazily. For a typical one-word tsquery, the binary search will only look
at a small portion of the elements.

Another thing is, how significant is the time spent in tssel() anyway,
compared to actually running the query? You ran pgbench on EXPLAIN,
which is good to see where in tssel() the time is spent, but if the time
spent in tssel() is say 1% of the total execution time, there's no point
optimizing it further.

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[HACKERS] proposal sql: labeled function params

Hello

I propose enhancing the current syntax to allow a label to be specified
for any function parameter:

fcename(expr [as label], ...)
fcename(colname, ...)

I would like custom functions to be able to behave the same way as the
xmlforest function:
postgres=# select xmlforest(a) from foo;
xmlforest
-----------
<a>10</a>
(1 row)

postgres=# select xmlforest(a as b) from foo;
xmlforest
-----------
<b>10</b>
(1 row)

Actually, I am not sure what the best way is for PL languages to access
this information. Using some system variable would need a new column in
pg_proc, because collecting the labels takes some time and in 99% of
cases we don't need them. So I prefer a system function that returns the
labels of the outer function call. Like:

-- test
create function getlabels() returns varchar[] as $$select '{name,
age}'::varchar[]$$ language sql immutable;

create or replace function json(variadic varchar[])
returns varchar as $$
select '[' || array_to_string(
array(
select (getlabels())[i]|| ':' || $1[i]
from generate_subscripts($1,1) g(i))
,',') || ']'
$$ language sql immutable strict;

postgres=# select json('Zdenek' as name,'30' as age);
json
----------------------
[name:Zdenek,age:30]
(1 row)

postgres=# select json(name, age) from person;
json
----------------------
[name:Zdenek,age:30]
(1 row)
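As a loose analogy (not the proposed SQL implementation), keyword arguments in Python show the same idea of labels travelling with the values, much as getlabels() would expose them to the function body:

```python
def json_like(**labeled):
    # Each argument arrives with its label attached, so the function
    # can emit "label:value" pairs without any out-of-band lookup.
    return "[" + ",".join("%s:%s" % (k, v) for k, v in labeled.items()) + "]"

out = json_like(name="Zdenek", age=30)
```

The proposal's `json('Zdenek' as name, '30' as age)` is essentially asking SQL for the same calling convention.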

There are two possibilities:
a) collect labels at parse time
b) collect labels at executor time

(a) needs info in pg_proc but is simpler; (b) is a little more
difficult, but doesn't need any changes to the system catalog. I am
thinking about (b) now.

Necessary changes:
=================
Labels are searched for in the parse tree fcinfo->flinfo->fn_expr. I
need to insert the label into the parse tree, so it needs a special
node, labeled_param. For getting a column reference I need to put the
current exprstate into fcinfo. The getlabels() function should take its
code from the ExecEvalVar function.

Any notes, ideas?

Pavel Stehule

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] gsoc, oprrest function for text search take 2

Heikki Linnakangas wrote:
> Jan Urbański wrote:
>> Not good... Shall I try sorting pg_statistics arrays on text values
>> instead of frequencies?
>
> Yeah, I'd go with that. If you only do it for the new
> STATISTIC_KIND_MCV_ELEMENT statistics, you shouldn't need to change any
> other code.

OK, will do.

>> BTW: I just noticed some text_to_cstring calls; they came from
>> elog(DEBUG1)s that I have in my code. But they couldn't have skewed the
>> results much, could they?
>
> Well, text_to_cstring was consuming 1.1% of the CPU time on its own, and
> presumably some of the AllocSetAlloc overhead is attributable to that as
> well. And perhaps some of the detoasting as well.
>
> Speaking of which, a lot of time seems to be spent on detoasting. I'd
> like to understand that better. Where is the detoasting coming from?

Hmm, maybe bttext_pattern_cmp does some detoasting? It calls
PG_GETARG_TEXT_PP(), which in turn calls pg_detoast_datum_packed(). Oh,
and also I think that compare_lexeme_textfreq() uses DatumGetTextP() and
that also does detoasting. The root of all evil could be keeping a Datum
in the TextFreq array, and not a "text *", which is something you
pointed out earlier and I apparently didn't understand.

So right now the idea is to:
(1) pre-sort STATISTIC_KIND_MCELEM values
(2) build an array of pointers to detoasted values in tssel()
(3) use binary search when looking for MCELEMs during tsquery analysis

Jan

--
Jan Urbanski
GPG key ID: E583D7D2

ouden estin


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[GENERAL] Postgres 8.3 is not using indexes

Hi,

i just stumbled on something very strange.

I have here a Postgres 8.3 and a Postgres 8.2 installation, as I am in
the process of merging. Both are from the debian/testing tree, both have
the same configuration file.

In my DB where I found out this trouble I have two tables, I do a very
simple join over both. The foreign key in the second table has an index.

Postgres 8.2 gives me this out:

explain SELECT DISTINCT email FROM email e, email_group eg WHERE
e.email_group_id = eg.email_group_i
QUERY PLAN
--------------------------------------------------------------------------------------------------------------
Unique (cost=65.16..66.81 rows=85 width=27)
-> Sort (cost=65.16..65.98 rows=330 width=27)
Sort Key: e.email
-> Merge Join (cost=0.00..51.35 rows=330 width=27)
Merge Cond: (eg.email_group_id = e.email_group_id)
-> Index Scan using email_group_pkey on email_group eg
(cost=0.00..12.91 rows=44 width=4)
-> Index Scan using idx_email_email_group_id on email e
(cost=0.00..34.21 rows=330 width=31)

Postgres 8.3 returns this:


explain SELECT DISTINCT email FROM email e, email_group eg WHERE
e.email_group_id = eg.email_group_id;
QUERY PLAN
---------------------------------------------------------------------------------------
Unique (cost=268688.95..274975.13 rows=51213 width=26)
-> Sort (cost=268688.95..271832.04 rows=1257236 width=26)
Sort Key: e.email
-> Hash Join (cost=2.12..85452.48 rows=1257236 width=26)
Hash Cond: (e.email_group_id = eg.email_group_id)
-> Seq Scan on email e (cost=0.00..68163.36
rows=1257236 width=30)
-> Hash (cost=1.50..1.50 rows=50 width=4)
-> Seq Scan on email_group eg (cost=0.00..1.50
rows=50 width=4)

I have reindexed the tables, vacuumed (ANALYZE) the whole DB, and
checked the config for differing settings, but I am at a loss here.
Why is Postgres not using the indexes in the 8.3 installation?

I tried this on a different DB on the same server and on a different
server, and I always get a seq scan back, never a use of the index.

Any tips why this is so?

--
[ Clemens Schwaighofer -----=====:::::~ ]
[ IT Engineer/Manager ]
[ E-Graphics Communications, TEQUILA\ Japan IT Group ]
[ 6-17-2 Ginza Chuo-ku, Tokyo 104-8167, JAPAN ]
[ Tel: +81-(0)3-3545-7706 Fax: +81-(0)3-3545-7343 ]
[ http://www.tequila.jp ]

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [HACKERS] [PERFORM] autovacuum: use case for indenpedent TOAST table autovac settings

On Wed, 2008-08-13 at 21:30 -0400, Tom Lane wrote:
> Alvaro Herrera <alvherre@commandprompt.com> writes:
> > Tom Lane wrote:
> >> It seems like we'll want to do it somehow. Perhaps the cleanest way is
> >> to incorporate toast-table settings in the reloptions of the parent
> >> table. Otherwise dump/reload is gonna be a mess.
>
> > My question is whether there is interest in actually having support for
> > this, or should we just inherit the settings from the main table. My
> > gut feeling is that this may be needed in some cases, but perhaps I'm
> > overengineering the thing.
>
> It seems reasonable to inherit the parent's settings by default, in any
> case. So you could do that now and then extend the feature later if
> there's real demand.

Yeah, I can't really see a reason why you'd want to treat toast tables
differently with regard to autovacuuming. It's one more setting to get
wrong, so no thanks.

--
Simon Riggs www.2ndQuadrant.com
PostgreSQL Training, Services and Support


--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

Re: [GENERAL] Design decision advice

On Thu, Aug 14, 2008 at 2:55 AM, Craig Ringer
<craig@postnewspapers.com.au> wrote:
> William Temperley wrote:

>> A. Two databases, one for transaction processing and one for
>> modelling. At arbitrary intervals (days/weeks/months) all "good" data
>> will be moved to the modelling database.
>> B. One database, where all records will either be marked "in" or
>> "out". The application layer has to exclude all data that is out.
>
> You could also exclude "out" data at the database level with appropriate
> use of (possibly updatable) views.
>
> If you put your raw tables in one schema and put your valid-data-only
> query views in another schema, you can set your schema search path so
> applications cannot see the raw tables containing not-yet-validated data.
>
> You also have the option of using materialized views, where a trigger
> maintains the "good" tables by pushing data over from the raw tables
> when it's approved.
>
> That gives you something between your options "A" and "B" to consider,
> at least.
>
> --
> Craig Ringer
>
>

Thanks Craig -

I didn't know about the search_path setting - a gem of knowledge. I'd
overlooked views too.

I'm using Django btw, which is great except for limited support for
multiple DBs, so the single DB option will be much easier.

Search_path gives me quite an elegant solution - I can direct my
read-only modelling users to their own schema ($user) plus the
modelling schema, where the views are kept.
Admin users get directed to their $user schema.

This leaves me with the views/materialised views question.
Oh yeah, and hacking Django to allow different DB users in one project.
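The trigger-maintained "good" table Craig describes can be sketched with Python's standard-library sqlite3 module (illustrative only; in PostgreSQL the trigger would be written in PL/pgSQL and the raw table hidden in a separate schema via search_path):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE raw_records (id INTEGER PRIMARY KEY,
                              payload TEXT,
                              approved INTEGER DEFAULT 0);
    CREATE TABLE good_records (id INTEGER PRIMARY KEY, payload TEXT);

    -- Push a row into the "good" table the moment it is approved.
    CREATE TRIGGER promote AFTER UPDATE OF approved ON raw_records
    WHEN NEW.approved = 1
    BEGIN
        INSERT INTO good_records (id, payload) VALUES (NEW.id, NEW.payload);
    END;
""")

conn.execute("INSERT INTO raw_records (payload) VALUES ('pending')")
conn.execute("INSERT INTO raw_records (payload) VALUES ('validated')")
conn.execute("UPDATE raw_records SET approved = 1 WHERE payload = 'validated'")

good = conn.execute("SELECT payload FROM good_records").fetchall()
```

The modelling users then only ever query the "good" table (or a view over it), so unvalidated data stays invisible without any application-level filtering.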

Cheers,

Will

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

[GENERAL] Re: pg_restore fails on Windows

Magnus Hagander wrote:
> Tom Tom wrote:
> >> Tom Tom wrote:
> >>> Hello,
> >>>
> >>> We have a very strange problem when restoring a database on Windows XP.
> >>> The PG version is 8.1.10
> >>> The backup was made with the pg_dump on the same machine.
> >>>
> >>> pg_restore -F c -h localhost -p 5432 -U postgres -d "configV3" -v
> >> "c:\Share\POSTGRES.backup"
> >>> pg_restore: connecting to database for restore
> >>> Password:
> >>> pg_restore: creating SCHEMA public
> >>> pg_restore: creating COMMENT SCHEMA public
> >>> pg_restore: creating PROCEDURAL LANGUAGE plpgsql
> >>> pg_restore: creating SEQUENCE hi_value
> >>> pg_restore: executing SEQUENCE SET hi_value
> >>> pg_restore: creating TABLE hibconfigelement
> >>> pg_restore: creating TABLE hibrefconfigbase
> >>> pg_restore: creating TABLE hibrefconfigreference
> >>> pg_restore: creating TABLE hibtableattachment
> >>> pg_restore: creating TABLE hibtableattachmentxmldata
> >>> pg_restore: creating TABLE hibtableelementversion
> >>> pg_restore: creating TABLE hibtableelementversionxmldata
> >>> pg_restore: creating TABLE hibtablerootelement
> >>> pg_restore: creating TABLE hibtablerootelementxmldata
> >>> pg_restore: creating TABLE hibtableunversionedelement
> >>> pg_restore: creating TABLE hibtableunversionedelementxmldata
> >>> pg_restore: creating TABLE hibtableversionedelement
> >>> pg_restore: creating TABLE hibtableversionedelementxmldata
> >>> pg_restore: creating TABLE versionedelement_history
> >>> pg_restore: creating TABLE versionedelement_refs
> >>> pg_restore: restoring data for table "hibconfigelement"
> >>> pg_restore: restoring data for table "hibrefconfigbase"
> >>> pg_restore: restoring data for table "hibrefconfigreference"
> >>> pg_restore: restoring data for table "hibtableattachment"
> >>> pg_restore: restoring data for table "hibtableattachmentxmldata"
> >>> pg_restore: [archiver (db)] could not execute query: no result from server
> >>> pg_restore: *** aborted because of error
> >>>
> >>> The restore unexpectedly fails on hibtableattachmentxmldata table, which is
> as
> >> follows:
> >>> CREATE TABLE hibtablerootelementxmldata
> >>> (
> >>> xmldata_id varchar(255) NOT NULL,
> >>> xmldata text
> >>> )
> >>> WITHOUT OIDS;
> >>>
> >>> and contains thousands of rows with text field having even 40MB, encoded in
> >> UTF8.
> >>> The database is created as follows:
> >>>
> >>> CREATE DATABASE "configV3"
> >>> WITH OWNER = postgres
> >>> ENCODING = 'UTF8'
> >>> TABLESPACE = pg_default;
> >>>
> >>>
> >>> The really strange thing is that the db restore runs OK on Linux
> >>> (tested on RHEL4, PG version 8.1.9).
> >>> The pg_restore output is _not_ very descriptive, but I suspect some
> >>> dependency on OS system libraries (encoding), or maybe it is also
> >>> related to the size of the CLOB field. Anyway, we are now effectively
> >>> without any possibility of backing up our database, which is VERY
> >>> serious.
> >>> Have you ever come across something similar to this?
> >> Check what you have in your server logs (pg_log directory) and the
> >> eventlog around this time. There is probably a better error message
> >> available there.
> >>
> >> //Magnus
> >>
> >
> > Thank you for your hint.
> > The server logs do not display any errors, except for
> >
> > 2008-08-08 11:14:16 CEST LOG: checkpoints are occurring too frequently (14
> seconds apart)
> > 2008-08-08 11:14:16 CEST HINT: Consider increasing the configuration
> parameter "checkpoint_segments".
> > 2008-08-08 11:14:38 CEST LOG: checkpoints are occurring too frequently (22
> seconds apart)
> > 2008-08-08 11:14:38 CEST HINT: Consider increasing the configuration
> parameter "checkpoint_segments".
> > 2008-08-08 11:14:57 CEST LOG: checkpoints are occurring too frequently (19
> seconds apart)
> > 2008-08-08 11:14:57 CEST HINT: Consider increasing the configuration
> parameter "checkpoint_segments".
> > 2008-08-08 11:15:14 CEST LOG: checkpoints are occurring too frequently (17
> seconds apart)
> > 2008-08-08 11:15:14 CEST HINT: Consider increasing the configuration
> parameter "checkpoint_segments".
> > 2008-08-08 11:15:36 CEST LOG: checkpoints are occurring too frequently (22
> seconds apart)
> > 2008-08-08 11:15:36 CEST HINT: Consider increasing the configuration
> parameter "checkpoint_segments".
> > 2008-08-08 11:15:56 CEST LOG: checkpoints are occurring too frequently (20
> seconds apart)
> > 2008-08-08 11:15:56 CEST HINT: Consider increasing the configuration
> parameter "checkpoint_segments".
> > 2008-08-08 11:16:16 CEST LOG: checkpoints are occurring too frequently (20
> seconds apart)
> > 2008-08-08 11:16:16 CEST HINT: Consider increasing the configuration
> parameter "checkpoint_segments".
> >
> > The warnings disappeared when the "checkpoint_segments" value was
> > increased to 10. The restore still failed, however :(
> > The Windows eventlogs show no errors, just informational messages about
> starting/stopping the pg service.
>
> That's rather strange. There really should be *something* in the logs
> there. Hmm.
>
> Does this happen for just this one dump, or does it happen for all dumps
> you create on this machine (for example, can you dump single tables and
> get those to come through - thus isolating the issue to one table or so)?
>

So after all I was able to isolate the issue to one table/one row. Now I have one small dump that (when trying to restore) reliably fails on Windows systems (tested on 3 machines with WinXP, PG 8.1.10) and goes through fine on Linux (tested on RHEL4, PG 8.1.9). Logs on the db side show no relevant information, and neither does pg_restore.
This seems to be the basis for a bug report.

Tomas

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [HACKERS] gsoc, oprrest function for text search take 2

Jan Urbański wrote:
> Not good... Shall I try sorting pg_statistics arrays on text values
> instead of frequencies?

Yeah, I'd go with that. If you only do it for the new
STATISTIC_KIND_MCV_ELEMENT statistics, you shouldn't need to change any
other code.

Hmm. There has been discussion of raising default_statistics_target, and
one reason why we've been afraid to do so has been that it increases the
cost of planning (there are some O(n^2) algorithms in there). Pre-sorting
the STATISTIC_KIND_MCV array as well, and replacing the linear searches
with binary searches, would alleviate that, which would be nice.

> BTW: I just noticed some text_to_cstring calls; they came from
> elog(DEBUG1)s that I have in my code. But they couldn't have skewed the
> results much, could they?

Well, text_to_cstring was consuming 1.1% of the CPU time on its own, and
presumably some of the AllocSetAlloc overhead is attributable to that as
well. And perhaps some of the detoasting as well.

Speaking of which, a lot of time seems to be spent on detoasting. I'd
like to understand that better. Where is the detoasting coming from?

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[COMMITTERS] stackbuilder - wizard: Tell open to wait for pkg installations to

Log Message:
-----------
Tell open to wait for pkg installations to complete.

Modified Files:
--------------
wizard:
App.cpp (r1.26 -> r1.27)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/stackbuilder/wizard/App.cpp.diff?r1=1.26&r2=1.27)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

Re: [pgsql-ru-general] how to disable a foreign key

Good afternoon,

Make all the FKs that you want to disable temporarily DEFERRED (see the
manual on foreign keys). Then, at the start of a DDL transaction, simply
issue SET CONSTRAINTS ALL DEFERRED; and database integrity will be
checked only at the very end, at COMMIT.
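The deferred-check behaviour can be demonstrated with Python's standard-library sqlite3 module, where `PRAGMA defer_foreign_keys` plays the role of `SET CONSTRAINTS ALL DEFERRED` (in PostgreSQL the constraint must also have been created `DEFERRABLE`):

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # manual transactions
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
    CREATE TABLE parent (id INTEGER PRIMARY KEY);
    CREATE TABLE child  (id INTEGER PRIMARY KEY,
                         parent_id INTEGER REFERENCES parent(id));
    INSERT INTO parent VALUES (1);
    INSERT INTO child  VALUES (10, 1);
""")

conn.execute("BEGIN")
conn.execute("PRAGMA defer_foreign_keys = ON")  # check only at COMMIT
# Temporarily violates the FK: child 10 now points at a missing parent.
conn.execute("DELETE FROM parent WHERE id = 1")
conn.execute("INSERT INTO parent VALUES (1)")   # integrity restored
conn.execute("COMMIT")                          # passes: checked here

ok = conn.execute("SELECT COUNT(*) FROM parent").fetchone()[0]
```

With an immediate constraint the DELETE would fail on the spot; deferring moves the check to COMMIT, which is exactly what the restructuring transaction needs.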

--
Regards,
Ivan

2008/8/14 Shestakov Nikolay <nshestakov@naumen.ru>:
> Good day!
>
> When changing the structure of a database it is sometimes necessary to
> disable a foreign key temporarily. In Oracle this is done like this:
>
> ALTER TABLE table MODIFY CONSTRAINT constraint ENABLE/DISABLE
>
> How is this done in PostgreSQL?
>
> --
> Sent via pgsql-ru-general mailing list (pgsql-ru-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-ru-general
>

--
Sent via pgsql-ru-general mailing list (pgsql-ru-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-ru-general

Re: [GENERAL] Referential integrity vulnerability in 8.3.3

Richard Huxton wrote, On 15-Jul-2008 15:19:
> Sergey Konoplev wrote:
>> Yes it is. But it is a way to break integrity, because rows from table2
>> still refer to deleted rows from table1. So it conflicts with the
>> ideology, doesn't it?
>
> Yes, but I'm not sure you could have a sensible behaviour-modifying
> BEFORE trigger without this loophole. Don't forget, ordinary users can't
> work around this - you need suitable permissions.
>
> You could rewrite PG's foreign-key code to check the referencing table
> after the delete is supposed to have taken place, and make sure it has.
> That's going to halve the speed of all your foreign-key checks though.

I did, long ago.

For this to work you need to bypass the MVCC rules (to some extent). You
CANNOT do this with SQL statements, as there is no infrastructure for it.

For now you are bound to the native foreign keys, or to triggers written
in C using (unsupported?) functions.

- Joris

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

[pgsql-ru-general] how to disable a foreign key

Good day!

When changing the structure of a database it is sometimes necessary to
disable a foreign key temporarily. In Oracle this is done like this:

ALTER TABLE table MODIFY CONSTRAINT constraint ENABLE/DISABLE

How is this done in PostgreSQL?

--
Sent via pgsql-ru-general mailing list (pgsql-ru-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-ru-general

[INTERFACES] PostgreSQL arrays and DBD

Hello.

I create a table:

CREATE TABLE groups (
  group_id serial PRIMARY KEY,
  name varchar(64) UNIQUE NOT NULL,
  guests integer[] DEFAULT '{}'
)

I add a new record to the table:

INSERT INTO groups (name) VALUES ('My friends');

Now the table contains 1 record:

| group_id |    name    | guests
+----------+------------+--------
|        1 | My friends | {}

I read the new record from the table using DBI:

my $sth = $dbh->prepare(qq/SELECT * FROM groups/);
$sth->execute();
my (@guests, $group);
push(@guests, $group) while $group = $sth->fetchrow_hashref(); # Line 4
print $guests[0]->{guests}->[0]; # Why ({group_id=>1, name=>'My friends', guests=>[0]}) ?

Output of the script:

Argument "" isn't numeric in null operation at ./guestmanager.pl line 4
0

DBD should return a reference to an empty array, but it returned a reference to an array containing one element (0). How can I get this result instead:

({group_id=>1, name=>'My friends', guests=>[]})

PS
Version of DBD::Pg is 2.9.0.
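
For what it's worth, the symptom (a one-element array instead of an
empty one) looks like the classic empty-string-split pitfall in the
driver's array-literal parsing -- an assumption about the cause, not
something confirmed from the DBD::Pg source. A Python sketch of the
difference:

```python
def parse_pg_array_naive(literal):
    # Buggy sketch: stripping the braces and splitting unconditionally
    # always yields at least one element, so '{}' becomes [''].
    # (In Perl, '' used as a number is 0 -- matching both the bogus
    # element 0 and the "isn't numeric" warning in the report.)
    return literal.strip('{}').split(',')

def parse_pg_array(literal):
    # Correct sketch: treat an empty body as an empty array.
    body = literal.strip('{}')
    return [] if body == '' else body.split(',')

assert parse_pg_array_naive('{}') == ['']       # one bogus element
assert parse_pg_array('{}') == []               # what you expected
assert parse_pg_array('{1,2}') == ['1', '2']
```

If this really is a driver bug, a newer DBD::Pg release may already fix
it.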

Re: [ADMIN] DB Dump Size

On Thu, Aug 14, 2008 at 12:06:53PM +1000, steve@outtalimits.com.au wrote:
> Hi all,
>
> I am curious as to why a pg dump of database "name" is 2.9gig. But is
> measured at 1.66gig by:
> SELECT pg_database_size(pg_database.datname) AS db_size FROM pg_database
> WHERE pg_database.datname='name' ;
>
> This dump was about 1 gig around 12 months ago.

Which options do you use for pg_dump? And what version of PostgreSQL are
you running?

In general it's not that strange for an uncompressed dump to be larger
than the database size: plain SQL dumps are much less space-efficient
than a DBMS can be when it stores the data on disk. But of course, there
are also indices to consider.

Have you tried pg_dump -Fc?

> I am performing a monthly vacuum full on the database and a nightly vacuum
> all

That should only impact the pg_database_size.

Re: [GENERAL] cannot use result of (insert .. returning)

Hello

you can wrap the INSERT statement in a function. Then you can do anything
with the result:

create table f(a timestamp);

postgres=# select * from (insert into f values(current_timestamp)
returning *) x where x.a > now();
ERROR: syntax error at or near "into"
LINE 1: select * from (insert into f values(current_timestamp) retur...
^
create or replace function if() returns setof f as $$begin return
query insert into f values(current_timestamp) returning *; return;
end$$ language plpgsql;

postgres=# select * from if() where a > now();
a
---
(0 rows)

regards
Pavel Stehule

2008/8/14 Dale Harris <itsupport@jonkers.com.au>:
> Hello,
>
>
>
> I'm having the same issues as dvs had in message thread
> http://archives.postgresql.org/pgsql-general/2008-05/msg01117.php as I want
> to be able to use the result from an INSERT INTO table(...) VALUES(...)
> RETURNING new_row_ID.
>
>
>
> I would ideally like to be able to capture the RETURNING value into a
> variable to use immediately. Does anyone have a solution?
>
>
>
> Dale.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

[COMMITTERS] pginstaller - pginst: Increase some char array sizes that are arguably

Log Message:
-----------
Increase some char array sizes that are arguably too small

Tags:
----
REL8_3_STABLE

Modified Files:
--------------
pginst/ca:
pginstca.c (r1.119.2.6 -> r1.119.2.7)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pginstaller/pginst/ca/pginstca.c.diff?r1=1.119.2.6&r2=1.119.2.7)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers