Sunday, June 8, 2008

[COMMITTERS] pgsql: Fix typo in comment.

Log Message:
-----------
Fix typo in comment.

Modified Files:
--------------
pgsql/src/backend/storage/lmgr:
proc.c (r1.199 -> r1.200)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/backend/storage/lmgr/proc.c?r1=1.199&r2=1.200)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

[pgsql-www] Message-ID should surely not be shown as a mailto: URL

So looking at page source for, eg,
http://archives.postgresql.org/pgsql-hackers/2008-06/msg00338.php

I see

<li><strong>From</strong>: Tom Lane &lt;<a href="mailto:tgl@DOMAIN.HIDDEN">tgl(at)sss(dot)pgh(dot)pa(dot)us</a>&gt;</li>
<li><strong>To</strong>: <a href="mailto:pgsql-hackers@DOMAIN.HIDDEN">pgsql-hackers(at)postgresql(dot)org</a></li>
<li><strong>Subject</strong>: Re: We have a launch abort ... PG update releases will be delayed</li>
<li><strong>Date</strong>: Sat, 07 Jun 2008 11:45:56 -0400</li>
<li><strong>Message-id</strong>: &lt;<a href="mailto:20506.1212853556@DOMAIN.HIDDEN">20506(dot)1212853556(at)sss(dot)pgh(dot)pa(dot)us</a>&gt;</li>

This seems outright silly. In the first place, I do not see the value
of displaying mailto: URLs containing intentionally-broken addresses.
In the second, even if the domain names were fixed, there is no way on
god's green earth that mailto: a Message-ID is going to work.

Hm, I wonder if sloppiness of this sort accounts for the remarkable
prevalence in my mail logs of spam-sign like this:

Jun 9 00:44:06 sss2 sm-mta[4062]: m594i5Ns004062: <19570.1142971720@sss.pgh.pa.us>... User unknown

I don't pretend to know what is the approved way to deal with these
issues, but *this* can't be best practice.

regards, tom lane

--
Sent via pgsql-www mailing list (pgsql-www@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-www

Re: [HACKERS] libpq support for arrays and composites

Andrew Chernow wrote:
>
> libpqtypes already implemented this. It is a different approach but
> provides the same functionality; with the inclusion of being able to
> handle every data type. libpqtypes uses the PGresult API for
> composites and arrays, rather than adding a new set of functions.
>
>

Yes, I thought you'd say that :-)

This has some significant limitations - for example (quoting from your docs):

Arrays are only handled using binary format. This means that any type
used as an array element must be put and gotten in binary format. If a
user-defined type does not implement a send and recv function in the
backend, it can not be used as an array element.

That makes it quite useless for my intended purpose.

I also am not particularly enamoured of the libpqtypes way of doing things, which feels rather foreign to me.

Lastly, the idea is to provide extra facilities to libpq clients without requiring any extra library.

cheers

andrew

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] handling TOAST tables in autovacuum

Alvaro Herrera <alvherre@commandprompt.com> writes:
> Euler Taveira de Oliveira wrote:
>> And based on your proposal, it'll be necessary to add reloptions to toast
>> tables too. IMO, we should keep that code as simple as possible.

> Sure, what's the problem with that? We only need to make sure that
> ALTER TABLE works for setting reloptions for toast tables.

... actually, the problem is going to be "how do you get pg_dump to dump
and reload such settings"? The toast tables are not going to have the
same names after dump/reload.

regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] libpq support for arrays and composites

Andrew Dunstan wrote:
>
> One of the areas where libpq seems to be severely lacking is in handling
> arrays and composites in query results. I'd like to set about rectifying
> that.
>
> Ideally this would mean that drivers using libpq could easily and
> reliably deliver such objects suitably structured in their particular
> languages (e.g. lists and hashes in Perl).
>
> One complicating factor I see is that there is no protocol level support
> for anything other than simple objects - each data value is simply a
> stream of bytes of a known length. We would therefore need some pretty
> robust processing to pick apart structured objects.
>
> We'll need a new API to handle such objects. I'm thinking of something
> like:
>
> PQarray * PQgetArray( const PGresult *res, int row_number, int
> column_number);
> int PQgetArrayNDims(PQarray * array);
> int PQgetArrayLower(PQarray * array, int dim);
> int PQgetArrayUpper(PQarray * array, int dim);
> int PQgetArrayElementLength(PQarray * array, int dim1, ...);
> bool PQgetArrayElementIsNull(PQarray * array, int dim1, ...);
> char * PQgetArrayElement(PQarray * array, int dim1, ...);
>
> PQcomposite * PQgetComposite(const PGresult *res, int row_number, int
> column_number);
> PQcomposite * PQgetArrayElementComposite(PQarray * array, int dim1, ...);
> int PQgetCompositeNFields(PQcomposite * composite);
> char * PQgetCompositeFName(PQcomposite * composite, int fnumber);
> int PQgetCompositeFNumber(PQcomposite * composite, char * fname);
> Oid PQgetCompositeFType(PQcomposite * composite, int fnumber);
> int PQgetCompositeFieldLength(PQcomposite * composite, int fnumber);
> bool PQgetCompositeFieldIsNull(PQcomposite * composite, int fnumber);
> char * PQgetCompositeField(PQcomposite * composite, int fnumber);
>
> Not sure if we need analogs for PQfformat, PQfmod or PQfsize - I suspect
> not, but right now I'm just thinking out loud.
>
> Thoughts? Is this worth doing?
>
> cheers
>
> andrew
>
>
>

libpqtypes already implemented this. It is a different approach but provides
the same functionality; with the inclusion of being able to handle every data
type. libpqtypes uses the PGresult API for composites and arrays, rather than
adding a new set of functions.

To support this, one must be able to convert all data types (unless you are only
supporting text results) because composites can be made up of any data type.

Simple arrays:
http://libpqtypes.esilo.com/man3/pqt-specs.html#array

Composite arrays:
http://libpqtypes.esilo.com/man3/pqt-composites.html

EXAMPLE OF GETTING A COMPOSITE:
(taken from http://libpqtypes.esilo.com/ home page)

/* Let's get a composite.
* CREATE TYPE simple AS (a int4, t text);
*/
PGint4 i4;
PGtext text;
PGresult *res, *simple;
int resultFormat = 1;

/* Your composites need to be registered */
PQregisterTypeHandler(conn, "simple", NULL, NULL);

/* 2nd arg, PGparam, can be NULL if there are no query params.
* Composites require binary results, so we can't use PQexec().
*/
res = PQparamExec(conn, NULL, "SELECT my_simple FROM t", resultFormat);
if(!res)
fprintf(stderr, "ERROR: %s\n", PQgeterror());

/* Get the simple composite, which is exposed as a PGresult. */
PQgetf(res, 0, "%simple", 0, &simple);
PQclear(res); /* no longer needed */

/* Get the simple composite attributes from the simple result.
* Reference fields by name by using a '#' rather than a '%'.
* The field names are the composite attribute names.
*/
PQgetf(simple, 0, "#int4 #text", "a", &i4, "t", &text);
PQclear(simple);

--
Andrew Chernow
eSilo, LLC
every bit counts
http://www.esilo.com/

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] handling TOAST tables in autovacuum

Euler Taveira de Oliveira wrote:

> FYI, I have a WIP patch to remove pg_autovacuum in favor of reloptions.

Really? Please send it my way to review/apply as soon as you have it
ready, independently of what we do with toast tables.

> Let's keep it simple. Why not just add a toast_enabled flag (disabled
> by default) in pg_autovacuum? If it's set, then main and toast tables are
> processed by autovac.

Actually I think your proposal is more cumbersome to use and less
flexible, because you can't set specific values for the other options
for toast tables.

> And based on your proposal, it'll be necessary to add reloptions to toast
> tables too. IMO, we should keep that code as simple as possible.

Sure, what's the problem with that? We only need to make sure that
ALTER TABLE works for setting reloptions for toast tables.

--
Alvaro Herrera

http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [CORE] [HACKERS] Automating our version-stamping a bit better

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

- --On Sunday, June 08, 2008 21:27:03 -0400 Tom Lane <tgl@sss.pgh.pa.us> wrote:

> Andrew Dunstan <andrew@dunslane.net> writes:
>> Tom Lane wrote:
>>> I'm tempted to suggest letting the script invoke autoconf, too,
>>> but that would require standardizing where to find the correct
>>> version of autoconf for each branch; so it might not be such a
>>> great idea.
>
>> Unfortunately that's true. Maybe we could agree on using an alias for
>> the right version of autoconf, but it seems likely to be error prone.
>
> Actually, the way I do things is that my setup script for working
> with each particular version tree includes adjusting $PATH so that
> the right autoconf gets found just by saying "autoconf". If everyone
> who might tag releases wanted to do it the same way, then we could
> just let the script say "autoconf". But I'm not sure anybody else
> likes that plan. What I was thinking was just to have the script
> print out something like
>
> Tagged tree as 8.3.4
> Don't forget to run autoconf 2.59 before committing

I like that one ...

- --
Marc G. Fournier Hub.Org Hosting Solutions S.A. (http://www.hub.org)
Email . scrappy@hub.org MSN . scrappy@hub.org
Yahoo . yscrappy Skype: hub.org ICQ . 7615664
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2.0.9 (FreeBSD)

iEYEARECAAYFAkhMj4MACgkQ4QvfyHIvDvNWAACfeEuX8PCwbPgZLutpya859T+5
sDYAoKgTnLoypgDOwr4TSYVd+G5Dn+kn
=Cl6d
-----END PGP SIGNATURE-----


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] handling TOAST tables in autovacuum

Alvaro Herrera wrote:

> We've been making noises about dealing with TOAST tables as separate
> entities in autovacuum for some time now. So here's a proposal:
>
Let's keep it simple. Why not just add a toast_enabled flag (disabled
by default) in pg_autovacuum? If it's set, then main and toast tables are
processed by autovac.

FYI, I have a WIP patch to remove pg_autovacuum in favor of reloptions.
And based on your proposal, it'll be necessary to add reloptions to toast
tables too. IMO, we should keep that code as simple as possible.


--
Euler Taveira de Oliveira

http://www.timbira.com/


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] Automating our version-stamping a bit better

Andrew Dunstan <andrew@dunslane.net> writes:
> Tom Lane wrote:
>> I'm tempted to suggest letting the script invoke autoconf, too,
>> but that would require standardizing where to find the correct
>> version of autoconf for each branch; so it might not be such a
>> great idea.

> Unfortunately that's true. Maybe we could agree on using an alias for
> the right version of autoconf, but it seems likely to be error prone.

Actually, the way I do things is that my setup script for working
with each particular version tree includes adjusting $PATH so that
the right autoconf gets found just by saying "autoconf". If everyone
who might tag releases wanted to do it the same way, then we could
just let the script say "autoconf". But I'm not sure anybody else
likes that plan. What I was thinking was just to have the script
print out something like

Tagged tree as 8.3.4
Don't forget to run autoconf 2.59 before committing

regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] Automating our version-stamping a bit better

Tom Lane wrote:
>
> I'm tempted to suggest letting the script invoke autoconf, too,
> but that would require standardizing where to find the correct
> version of autoconf for each branch; so it might not be such a
> great idea.
>
>

Unfortunately that's true. Maybe we could agree on using an alias for
the right version of autoconf, but it seems likely to be error prone.

cheers

andrew

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[COMMITTERS] pgsql: tag 8.1.13

Log Message:
-----------

tag 8.1.13

Tags:
----
REL8_1_STABLE

Modified Files:
--------------
pgsql:
configure (r1.461.2.24 -> r1.461.2.25)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/configure?r1=1.461.2.24&r2=1.461.2.25)
configure.in (r1.431.2.25 -> r1.431.2.26)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/configure.in?r1=1.431.2.25&r2=1.431.2.26)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

[HACKERS] Automating our version-stamping a bit better

So while tagging the upcoming releases, I got annoyed once again about
what a tedious, error-prone bit of donkeywork it is. You've got to find
and update the sub-version numbers, and *not* any chance occurrence of
the same strings (eg s/20/21/g for version 7.4.21 would've mangled some
copyright dates). And the changes tend to move around a little bit in
each back branch, making it even easier to blow it. ISTM we should get
the machine to do it for us.

I propose to write a little perl script to be used like this:

cd top-level-of-tree
sh src/tools/version_stamp 22
cvs commit -m "Stamp release 7.4.22"

The script takes just one argument, which could be "devel", "betaN",
"rcN", or just a minor version number "N". Note the assumption that the
script knows the major version. Since we expect to adjust the script
from time to time for version changes anyway, I don't see why we
shouldn't have the major version stored right in the script. Tagging a
new development branch after a release is split off would then look like

cd src/tools
edit version_stamp, update a variable assignment at its head
cvs commit -m "Update version_stamp for 8.5"
cd ../..
sh src/tools/version_stamp devel
cvs commit -m "Stamp CVS HEAD as 8.5devel"

Note that this is not all that helpful if we just do it in CVS HEAD.
I propose adding the script to all active branches back to 7.4, with
suitable adjustments for each branch as needed.

I think we should probably include configure.in in the set of files
that this script updates, and get rid of the current two-step
arrangement where Marc stamps configure.in/configure after somebody
else stamps everything else. Marc's tarball-wrapping process would
thus look roughly like

sh src/tools/version_stamp 4
autoconf
cvs commit -m "Stamp release 8.3.4"
cvs tag REL8_3_4
... build tarball ...

I'm tempted to suggest letting the script invoke autoconf, too,
but that would require standardizing where to find the correct
version of autoconf for each branch; so it might not be such a
great idea.

Thoughts, objections?

regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] pg_dump restore time and Foreign Keys

On Sunday 08 June 2008 20:12:15 Tom Lane wrote:
> Robert Treat <xzilla@users.sourceforge.net> writes:
> > and i'm sure no one is against that idea, but you're never going to be
> > able to match the performance of just avoiding the check.
>
> We'll never be able to match the performance of not having transactions,
> either, but the community has never for a moment considered having a
> "no transactions" mode.
>

It's unclear what a "no transaction" mode would mean, but I'd be willing to
guess some people have considered aspects of it (we've just never had
agreement).

--
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[COMMITTERS] pgsql: tag 8.3.3

Log Message:
-----------

tag 8.3.3

Tags:
----
REL8_3_STABLE

Modified Files:
--------------
pgsql:
configure (r1.578.2.6 -> r1.578.2.7)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/configure?r1=1.578.2.6&r2=1.578.2.7)
configure.in (r1.546.2.5 -> r1.546.2.6)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/configure.in?r1=1.546.2.5&r2=1.546.2.6)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

[COMMITTERS] pgsql: tag 8.2.9

Log Message:
-----------

tag 8.2.9

Tags:
----
REL8_2_STABLE

Modified Files:
--------------
pgsql:
configure (r1.523.2.17 -> r1.523.2.18)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/configure?r1=1.523.2.17&r2=1.523.2.18)
configure.in (r1.490.2.18 -> r1.490.2.19)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/configure.in?r1=1.490.2.18&r2=1.490.2.19)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

Re: [HACKERS] libpq support for arrays and composites

Tom Lane wrote:
> Andrew Dunstan <andrew@dunslane.net> writes:
>
>> One complicating factor I see is that there is no protocol level support
>> for anything other than simple objects - each data value is simply a
>> stream of bytes of a known length. We would therefore need some pretty
>> robust processing to pick apart structured objects.
>>
>
> Well, it's not that hard, the quoting/escaping rules for array_out and
> record_out are well defined.
>
> Are you intending that these operations support both text and binary
> results?
>

I'm a bit open on that.

> The array accessors with ... parameter lists strike me as a bit
> dangerous, because there is no way at all to verify that the caller is
> passing the expected number of dimensions. Can't that be made tighter?
>

Well, the only alternative I can think of is to make the client walk the
array one dimension at a time. Something like:

PQarray * PQgetInnerArray(PQarray * array, int dim);

then when we're down to the leaf level, we could have:

int PQgetArrayElementLength(PQarray * array, int dim);
bool PQgetArrayElementIsNull(PQarray * array, int dim);
char * PQgetArrayElement(PQarray * array, int dim);

That strikes me as somewhat more cumbersome, so I guess the question is
whether it's worth it. It probably fits the slightly clunky feel of libpq.

> Also you need to spell out the error handling conventions for each of
> these.
>
> I think you missed some "free()" operations.
>


Oh, yes, both of these are certainly true. This isn't really even a
proposal yet, more a sketch that would lead to a proposal. I'm hoping to
get some other input too, before settling this down, especially from
driver writers.
> It might also be useful to provide some functions that form an array or
> composite value from per-element strings, ie, the converse of the
> de-construction routines. Here I'd be happy to skip the binary case.
>
>
>

Yeah, that had occurred to me. Will think about it more, although it
could possibly be done as a separate project, too.

cheers

andrew

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] handling TOAST tables in autovacuum

Alvaro Herrera <alvherre@commandprompt.com> writes:
> The point here is that if the user disables autovac for the main table,
> then it's expected that it is automagically disabled for the toast table
> as well, for the usual case where they are disabling it because the
> table is too big.

Hmm, good point. OK, two passes it is. (I thought about remembering
the toast table rows in memory so as not to scan the catalog twice,
but I'm not sure you really save much that way.)

Another thing to think about here is locking: I believe you need to get
a vacuum-type lock on the parent table, not only the toast table, so
vacuuming a toast table without any knowledge of which table is its
parent ain't gonna fly anyway.

regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[COMMITTERS] pgsql: tag 8.0.17

Log Message:
-----------
tag 8.0.17

Tags:
----
REL8_0_STABLE

Modified Files:
--------------
pgsql:
configure (r1.424.4.24 -> r1.424.4.25)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/configure?r1=1.424.4.24&r2=1.424.4.25)
configure.in (r1.398.4.25 -> r1.398.4.26)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/configure.in?r1=1.398.4.25&r2=1.398.4.26)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

[COMMITTERS] pgsql: tag 7.4.21

Log Message:
-----------
tag 7.4.21

Tags:
----
REL7_4_STABLE

Modified Files:
--------------
pgsql:
configure (r1.310.2.32 -> r1.310.2.33)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/configure?r1=1.310.2.32&r2=1.310.2.33)
configure.in (r1.301.2.31 -> r1.301.2.32)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/configure.in?r1=1.301.2.31&r2=1.301.2.32)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

Re: [HACKERS] handling TOAST tables in autovacuum

Tom Lane wrote:
> Alvaro Herrera <alvherre@commandprompt.com> writes:
> > The only change of some consideration is that we will need two passes
> > over pg_class to get the list of relations to vacuum, instead of one as
> > we do currently. The problem is that we first need to fetch the
> > (heap relid, toast relid) mapping before attempting to figure out if any
> > given TOAST table needs vacuuming. This is because we want to be using
> > the main table's pg_autovacuum, and we can't get at that unless we know
> > the main relid.
>
> Umm ... is it chiseled in stone someplace that toast tables shouldn't
> have their own pg_autovacuum entries? Seems like that might be a
> reasonable component of a "whole nine yards" approach.

No, but I think it's a bit awkward for users to follow _only_ its own
entry. I forgot to mention that in the patch I currently have, what
autovacuum does is try to get the TOAST table's own pg_autovacuum entry,
and if that fails, get the main rel's entry.

The point here is that if the user disables autovac for the main table,
then it's expected that it is automagically disabled for the toast table
as well, for the usual case where they are disabling it because the
table is too big. Automatically processing the toast table would be
completely unexpected, and most likely unwelcome.

Of course, for the even rarer cases when you want to disable it for the
main rel and enable it for the toast table, you can do that too. (I
can't think of a case where this would be useful though.)

--
Alvaro Herrera

http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [pgsql-advocacy] PostgreSQL derivatives

On Sun, Jun 8, 2008 at 5:21 PM, Josh Berkus <josh@agliodbs.com> wrote:
> I wouldn't classify GridSQL as MPP. It's strictly PP.

Hmm, how did you come to that conclusion? From every definition of
MPP I can find, GridSQL meets the requirements.

GridSQL:
- Acts as a single, large-scale system by means of shared-nothing
clustering (as does Greenplum IIRC)
- Partitions and executes independent units-of-work in parallel across
multiple independent microprocessors
- Is capable of scaling to hundreds (if not thousands) of nodes

What definition are you using, because I can't seem to find it in my top 5?

--
Jonah H. Harris, Sr. Software Architect | phone: 732.331.1324
EnterpriseDB Corporation | fax: 732.331.1301
499 Thornall Street, 2nd Floor | jonah.harris@enterprisedb.com
Edison, NJ 08837 | http://www.enterprisedb.com/

--
Sent via pgsql-advocacy mailing list (pgsql-advocacy@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-advocacy

Re: [HACKERS] pg_dump restore time and Foreign Keys

Robert Treat <xzilla@users.sourceforge.net> writes:
> and i'm sure no one is against that idea, but you're never going to be able to
> match the performance of just avoiding the check.

We'll never be able to match the performance of not having transactions,
either, but the community has never for a moment considered having a
"no transactions" mode.

regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] libpq support for arrays and composites

Andrew Dunstan <andrew@dunslane.net> writes:
> One complicating factor I see is that there is no protocol level support
> for anything other than simple objects - each data value is simply a
> stream of bytes of a known length. We would therefore need some pretty
> robust processing to pick apart structured objects.

Well, it's not that hard, the quoting/escaping rules for array_out and
record_out are well defined.
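
For illustration only, here is a toy sketch (not existing libpq or backend
code) of how a client might pick apart a one-dimensional array_out string
in text format. Real code would also have to handle NULL elements,
multiple dimensions, type-specific delimiters, and server encodings:

#include <stdio.h>

/* Split a one-dimensional array literal such as {1,2,"a,b","say \"hi\""}
 * into its elements, honoring double quotes and backslash escapes. */
static void
split_1d_array(const char *s)
{
    char elem[256];

    if (*s != '{')
        return;                         /* not an array literal */
    s++;

    while (*s != '\0' && *s != '}')
    {
        size_t n = 0;

        if (*s == '"')                  /* double-quoted element */
        {
            s++;
            while (*s != '\0' && *s != '"')
            {
                if (*s == '\\' && s[1] != '\0')
                    s++;                /* backslash escapes the next char */
                if (n < sizeof(elem) - 1)
                    elem[n++] = *s;
                s++;
            }
            if (*s == '"')
                s++;
        }
        else                            /* unquoted element */
        {
            while (*s != '\0' && *s != ',' && *s != '}')
            {
                if (n < sizeof(elem) - 1)
                    elem[n++] = *s;
                s++;
            }
        }
        elem[n] = '\0';
        printf("element: %s\n", elem);

        if (*s == ',')
            s++;
    }
}

int
main(void)
{
    split_1d_array("{1,2,\"a,b\",\"say \\\"hi\\\"\"}");
    return 0;
}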

Are you intending that these operations support both text and binary
results?

The array accessors with ... parameter lists strike me as a bit
dangerous, because there is no way at all to verify that the caller is
passing the expected number of dimensions. Can't that be made tighter?
Also you need to spell out the error handling conventions for each of
these.

I think you missed some "free()" operations.

It might also be useful to provide some functions that form an array or
composite value from per-element strings, ie, the converse of the
de-construction routines. Here I'd be happy to skip the binary case.

regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[HACKERS] libpq support for arrays and composites

One of the areas where libpq seems to be severely lacking is in handling
arrays and composites in query results. I'd like to set about rectifying
that.

Ideally this would mean that drivers using libpq could easily and
reliably deliver such objects suitably structured in their particular
languages (e.g. lists and hashes in Perl).

One complicating factor I see is that there is no protocol level support
for anything other than simple objects - each data value is simply a
stream of bytes of a known length. We would therefore need some pretty
robust processing to pick apart structured objects.

We'll need a new API to handle such objects. I'm thinking of something like:

PQarray * PQgetArray( const PGresult *res, int row_number, int
column_number);
int PQgetArrayNDims(PQarray * array);
int PQgetArrayLower(PQarray * array, int dim);
int PQgetArrayUpper(PQarray * array, int dim);
int PQgetArrayElementLength(PQarray * array, int dim1, ...);
bool PQgetArrayElementIsNull(PQarray * array, int dim1, ...);
char * PQgetArrayElement(PQarray * array, int dim1, ...);

PQcomposite * PQgetComposite(const PGresult *res, int row_number, int
column_number);
PQcomposite * PQgetArrayElementComposite(PQarray * array, int dim1, ...);
int PQgetCompositeNFields(PQcomposite * composite);
char * PQgetCompositeFName(PQcomposite * composite, int fnumber);
int PQgetCompositeFNumber(PQcomposite * composite, char * fname);
Oid PQgetCompositeFType(PQcomposite * composite, int fnumber);
int PQgetCompositeFieldLength(PQcomposite * composite, int fnumber);
bool PQgetCompositeFieldIsNull(PQcomposite * composite, int fnumber);
char * PQgetCompositeField(PQcomposite * composite, int fnumber);

Not sure if we need analogs for PQfformat, PQfmod or PQfsize - I suspect
not, but right now I'm just thinking out loud.
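
To make the intended usage a bit more concrete, here is a rough,
hypothetical sketch of a driver walking a one-dimensional array column
with the accessors proposed above (none of these functions exist in
libpq today; "res" is a PGresult from a query, and the cleanup call at
the end is an assumption, not part of the proposal):

/* Hypothetical usage of the proposed API; purely illustrative. */
PQarray *arr = PQgetArray(res, 0, 0);        /* row 0, column 0 */
int lower = PQgetArrayLower(arr, 1);         /* bounds of dimension 1 */
int upper = PQgetArrayUpper(arr, 1);
int i;

for (i = lower; i <= upper; i++)
{
    if (PQgetArrayElementIsNull(arr, i))
        printf("[%d] = NULL\n", i);
    else
        printf("[%d] = %s\n", i, PQgetArrayElement(arr, i));
}
/* PQfreeArray(arr);   assumed cleanup function, not in the list above */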

Thoughts? Is this worth doing?

cheers

andrew

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] Overhauling GUCS

On Sunday 08 June 2008 19:07:21 Gregory Stark wrote:
> "Joshua D. Drake" <jd@commandprompt.com> writes:
> > On Fri, 2008-06-06 at 20:19 -0400, Tom Lane wrote:
> >> Robert Treat <xzilla@users.sourceforge.net> writes:
> >>
> >> Actually, the reason it's still 10 is that the effort expended to get it
> >> changed has been *ZERO*. I keep asking for someone to make some
> >> measurements, do some benchmarking, anything to make a plausible case
> >> for a specific higher value as being a reasonable place to set it.
> >>
> >> The silence has been deafening.
> >
> > Not surprising really. It is a simple adjustment to make and it also is
> > easy to spot when its a problem. However it is not trivial to test for
> > (in terms of time and effort). I know 10 is wrong and so do you. If you
> > don't I am curious why I see so many posts from you saying, "Your
> > estimates are off, what is your default_statistics_target?" with yet
> > even more responses saying, "Uhh 10."
>
> Ah, but we only ever hear about the cases where it's wrong of course. In
> other words even if we raised it to some optimal value we would still have
> precisely the same experience of seeing only posts on list about it being
> insufficient.
>

The flip side to this is that we're not trying to find the perfect setting,
we're just trying to determine a number that will cause more benefit than
harm compared to the number we have now. While I am sure there are cases
where 100 is too low as well, I cannot recall ever having seen someone
suggest lowering the default_stats_target to something less than 100. (I
now sit back and wait for someone to comb the archives, just to find that
one time.)

--
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] pg_dump restore time and Foreign Keys

On Saturday 07 June 2008 16:22:56 Tom Lane wrote:
> Simon Riggs <simon@2ndquadrant.com> writes:
> > Perhaps we need a GUC that says expert_mode = on. ... Another idea
> > might be to make such command options superuser only, to ensure the
> > power is available, yet only in the hands of, by-definition, the trusted
> > few.
>
> This all seems pretty useless, as the sort of user most likely to shoot
> himself in the foot will also always be running as superuser.
>

yeah, i'm not a big fan of "set enable_footgun=true" since the people likely
to get tripped up are going to blindly enable these modes.

otoh, if we do such a thing, i would be a big fan of calling
it "enable_footgun" :-)

> I'd much rather see us expend more effort on speeding up the checks
> than open holes in the system.
>

and i'm sure no one is against that idea, but you're never going to be able to
match the performance of just avoiding the check.

--
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [pgadmin-hackers] Crash at right mouse click in data grid

Dave Page wrote:
> On Sat, May 31, 2008 at 12:46 AM, Robins Tharakan <tharakan@gmail.com> wrote:
>
>> In particular, the bug report in the April conversation, said that the crash
>> is due to IsRunning() check that Erwin is pointing to.
>
> Oh yeah, that one :-(. I'm beginning to think we could use a bug
> tracker (and I hate Bugzilla, before anyone suggests it!).
>

Trac would be a good one: wiki, bug tracker and source browser. Seems a
good deal :)

> I'm still stumped about that now, but I've worked around it by
> checking if the refresh button is enabled as we already do in a few
> other similar places. The data refresh code enables/disables that for
> safety so it should be a reliable indicator of whether or not a thread
> is running.
>


--
Guillaume.

http://www.postgresqlfr.org

http://dalibo.com

--
Sent via pgadmin-hackers mailing list (pgadmin-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgadmin-hackers

Re: [pgadmin-support] pgadmin 1.8.4 Upgrade

On Sun, Jun 8, 2008 at 8:32 PM, Rommel the iCeMAn <icecrew@gmail.com> wrote:
>> Better yet, wait a few days for
>> PostgreSQL 8.3.3 which will come with pgAdmin 1.8.4 and upgrade the
>> PostgreSQL package.
>
> Oh Geez!!!! Now he tells us!!! LOL!!!! :-)

You didn't notice that pgAdmin upgrades usually come a few days before
PostgreSQL updates? You need to start paying more attention :-p


--
Dave Page
EnterpriseDB UK: http://www.enterprisedb.com

--
Sent via pgadmin-support mailing list (pgadmin-support@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgadmin-support

Re: [pgadmin-support] pgadmin 1.8.4 Upgrade



On Sun Jun 8, 2008 Dave Page wrote:
> You cannot upgrade the copy of pgAdmin bundled with PostgreSQL in that
> way (it can only be upgraded using the updated PostgreSQL installer).
> Just run a regular installation of pgAdmin - it will install in
> parallel and work just fine. Better yet, wait a few days for
> PostgreSQL 8.3.3 which will come with pgAdmin 1.8.4 and upgrade the
> PostgreSQL package.

Thanks. I thought this was a special upgrade pkg. I installed by
double-clicking on the MSI pkg so I can check it out. Can always
uninstall when PG 8.3.3 arrives.

Excellent program!!

Thanks again,
Dennis

Re: [GENERAL] libpq.so.4

Oliver Kohll wrote:
> I have the same issue as this poster with libpq.so.4:
> http://www.nabble.com/8.3.0-upgrade-td16093803.html
>
> In short, I've upgraded to 8.3.1 from 8.1 on RHEL 4 (with
> some CentOS packages). I have apps with dependencies of
> libpq.so.4 but this is no longer available. 8.3.1 provides
> libpq.so.5 and the compat-libs provide libpq.so.3.

Strange; does anybody know why?

> At the moment maintaining package management integrity is not
> a must - getting back to 100% package-controlled files can
> wait until the machine is replaced. Upgrading the dependent
> software is not an option.
> With that in mind, any suggestions on options?
>
> What I've thought of trying so far is
> 1) creating a symlink called libpq.so.4 towards libpq.so.5 -
> slightly dangerous due to possible API changes?

Don't even think about it.

> 2) extracting a copy of libpq.so.4 from the previous RPM
> release of postgres and manually uploading it

You can do that if all else fails.

> 3) building a custom compat package - I don't know how to do
> this though.

Then that's probably not an option for you...

Yours,
Laurenz Albe

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [pgadmin-support] pgadmin 1.8.4 Upgrade

> Better yet, wait a few days for
> PostgreSQL 8.3.3 which will come with pgAdmin 1.8.4 and upgrade the
> PostgreSQL package.

Oh Geez!!!! Now he tells us!!! LOL!!!! :-)

Rommel the iCeMAn,
Barbados, Caribbean.

--
Sent via pgadmin-support mailing list (pgadmin-support@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgadmin-support

Re: [PATCHES] Patch for dependency traversal during DROP

I wrote:
> The attached patch rewrites DROP recursion according to my sketch here:
> http://archives.postgresql.org/pgsql-hackers/2008-05/msg00301.php

> I was afraid while writing the patch that it might be too slow due to
> reliance on simple linear list searching in ObjectAddresses lists ---
> that means that deleting N objects is O(N^2),

I did some investigation with gprof. In a test case involving dropping
about 13000 objects (basically, replicating the regression "public"
schema 10 times with a common owner, and then doing DROP OWNED BY),
I found that object_address_present_add_flags() accounted for about 4
seconds of CPU out of a total elapsed runtime of 60 seconds.
Extrapolating, that function would be accounting for a third of the
runtime at 100K objects and 85% of the runtime at 1M objects. So unless
anyone foresees people routinely dropping millions of objects at a time,
it seems we don't need to bother improving the ObjectAddresses search
algorithm.

> Another possible objection to this patch is that it takes an awful lot
> of locks on things that we never locked before; a large DROP operation
> could easily run out of locktable shared memory when it did not before.
> That's fixable by increasing max_locks_per_transaction, but I wonder
> whether there will be any push-back about it.

This, on the other hand, might be a real problem --- I had to double the
default value of max_locks_per_transaction to run the 13K-objects
example.

I'm toying with the idea of not taking a deletion lock when traversing
an INTERNAL dependency, on the grounds that no one could be deleting
the dependent object anyway unless they have lock on its owner object ---
which we already have. This would make for a noticeable reduction in
the number of new locks taken as a result of this patch; for instance
we'd not bother locking the rowtype of a relation when dropping the
relation. Can anyone think of a case where this would go wrong?

regards, tom lane

--
Sent via pgsql-patches mailing list (pgsql-patches@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-patches

Re: [pgadmin-hackers] Crash at right mouse click in data grid

On Sat, May 31, 2008 at 12:46 AM, Robins Tharakan <tharakan@gmail.com> wrote:

> In particular, the bug report in the April conversation, said that the crash
> is due to IsRunning() check that Erwin is pointing to.

Oh yeah, that one :-(. I'm beginning to think we could use a bug
tracker (and I hate Bugzilla, before anyone suggests it!).

I'm still stumped about that now, but I've worked around it by
checking if the refresh button is enabled as we already do in a few
other similar places. The data refresh code enables/disables that for
safety so it should be a reliable indicator of whether or not a thread
is running.

--
Dave Page
EnterpriseDB UK: http://www.enterprisedb.com

--
Sent via pgadmin-hackers mailing list (pgadmin-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgadmin-hackers

[pgadmin-hackers] SVN Commit by dpage: r7363 - trunk/pgadmin3/pgadmin/frm

Author: dpage

Date: 2008-06-08 19:37:47 +0100 (Sun, 08 Jun 2008)

New Revision: 7363

Revision summary: http://svn.pgadmin.org/cgi-bin/viewcvs.cgi/?rev=7363&view=rev

Log:
Fix (ok, implement a hack to prevent) a potential crash when checking if the data thread is running in the edit grid before displaying the sort/filter context menu.


Modified:
trunk/pgadmin3/pgadmin/frm/frmEditGrid.cpp

--
Sent via pgadmin-hackers mailing list (pgadmin-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgadmin-hackers

Re: [pgsql-es-ayuda] generatedKeys, jdbc: no logro obtener llaves generadas

redirecting to the list

2008/6/8 Roberto Rodríguez Pino <rodpin@gmail.com>:
>
> On Sun, 08-06-2008 at 12:57 -0500, Jaime Casanova wrote:
>
>> Hmm... you're right... although I'm not sure you noticed that the
>> function I declared is not plpgsql but sql; look at the LANGUAGE
>> clause...
>>
>> SQL functions don't need a RETURN; they return the value of the last
>> SELECT. I thought it would do the same with the RETURNING clause of an
>> insert/update/delete, but it doesn't... you could write the function
>> like this
>>
>> CREATE OR REPLACE function ingresarVoucherEntrada( _idEmpleado "int4",
>> _idProveedor "int4", _fechaEmision"date", _documentoAsociado"int4")
>> returns int as $$
>>
>> insert into voucherEntrada (idEmpleado, idProveedor, fechaEmision,
>> docuementoasociado)
>> values (_idEmpleado, _idProveedor, _fechaEmision,_documentoAsociado);
>>
>> select currval('nombre de la secuencia que usa el serial')::int;
>>
>> $$ LANGUAGE 'sql' VOLATILE;
>>
>>
> I have tried the following:
> CREATE OR REPLACE function ingresarVoucherEntrada( _idEmpleado "int4",
> _idProveedor "int4", _fechaEmision"date", _documentoAsociado"int4")
> returns int as $$
> insert into voucherEntrada (idEmpleado, idProveedor,
> fechaEmision, docuementoasociado)
> values ($1, $2, $3,$4);
>
> select currval('idvoucherentrada')::int;
> $$ LANGUAGE 'sql' VOLATILE;
>
> and I get
> ERROR: relation «idvoucherentrada» does not exist
> CONTEXT: SQL function «ingresarvoucherentrada»
> when trying to create the function.
>

http://www.postgresql.org/docs/8.3/static/sql-createsequence.html
http://www.postgresql.org/docs/8.3/static/datatype-numeric.html#DATATYPE-SERIAL

Sequences are objects that generate sequential numbers; they are what
the serial type uses to generate the next number.

In the currval function you must use the name of the sequence, not the
name of the column... If you haven't renamed the sequence (and since you
didn't know what it was, you probably haven't), then the name of the
sequence backing that serial column should be
tablename_columnname_seq

--
Regards,
Jaime Casanova
PostgreSQL support and training
Guayaquil - Ecuador
Cel. (593) 87171157
--
TIP 6: Have you searched the archives of our mailing list?

http://archives.postgresql.org/pgsql-es-ayuda

Re: [pgadmin-support] pgadmin 1.8.4 Upgrade

On Sat, Jun 7, 2008 at 7:02 PM, D Galen <degalen@comcast.net> wrote:
> I currently have ver. 1.8.2, which was installed (Windows 2000 Prof) from
> the postgres package. I downloaded the 1.8.4 upgrade. When I run the
> upgrade.bat file, the MSI install screen comes up but then exits without
> installing. I don't see pgAdmin in Add/Remove Programs (because it was
> installed from the postgres pkg?). Do I need to move the download to a
> particular location?

You cannot upgrade the copy of pgAdmin bundled with PostgreSQL in that
way (it can only be upgraded using the updated PostgreSQL installer).
Just run a regular installation of pgAdmin - it will install in
parallel and work just fine. Better yet, wait a few days for
PostgreSQL 8.3.3 which will come with pgAdmin 1.8.4 and upgrade the
PostgreSQL package.

--
Dave Page
EnterpriseDB UK: http://www.enterprisedb.com

--
Sent via pgadmin-support mailing list (pgadmin-support@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgadmin-support

[HACKERS] Proposal - improve eqsel estimates by including histogram bucket numdistinct statistics

Currently eqsel assumes that, except for the values stored as mcv's,
the number of times that a value appears in a table column is
independent of its value. Unfortunately, this often yields
inappropriate selectivity estimates and frequently leads to
inappropriate plans.

As an example, consider an insurance company that keeps a record of
patient heights. Assume there are 1000000 patient heights in this
column, and they are distributed normally with a mean of 1.7526 and a
standard deviation of 0.0762. Furthermore, assume that the heights are
only measured to the nearest centimeter. Then, we'd expect there to be
about 73 distinct heights, with a SD of 1.5.

Ignoring the effects of MCV's, the planner expects
SELECT height FROM heights WHERE height = 1.75;
to yield roughly 13000 results. However, given that we know the
underlying distribution, we would expect to see ~52000 results.

Similarly, the planner expects to see 13000 results from
SELECT 1.75 FROM heights WHERE height = 2.05;
while we expect to see ~2.7.

Obviously this example is not totally convincing: if I were to post
this to pg-general looking for advice I'm sure that everyone would
tell me to just increase the size of my mcv stats. However, in cases
where the number of distinct values is higher, this isn't always
feasible. Also, why store a list of 50 values and their frequencies
when 10 extra would provide the same plans without bloating
pg_statistic?

To combat this problem, I have two different proposals.

Idea 1: Keep an array of stadistinct that correspond to each bucket size.

In the example above, ( again ignoring mcv's ) the quantile data is

    0%   10%   20%   30%   40%   50%   60%   70%   80%   90%  100%
  1.38  1.66  1.69  1.71  1.73  1.75  1.77  1.79  1.82  1.85  2.12

with numdistinct values of ( respectively )

29 2 2 2 2 2 2 3 3 25

For the two above examples, this new approach would yield selectivity
estimates of

(1000000/10)/2 = 50000 ( vs an actual ED of ~52000 )
and
(1000000/10)/25 = 4000 ( vs an actual ED of ~2.7 )

Furthermore, this is done without mcvs. Since mcv's would make the
histogram more sensitive to the edges, the estimates with mcv's should
be correspondingly better.


There are two potential problems that I see with this approach:

1) It assumes that = is equivalent to <= and >=. This is certainly
true for real numbers, but is it true for every equality relation that
eqsel predicts for?

2) It bloats the stats table.

Idea 2: Keep a correlation statistic between ndistinct and bucket size

This addresses problem #2.

In lieu of keeping an actual list of ndistinct per histogram bucket,
we store the linear scaling coefficient between histogram bucket width
and ndistinct/(avg ndistinct). To visualize this, it is easiest to
consider plotting the bucket width versus ndistinct. The scaling
coefficient is the line that passes through the origin and
minimizes the square of the difference between its estimate for
ndistinct and the actual value.

When I apply this method to the above data I find a coefficient of
13.63 for an average ndist of 72/10. This provides selectivity
estimates, for the above two examples, of
(1000000/10)/( 13.63*7.2*(1.77 - 1.75) ) = 50950 ( vs an actual ED of ~52000 )
and
(1000000/10)/( 13.63*7.2*(2.12 - 1.85) ) = 3774 ( vs an actual ED of ~2.7 )

Although this yields better results than idea 1 for this particular
example, it will be much more sensitive to weird distributions.
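
To make the arithmetic above easy to reproduce, here is a minimal,
self-contained sketch of the idea-2 estimator. It assumes the scaling
coefficient, the average ndistinct, and the histogram bucket bounds have
already been computed; nothing here is existing planner code:

#include <stdio.h>

/* Idea-2 estimate of rows matching "col = const" when const falls into a
 * histogram bucket of the given width. */
static double
eqsel_idea2_estimate(double reltuples, int nbuckets,
                     double coef, double avg_ndistinct,
                     double bucket_width)
{
    double rows_per_bucket = reltuples / nbuckets;

    return rows_per_bucket / (coef * avg_ndistinct * bucket_width);
}

int
main(void)
{
    /* Numbers from the height example above. */
    printf("%.0f\n", eqsel_idea2_estimate(1000000, 10, 13.63, 7.2,
                                          1.77 - 1.75));   /* ~50950 */
    printf("%.0f\n", eqsel_idea2_estimate(1000000, 10, 13.63, 7.2,
                                          2.12 - 1.85));   /* ~3774 */
    return 0;
}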

Obviously there are some special cases to consider: we wouldn't want
the stats to be skewed such that they provide really bad plans.
However, with some carefully designed caps I believe that we could
ensure that the estimates are at least as good as they are now. In
fact, I'm not certain that an R^2 penalty is the correct loss
function. Ideally, we want to minimize the extra time that the db
spends by choosing an incorrect plan. Maybe slight overestimations are
better than slight underestimations? Maybe the cost of the occasional
(really) bad plan is less than the cost of a bunch of kinda bad plans?

Finally, we aren't limited to just one coefficient. We could also
store multiple coefficents to improve our estimates, and provide a
compromise between ideas 1 and 2.

Food for future thought...

In addition to the previous benefits, I think that this method has the
potential to make the process by which MCVs are chosen (or not chosen)
smarter. Currently the planner chooses a value to be an mcv candidate if
its frequency is greater than 1.25 * the average frequency. Assuming
that this improved selectivity estimate is implemented, maybe a better
way would be to include a value as an mcv if it is a) above a certain
threshold and b) the histogram selectivity estimator does a poor
job.

What are people's thoughts on idea 1 vs idea 2?

Am I missing any relevant details about the planner's operation?

Do people think that the improved estimates would be worth the
additional overhead?

-Nathan

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [GENERAL] temporarily deactivate an index

Jaime Casanova escribió:

> if i mark the index not ready (using pg_index.indisvalid or
> pg_index.indisready depending on version), will the index be updated
> if in another transaction i make an insert?

Yes (in fact that's the whole point of having an index marked "not
valid/ready").

--
Alvaro Herrera

http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [pgadmin-hackers] Missing libraries and path

On Sat, Jun 7, 2008 at 5:23 PM, Mickael Deloison <mdeloison@gmail.com> wrote:
> Hi pgAdmin hackers,
>
> I downloaded a fresh copy of pgAdmin code from the SVN repository
> today. In Visual C++ Express 2005, I had to do a few things to make
> pgAdmin compile well.
>
> 1) Libraries to add:
> advapi32.lib
> user32.lib
> gdi32.lib
> comdlg32.lib
> shell32.lib
> ole32.lib
> oleaut32.lib

That sounds like you didn't fully configure VC++ to work with the platform SDK.

> 2) Link editor > General > Supplementary path:
> $(PGDIR)/lib/ms
> My PGDIR is D:\Programs\PostgreSQL\8.1
>
> Maybe pgAdmin3.vcproj needs to be updated?

No, your copy of PostgreSQL :-). The path is correct for 8.3/8.4

--
Dave Page
EnterpriseDB UK: http://www.enterprisedb.com

--
Sent via pgadmin-hackers mailing list (pgadmin-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgadmin-hackers

Re: [pgsql-es-ayuda] Comportamiento work_mem (windows)

Raúl Andrés Duque Murillo wrote:

> 7. I run the query again, and the use of the ~1.5MB temporary files has
> indeed gone away, but the 35MB one is still being used. We're making
> progress!!!
> 8. I raise work_mem to 50MB.
> 9. I run the query MULTIPLE TIMES and the 35MB temporary file is STILL
> being used, but the time grew by 2s!!!
> 10. I know the 35MB is used for the ORDER BY, because if I remove it that
> temporary file is no longer used.

I'm not entirely sure about this part of the code, but it occurs to me
that perhaps what happens is that the estimate says much more memory
will be needed to do the complete sort, and that's why it decides to use
the disk from the start. Check in the EXPLAIN ANALYZE whether the total
tuple estimate is very far off from the real number of tuples produced
by that sort. Also, try setting a much larger work_mem (100 or 500 MB)
to see whether at some point it decides not to use the disk.

> 11. I go back to work_mem = 5MB and again get times about 2s lower.

This part is not at all clear to me. I suppose there is a difference in
the plan, but you didn't show the plans.

--
Alvaro Herrera

http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.
--
TIP 10: don't use HTML in your question; whoever answers surely won't be able to read it

[COMMITTERS] pgsnap - pgsnap: Add a report to get index whose size is greater than

Log Message:
-----------
Add a report to get index whose size is greater than the associated
table.

Modified Files:
--------------
pgsnap/pgsnap:
ChangeLog (r1.55 -> r1.56)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgsnap/pgsnap/pgsnap/ChangeLog.diff?r1=1.55&r2=1.56)
TODO (r1.24 -> r1.25)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgsnap/pgsnap/pgsnap/TODO.diff?r1=1.24&r2=1.25)
pgsnap/pgsnap/lib:
navigate.php (r1.20 -> r1.21)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgsnap/pgsnap/pgsnap/lib/navigate.php.diff?r1=1.20&r2=1.21)
reports.php (r1.1 -> r1.2)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgsnap/pgsnap/pgsnap/lib/reports.php.diff?r1=1.1&r2=1.2)

Added Files:
-----------
pgsnap/pgsnap/lib:
indexesbiggerthantables.php (r1.1)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgsnap/pgsnap/pgsnap/lib/indexesbiggerthantables.php?rev=1.1&content-type=text/x-cvsweb-markup)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

[COMMITTERS] pgsnap - pgsnap: Bugfix: typo in a variable name.

Log Message:
-----------
Bugfix: typo in a variable name.

Modified Files:
--------------
pgsnap/pgsnap/lib:
activities.php (r1.7 -> r1.8)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgsnap/pgsnap/pgsnap/lib/activities.php.diff?r1=1.7&r2=1.8)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

Re: [GENERAL] temporarily deactivate an index

On Sun, Jun 8, 2008 at 11:45 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Viktor Rosenfeld <rosenfel@informatik.hu-berlin.de> writes:
>> That works, but I'm still looking for another way to deactivate the
>> index. The reason being, that my query load is randomly generated by
>> a Java program and I don't want to go and change the SQL compiler.
>
> Well, you're going to have to change *something* on the client side,
> assuming you want this behavioral change to affect only some queries.
>
> Otherwise, in versions that have CREATE INDEX CONCURRENTLY you could
> mark the index not ready to use yet (see pg_index.indisvalid or
> pg_index.indisready depending on version).
>

if i mark the index not ready (using pg_index.indisvalid or
pg_index.indisready depending on version), will the index be updated
if in another transaction i make an insert?

--
regards,
Jaime Casanova
PostgreSQL support and training
Guayaquil - Ecuador
Cel. (593) 87171157

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

[COMMITTERS] pgsnap - pgsnap: Add a --all option to get a report for each database

Log Message:
-----------
Add a --all option to get a report for each database available on the
selected PostgreSQL server.

Modified Files:
--------------
pgsnap/pgsnap:
ChangeLog (r1.54 -> r1.55)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgsnap/pgsnap/pgsnap/ChangeLog.diff?r1=1.54&r2=1.55)
TODO (r1.23 -> r1.24)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgsnap/pgsnap/pgsnap/TODO.diff?r1=1.23&r2=1.24)
pgsnap.php (r1.28 -> r1.29)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgsnap/pgsnap/pgsnap/pgsnap.php.diff?r1=1.28&r2=1.29)
pgsnap/pgsnap/lib:
getopt.php (r1.16 -> r1.17)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgsnap/pgsnap/pgsnap/lib/getopt.php.diff?r1=1.16&r2=1.17)
pgsnap/pgsnap/man:
pgsnap.1 (r1.7 -> r1.8)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgsnap/pgsnap/pgsnap/man/pgsnap.1.diff?r1=1.7&r2=1.8)

Added Files:
-----------
pgsnap/pgsnap/lib:
reports.php (r1.1)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgsnap/pgsnap/pgsnap/lib/reports.php?rev=1.1&content-type=text/x-cvsweb-markup)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

Re: [GENERAL] libpq.so.4

Oliver Kohll <oliver@gtwm.co.uk> writes:
> What I've thought of trying so far is
> 1) creating a symlink called libpq.so.4 towards libpq.so.5 - slightly
> dangerous due to possible API changes?

Worth trying. According to the CVS logs
http://archives.postgresql.org/pgsql-committers/2006-04/msg00341.php
the ABI break from .4 to .5 was simply to stop exporting symbols that
weren't officially part of the API. So a symlink would work for
applications that played by the rules, and if you have any that
didn't the failure will be pretty obvious.

> 2) extracting a copy of libpq.so.4 from the previous RPM release of
> postgres and manually uploading it

That would work too.

> 3) building a custom compat package - I don't know how to do this
> though.

If you got the 8.3 package from someplace they should have an 8.1
compat package too.

regards, tom lane

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

[COMMITTERS] pgsnap - pgsnap: "Cache hit ratio" report is not interesting if

Log Message:
-----------
"Cache hit ratio" report is not interesting if stats_block_level (or
track_counts in 8.3) is disabled.

Modified Files:
--------------
pgsnap/pgsnap:
ChangeLog (r1.53 -> r1.54)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgsnap/pgsnap/pgsnap/ChangeLog.diff?r1=1.53&r2=1.54)
TODO (r1.22 -> r1.23)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgsnap/pgsnap/pgsnap/TODO.diff?r1=1.22&r2=1.23)
pgsnap.php (r1.27 -> r1.28)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgsnap/pgsnap/pgsnap/pgsnap.php.diff?r1=1.27&r2=1.28)
pgsnap/pgsnap/template:
header.template.html (r1.6 -> r1.7)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgsnap/pgsnap/pgsnap/template/header.template.html.diff?r1=1.6&r2=1.7)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

Re: [GENERAL] temporarily deactivate an index

Viktor Rosenfeld <rosenfel@informatik.hu-berlin.de> writes:
> That works, but I'm still looking for another way to deactivate the
> index. The reason being, that my query load is randomly generated by
> a Java program and I don't want to go and change the SQL compiler.

Well, you're going to have to change *something* on the client side,
assuming you want this behavioral change to affect only some queries.

Otherwise, in versions that have CREATE INDEX CONCURRENTLY you could
mark the index not ready to use yet (see pg_index.indisvalid or
pg_index.indisready depending on version).

Another line of attack, which I think only works in 8.3 and up,
is to make a planner plugin that disables specific indexes from
being considered. In fact I think I'd made a toy one of those
last year [ rummages... ] Yeah, here it is. Code attached.
No documentation, but basically you'd build it using pgxs and
then do
LOAD '$libdir/planignoreindex';
SET ignore_index = name-of-index;
Again this would be difficult to do without any client changes.

regards, tom lane

Re: [GENERAL] PL/pgSQL graph enumeration function hangs

Charles F. Munat wrote:
> Thanks, but the join clause is there, it's just buried in the subqueries.
>
> If there is a problem, it is probably that the loop never ends.
>
> Or it could be that the answer is exponential, and I just have too many
> rows in the source table and too deep a graph.
>
> I figured out how to do it in the application with one call to the
> database and a simple recursive method in a class, though, so I'm not
> going to use a stored function in the DB.

If you have figured it out this way you can even use one of the
PL/languages to implement it within the database :-)

Cheers
Tino

Re: [PATCHES] [HACKERS] TRUNCATE TABLE with IDENTITY

Alvaro Herrera <alvherre@commandprompt.com> writes:
> Tom Lane wrote:
>> 2. I had first dismissed Neil's idea of transactional sequence updates
>> as impossible, but on second look it could be done. Suppose RESTART
>> IDENTITY does this for each sequence;
>>
>> * obtain AccessExclusiveLock;
>> * assign a new relfilenode;
>> * insert a sequence row with all parameters copied except
>> last_value copies start_value;
>> * hold AccessExclusiveLock till commit.

> Hmm, this kills the idea of moving sequence data to a single
> non-transactional catalog :-(

Well, there are a number of holes in our ideas of how to do that anyway.
But offhand I don't see why we couldn't distinguish regular heap_update
from update_in_place on single rows within a catalog.
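
To make the user-visible goal concrete, a hypothetical sketch of the
behaviour the proposal is after (made-up table name; this is not what
released versions do today):

BEGIN;
TRUNCATE TABLE t RESTART IDENTITY;  -- serial columns of t restart at their start values
ROLLBACK;                           -- with transactional sequence updates, the restart rolls back too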

regards, tom lane

--
Sent via pgsql-patches mailing list (pgsql-patches@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-patches

Re: [GENERAL] temporarily deactivate an index

On Sun, Jun 8, 2008 at 7:55 AM, Jaime Casanova <systemguards@gmail.com> wrote:
> On Sun, Jun 8, 2008 at 1:34 AM, Scott Marlowe <scott.marlowe@gmail.com> wrote:
>> On Sat, Jun 7, 2008 at 5:16 PM, Viktor Rosenfeld
>> <rosenfel@informatik.hu-berlin.de> wrote:
>>>>
>>>> Try this:
>>>>
>>>> begin;
>>>> drop indexname;
>>>> explain analyze select ...;
>>>> rollback;
>>>
>>> That works, but I'm still looking for another way to deactivate the index.
>>> The reason being, that my query load is randomly generated by a Java
>>> program and I don't want to go and change the SQL compiler.
>>
>> Sorry, I'm out of ideas. I mean, you can turn off all indexes with
>> set enable_indexscan=off but there's no other way to do it cheaply and
>> in such a fine grained way.
>>
>
> with "enable_indexscan=off" you're not turning indexes off but putting
> a high cost in using them... the effect, most of the time, is that
> indexes will not be used but you can't be sure...
>
> the better solution was the first one: dropping the index inside a
> transaction, execute the explain analyze of the query and rollback the
> transaction

True. Also, there's been some talk lately about how to add query
planner controls to PostgreSQL. This being an open source project, I
think it's pretty reasonable to put the onus on the user community to
take a first whack at that kind of thing before someone spends a lot
of time implementing it in full in the core.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

[COMMITTERS] pgsnap - pgsnap: Add Stat IO reports.

Log Message:
-----------
Add Stat IO reports.

Modified Files:
--------------
pgsnap/pgsnap:
ChangeLog (r1.52 -> r1.53)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgsnap/pgsnap/pgsnap/ChangeLog.diff?r1=1.52&r2=1.53)
TODO (r1.21 -> r1.22)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgsnap/pgsnap/pgsnap/TODO.diff?r1=1.21&r2=1.22)
pgsnap.php (r1.26 -> r1.27)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgsnap/pgsnap/pgsnap/pgsnap.php.diff?r1=1.26&r2=1.27)
pgsnap/pgsnap/lib:
navigate.php (r1.19 -> r1.20)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgsnap/pgsnap/pgsnap/lib/navigate.php.diff?r1=1.19&r2=1.20)

Added Files:
-----------
pgsnap/pgsnap/lib:
statio_indexes.php (r1.1)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgsnap/pgsnap/pgsnap/lib/statio_indexes.php?rev=1.1&content-type=text/x-cvsweb-markup)
statio_sequences.php (r1.1)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgsnap/pgsnap/pgsnap/lib/statio_sequences.php?rev=1.1&content-type=text/x-cvsweb-markup)
statio_tables.php (r1.1)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgsnap/pgsnap/pgsnap/lib/statio_tables.php?rev=1.1&content-type=text/x-cvsweb-markup)

Removed Files:
-------------
pgsnap/pgsnap/lib:
stat_sequences.php
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgsnap/pgsnap/pgsnap/lib/stat_sequences.php)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

[COMMITTERS] pgsnap - pgsnap: Add "Relations Bloat" Report.

Log Message:
-----------
Add "Relations Bloat" Report.

Modified Files:
--------------
pgsnap/pgsnap:
ChangeLog (r1.51 -> r1.52)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgsnap/pgsnap/pgsnap/ChangeLog.diff?r1=1.51&r2=1.52)
pgsnap.php (r1.25 -> r1.26)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgsnap/pgsnap/pgsnap/pgsnap.php.diff?r1=1.25&r2=1.26)
pgsnap/pgsnap/lib:
navigate.php (r1.18 -> r1.19)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgsnap/pgsnap/pgsnap/lib/navigate.php.diff?r1=1.18&r2=1.19)

Added Files:
-----------
pgsnap/pgsnap/lib:
relationsbloat.php (r1.1)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgsnap/pgsnap/pgsnap/lib/relationsbloat.php?rev=1.1&content-type=text/x-cvsweb-markup)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

[BUGS] BUG #4230: No mapping between account names and security IDs was done

The following bug has been logged online:

Bug reference: 4230
Logged by: Gurgen
Email address: gerble828@yahoo.com
PostgreSQL version: 8.3
Operating system: Vista
Description: No mapping between account names and security IDs was
done
Details:

No mapping between account names and security IDs was done

That is the message I get when I try to set up an account and install
PostgreSQL.

--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs

[COMMITTERS] pgsnap - pgsnap: Add "Non Idle Processes" Report.

Log Message:
-----------
Add "Non Idle Processes" Report.

Modified Files:
--------------
pgsnap/pgsnap:
ChangeLog (r1.50 -> r1.51)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgsnap/pgsnap/pgsnap/ChangeLog.diff?r1=1.50&r2=1.51)
pgsnap.php (r1.24 -> r1.25)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgsnap/pgsnap/pgsnap/pgsnap.php.diff?r1=1.24&r2=1.25)
pgsnap/pgsnap/lib:
navigate.php (r1.17 -> r1.18)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgsnap/pgsnap/pgsnap/lib/navigate.php.diff?r1=1.17&r2=1.18)

Added Files:
-----------
pgsnap/pgsnap/lib:
nonidleprocesses.php (r1.1)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgsnap/pgsnap/pgsnap/lib/nonidleprocesses.php?rev=1.1&content-type=text/x-cvsweb-markup)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

Re: [pgsql-es-ayuda] Procedure to update sequences

On June 8, 2008 at 1:41, Raúl Andrés Duque Murillo
<ra_duque@yahoo.com.mx> wrote:
> Kind regards.
>
> I'm thinking of writing a procedure that updates ALL the sequences to the
> max(id_table) of the corresponding tables (to save me from running setval
> for each sequence by hand). Naturally, this procedure would only

but isn't that already supposed to be the case, unless you assign an
arbitrary value to the column instead of letting it take the default ???

> be used when initializing the DB for a new installation, since some tables
> ship with some "hard-coded" data, and only while out of production. Since I
> think this is a need that other colleagues may have had in the past, maybe
> someone has already written such a procedure and would be willing to share
> it.
>
> I appreciate your help.
>
> Regards,
>
> RAUL DUQUE
> Bogotá, Colombia
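
For reference, the kind of per-sequence call being described might look like
this minimal sketch (hypothetical table, column, and sequence names; assumes
the table is not empty):

SELECT setval('mytable_id_seq', (SELECT max(id) FROM mytable));
-- one such call per serial column; pg_get_serial_sequence('mytable', 'id')
-- can be used to look up the sequence name, and the statements could be
-- generated from the catalogs instead of being written by hand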
--
TIP 1: to subscribe and unsubscribe, visit http://archives.postgresql.org/pgsql-es-ayuda

Re: [GENERAL] temporarily deactivate an index

On Sun, Jun 8, 2008 at 1:34 AM, Scott Marlowe <scott.marlowe@gmail.com> wrote:
> On Sat, Jun 7, 2008 at 5:16 PM, Viktor Rosenfeld
> <rosenfel@informatik.hu-berlin.de> wrote:
>>>
>>> Try this:
>>>
>>> begin;
>>> drop indexname;
>>> explain analyze select ...;
>>> rollback;
>>
>> That works, but I'm still looking for another way to deactivate the index.
>> The reason being, that my query load is randomly generated by a Java
>> program and I don't want to go and change the SQL compiler.
>
> Sorry, I'm out of ideas. I mean, you can turn off all indexes with
> set enable_indexscan=off but there's no other way to do it cheaply and
> in such a fine grained way.
>

with "enable_indexscan=off" you're not turning indexes off but putting
a high cost in using them... the effect, most of the time, is that
indexes will not be used but you can't be sure...

the better solution was the first one: dropping the index inside a
transaction, execute the explain analyze of the query and rollback the
transaction
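
spelled out, that's a minimal sketch like this (hypothetical index name;
note that DROP INDEX takes an exclusive lock on the table until the
rollback, so this is only something to do on a test system):

BEGIN;
DROP INDEX some_index;        -- hypothetical index name
EXPLAIN ANALYZE SELECT ...;   -- the query under test
ROLLBACK;                     -- the index comes back untouched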

--
regards,
Jaime Casanova
Soporte y capacitación de PostgreSQL
Guayaquil - Ecuador
Cel. (593) 87171157

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

[COMMITTERS] pgsnap - pgsnap: * Display database size with a graph.

Log Message:
-----------
* Display database size with a graph.
* Display tablespace size with a graph.
* Display table size with a graph.
It relies on Open Flash Chart PHP library.

Modified Files:
--------------
pgsnap/pgsnap:
ChangeLog (r1.49 -> r1.50)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgsnap/pgsnap/pgsnap/ChangeLog.diff?r1=1.49&r2=1.50)
INSTALL (r1.2 -> r1.3)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgsnap/pgsnap/pgsnap/INSTALL.diff?r1=1.2&r2=1.3)
TODO (r1.20 -> r1.21)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgsnap/pgsnap/pgsnap/TODO.diff?r1=1.20&r2=1.21)
pgsnap.php (r1.23 -> r1.24)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgsnap/pgsnap/pgsnap/pgsnap.php.diff?r1=1.23&r2=1.24)
pgsnap/pgsnap/lib:
getmodules.php (r1.5 -> r1.6)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgsnap/pgsnap/pgsnap/lib/getmodules.php.diff?r1=1.5&r2=1.6)
navigate.php (r1.16 -> r1.17)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgsnap/pgsnap/pgsnap/lib/navigate.php.diff?r1=1.16&r2=1.17)

Added Files:
-----------
pgsnap/pgsnap/lib:
graph_dbsize.php (r1.1)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgsnap/pgsnap/pgsnap/lib/graph_dbsize.php?rev=1.1&content-type=text/x-cvsweb-markup)
graph_tablesize.php (r1.1)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgsnap/pgsnap/pgsnap/lib/graph_tablesize.php?rev=1.1&content-type=text/x-cvsweb-markup)
graph_tblspcsize.php (r1.1)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgsnap/pgsnap/pgsnap/lib/graph_tblspcsize.php?rev=1.1&content-type=text/x-cvsweb-markup)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

Re: [pgsql-es-ayuda] generatedKeys, jdbc: I can't get the generated keys

2008/6/8 Roberto Rodríguez Pino <rodpin@gmail.com>:
> I'm writing a Java application and I access the database using JDBC, but I
> need to use the value of a key that was generated automatically (serial).
> In Java I'm using the getGeneratedKeys method (part of JDBC itself), which
> should give me the keys that were created by the call I make to a Postgres
> function.

the getGeneratedKeys() method is not implemented in the Postgres JDBC driver

[...]
>
> The function I have in Postgres is:
>
> CREATE OR REPLACE function ingresarVoucherEntrada( _idEmpleado "int4",
> _idProveedor "int4", _fechaEmision"date", _documentoAsociado"int4")
> returns void as $$
>
> BEGIN
> insert into voucherEntrada (idEmpleado, idProveedor, fechaEmision,
> docuementoasociado)
> values (_idEmpleado, _idProveedor, _fechaEmision,_documentoAsociado)
> ;
> END
> $$LANGUAGE 'plpgsql' VOLATILE;
>

one solution could be:
CREATE OR REPLACE function ingresarVoucherEntrada( _idEmpleado "int4",
_idProveedor "int4", _fechaEmision "date", _documentoAsociado "int4")
returns int as $$

-- in an SQL-language function the arguments are referenced as $1 .. $n
insert into voucherEntrada (idEmpleado, idProveedor, fechaEmision,
docuementoasociado)
values ($1, $2, $3, $4)
returning PonAquiElNombreDeTuCampoSerial ;

$$ LANGUAGE 'sql' VOLATILE;
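
as a hypothetical usage example (made-up argument values), the generated key
then comes back as the function's result, so the client can read it from the
result set of the SELECT instead of calling getGeneratedKeys():

SELECT ingresarVoucherEntrada(1, 2, '2008-06-08', 100);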

--
Regards,
Jaime Casanova
Soporte y capacitación de PostgreSQL
Guayaquil - Ecuador
Cel. (593) 87171157
--
TIP 6: Have you searched the archives of our mailing list?

http://archives.postgresql.org/pgsql-es-ayuda

[COMMITTERS] pgsnap - pgsnap: Display databases' name in red if blks_read>=blks_hit.

Log Message:
-----------
Display databases' name in red if blks_read>=blks_hit.

Modified Files:
--------------
pgsnap/pgsnap:
ChangeLog (r1.48 -> r1.49)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgsnap/pgsnap/pgsnap/ChangeLog.diff?r1=1.48&r2=1.49)
TODO (r1.19 -> r1.20)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgsnap/pgsnap/pgsnap/TODO.diff?r1=1.19&r2=1.20)
pgsnap/pgsnap/lib:
cachehitratio.php (r1.6 -> r1.7)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgsnap/pgsnap/pgsnap/lib/cachehitratio.php.diff?r1=1.6&r2=1.7)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

[GENERAL] libpq.so.4

Hello,

I have the same issue as this poster with libpq.so.4:

In short, I've upgraded from 8.1 to 8.3.1 on RHEL 4 (with some CentOS packages). I have apps with dependencies on libpq.so.4, but it is no longer available. 8.3.1 provides libpq.so.5 and the compat-libs provide libpq.so.3.

At the moment maintaining package management integrity is not a must - getting back to 100% package-controlled files can wait until the machine is replaced. Upgrading the dependent software is not an option.
With that in mind, any suggestions on options?

What I've thought of trying so far is
1) creating a symlink called libpq.so.4 towards libpq.so.5 - slightly dangerous due to possible API changes?
2) extracting a copy of libpq.so.4 from the previous RPM release of postgres and manually uploading it
3) building a custom compat package - I don't know how to do this though.

but I'm unaware of the pros and cons of these.

Any comments appreciated.

Regards
Oliver


oliver@gtwm.co.uk
 / 0845 456 1810 / 07814 828608
www.gtwm.co.uk - company


Re: [SQL] Need some magic with alternative rows

On 7 jun 2008, at 05.04, Andreas wrote:
> Is there a way to remove the default-rows if there exists a fk<>0
> row that has the same group_nr ?

Yes, use a correlated sub-query:

SELECT * FROM mytab m1
WHERE fk = 994010
   OR (class_nr = 40 AND fk = 0
       AND NOT EXISTS (SELECT 1 FROM mytab m2
                       WHERE m2.group_nr = m1.group_nr AND m2.fk <> 0));


Sincerely,

Niklas Johansson


--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql

[pgadmin-support] Mac OS Leopard + Russian + pgAdmin III 1.8.3 - won't start!

Good day Dave and the community!

Today I tried to launch the new version of pgAdmin III, 1.8.4, and hit the
same problem I have had with previous versions: it crashes at startup. This
is the output from the crash report; maybe it helps.

Exception Type: EXC_BAD_ACCESS (SIGBUS)
Exception Codes: KERN_PROTECTION_FAILURE at 0x0000000000000000
Crashed Thread: 0

Thread 0 Crashed:
0 libwx_base_carbonu-2.8.0.dylib 0x010ca3a9
wxStringBase::compare(wchar_t const*) const + 77
1 libwx_base_carbonu-2.8.0.dylib 0x010b3ccf
wxLocale::GetSystemLanguage() + 591
2 libwx_base_carbonu-2.8.0.dylib 0x010b7055 wxLocale::Init(int, int)
+ 523
3 org.postgresql.pgadmin 0x0001507b 0x1000 + 82043
4 org.postgresql.pgadmin 0x003b4e0f 0x1000 + 3882511
5 libwx_base_carbonu-2.8.0.dylib 0x010ade94 wxEntry(int&, wchar_t**)
+ 52
6 org.postgresql.pgadmin 0x00008218 0x1000 + 29208
7 org.postgresql.pgadmin 0x00007b8e 0x1000 + 27534
8 org.postgresql.pgadmin 0x00007ab5 0x1000 + 27317


I'm running Mac OS X 10.5.3 (9D34) with the default Russian locale.


Is it possible to get pgAdmin running?


Best regards,
Ilya Dyoshin

--
Sent via pgadmin-support mailing list (pgadmin-support@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgadmin-support

Re: [pgsql-www] Ask for Help on Improving the FreeBSD FAQ

> Most of our documentation building technology was actually inspired and/or
> copied from the FreeBSD project. So I think chances are low you will find
> anything here that you don't already have. The integration with the web site
> and so on are different, but other people have already commented on that.

I see. I would like to ask how many user comments you usually receive
(per week, per year, or per release), and whether it is hard to filter them
and maintain the documentation based on them. There is a big question on
our side regarding this ("How much does the interactivity cost?").


Thank you very much,
:g


--
Sent via pgsql-www mailing list (pgsql-www@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-www

[pgsql-es-ayuda] generatedKeys, jdbc: I can't get the generated keys

Hello everyone!
This is my first message to this list and, of course, it comes with a
question. I hope it is not off topic.
I'm writing a Java application and I access the database using JDBC, but I
need to use the value of a key that was generated automatically (serial).
In Java I'm using the getGeneratedKeys method (part of JDBC itself), which
should give me the keys that were created by the call I make to a Postgres
function.
An excerpt of the Java code is:

String stringConsulta = "select ingresarVoucherEntrada(?,?,?,?)";
CallableStatement consulta = conexion.prepareCall(stringConsulta);
java.sql.Date fechaSql = new Date(fechaemision.getTime());
consulta.setInt(1, idempleado);
consulta.setInt(2, idproveedor);
consulta.setDate(3, fechaSql);
consulta.setInt(4, documentoAsociado);
consulta.executeQuery();

llaveGenerada = consulta.getGeneratedKeys();
if (llaveGenerada.next()){
int key = llaveGenerada.getInt(1);
System.out.println("llaves: "+key);
}
else{
System.err.println("No tengo una llave!.");
}

And obviously it always falls into the else branch.

The function I have in Postgres is:

CREATE OR REPLACE function ingresarVoucherEntrada( _idEmpleado "int4",
_idProveedor "int4", _fechaEmision"date", _documentoAsociado"int4")
returns void as $$

BEGIN
insert into voucherEntrada (idEmpleado, idProveedor, fechaEmision,
docuementoasociado)
values (_idEmpleado, _idProveedor, _fechaEmision,_documentoAsociado)
;
END
$$LANGUAGE 'plpgsql' VOLATILE;

Am I doing something wrong with the getGeneratedKeys method? Is there
something wrong with the function I wrote in Postgres? Is there another
way to get the keys that were generated automatically?


Many thanks for any comments.
--
visit my weblog!
http://trasto.hopto.org/weblog
softwarelibre@diinf
http://softwarelibre.diinf.usach.cl

Re: [GENERAL] when to reindex?

On Sat, Jun 7, 2008 at 8:07 PM, Jeremy Harris <jgh@wizmail.org> wrote:
> Gregory Stark wrote:
>> REINDEX scans the table precisely once and sorts it.
>
> For the bloat, as opposed to corruption, case -
> what information is needed from the table that
> is not in the old index? Why would a sequential
> read of the old index alone (then some processing)
> not suffice?

In Postgres, an index does not hold liveness information for the rows it points to; that is, there may be a thousand row pointers in the index, but not all the rows they point to are known to be live. This is an implication of MVCC in Postgres.

So every index lookup has to look at the corresponding heap (aka table) row and decide if that row should be visible to the querying session.

Best regards,

--
gurjeet[.singh]@EnterpriseDB.com
singh.gurjeet@{ gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB http://www.enterprisedb.com

Mail sent from my BlackLaptop device