Monday, August 18, 2008

Re: [HACKERS] Improving non-joinable EXISTS subqueries

* Tom Lane (tgl@sss.pgh.pa.us) wrote:
> So ... I'm wondering if this actually touches anyone's hot-button,
> or if we should just file it in the overflowing pile of Things That
> Might Be Nice To Do Someday.

What bugs me the most about having IN() be faster than EXISTS() in
certain situations is that it ends up being counter-intuitive and not
really what you'd expect to happen. That being said, we can always tell
people that they can use IN() as a work-around for these situations. In
the long run, I think it's definitely worth it to spend a bit of extra
time in planning the query for this case. Not knowing what else is on
your plate for 8.4, I don't know where I'd rank this, but it wouldn't be
at the top.

Thanks,

Stephen

[pgsql-es-ayuda] start a serial at 1000

Hello,

I would like to know how to start a serial at 1000 instead of 1, in the table definition.

Thanks in advance for your help,
regards.
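
One way to do this (a sketch, not from the thread; the table and column names are made up) is to create the serial column as usual and then restart its sequence:

CREATE TABLE items (
    id   serial PRIMARY KEY,
    name text
);
ALTER SEQUENCE items_id_seq RESTART WITH 1000;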

[ADMIN] Problem with reinstall of PostgreSQL 8.3

Hi,
 
I installed PostgreSQL 8.3 on Windows and changed a few settings for SSL. I then uninstalled it by executing the .msi file again. My intention was to reinstall from scratch, but unfortunately during the reinstall I get an error:
 
'Invalid username specified: logon failure: unknown user name or bad password.'
 
I am providing a valid Windows user name and password.
 
How can I get past this issue? Please help.
 
Thanks.



[pgsql-fr-generale] [Reminder] Send in your talk proposals for PGDay!

Hello,

You have two weeks left to propose a talk!


====== Call for papers ======


Following the PGDay events ("PostgreSQL day") organized in Italy, Great
Britain and the United States, the French-speaking PostgreSQL users'
association (PostgreSQLFr), in partnership with Toulibre, is launching a
day dedicated to PostgreSQL.

The event will take place on October 4, 2008.

The talks will run from 9am to 6pm at the Soupetard cultural centre in
Toulouse.

PGDay.fr is the most important event of the year for the French-speaking
PostgreSQL community. A series of talks will cover the various uses of
the DBMS and its associated software.

If you are an IT manager, a developer or an advanced PostgreSQL user,
this meeting is a great opportunity to present your work and your ideas.

We are interested in talks and workshops on any topic related to
PostgreSQL, in particular:

* Replication with Slony (or others...)
* A study of PostgreSQL's total cost of ownership (TCO)
* Case study: migrating to PostgreSQL
* Securing data
* User testimonials
* What's coming in the future 8.4 release
* Log shipping
* Full-text search
* Analysis and optimization tools
* ...

Each talk must be given in French and last 45 minutes.

Send your talk proposals to: bureau@postgresqlfr.org
before **September 1, 2008**

This call for papers is also available at:
http://wiki.postgresql.fr/doku.php/pgday2008:appel_a_conferences

Some additional links:

* Official event site: http://www.pgday.fr
* The PostgreSQLFr association: http://www.postgresql.fr
* The Toulibre association: http://www.toulibre.org

--
damien clochard
http://dalibo.org | http://dalibo.com

--
Sent via pgsql-fr-generale mailing list (pgsql-fr-generale@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-fr-generale

Re: [DOCS] TODO build rule

David Fetter wrote:
> On Sat, Aug 16, 2008 at 03:39:59PM -0400, Bruce Momjian wrote:
> > Peter Eisentraut wrote:
> > > On Saturday 16 August 2008 03:22:09 Bruce Momjian wrote:
> > > > Peter Eisentraut wrote:
> > > > These two lines in pgsql/src/tools/TODO2html should be adding
> > > > "*"s which is then interpreted as "<em>":
> > >
> > > So there are two inconsistent sets of instructions for creating
> > > TODO.html, one in doc/src/FAQ/README and one in
> > > src/tools/TODO2html. Which one is being used?
> > >
> > > I want to convert the real one to a makefile, btw.
> >
> > Oh, I forgot that README talked about making TODO.html. I have
> > updated the README to suggest using pgsql/src/tools/TODO2html.
>
> I thought this was going onto the wiki so Bruce doesn't have to be the
> sole maintainer :)

We have discussed the idea and Magnus did a wiki version for testing. I
am not sure where we are in moving that forward. Anyone with CVS access
can modify it now, and you can send in patches for it too.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ If your life is a hard drive, Christ can be your backup. +

--
Sent via pgsql-docs mailing list (pgsql-docs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-docs

Re: [PERFORM] Slow query with a lot of data

On Mon, 18 Aug 2008, Moritz Onken wrote:
> I have indexes on result.domain, domain_categories.domain, result.user,
> domain_categories.category. Clustered result on user and domain_categories on
> domain.

> " -> Materialize (cost=2118752.28..2270064.64 rows=12104989 width=8)
> (actual time=46460.599..82336.116 rows=12123161 loops=1)"
> " -> Sort (cost=2118752.28..2149014.75 rows=12104989 width=8)
> (actual time=46460.592..59595.851 rows=12104989 loops=1)"
> " Sort Key: b.domain"
> " Sort Method: external sort Disk: 283992kB"
> " -> Seq Scan on domain_categories b
> (cost=0.00..198151.89 rows=12104989 width=8) (actual time=14.352..22572.869
> rows=12104989 loops=1)"

This is weird, given you say you have clustered domain_categories on
domain. Have you analysed? You should be able to run:

EXPLAIN SELECT * from domain_categories ORDER BY domain

and have it say "Index Scan" instead of "Seq Scan followed by disc sort".
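
A hedged follow-up, not from the original reply (the index name is an assumption): re-cluster and re-analyze so the planner sees the physical ordering on domain.

CLUSTER domain_categories USING domain_categories_domain_idx;  -- assumed index name
ANALYZE domain_categories;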

Matthew

--
Patron: "I am looking for a globe of the earth."
Librarian: "We have a table-top model over here."
Patron: "No, that's not good enough. Don't you have a life-size?"
Librarian: (pause) "Yes, but it's in use right now."

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

Re: [HACKERS] Compatibility types, type aliases, and distinct types

Peter Eisentraut <peter_e@gmx.net> writes:
> So while thinking about how to make this simpler I remembered the "distinct
> type" feature of SQL, which works quite similarly, namely the new type has
> the same structure as the old type, but is a separate entity. It looks like
> CREATE TYPE newtype AS oldtype;
> This feature by itself could be quite useful, and then we could simply add
> something like
> CREATE TYPE newtype AS oldtype WITH CASTS;

This seems like a great way to get lost in "ambiguous function" hell ...

regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[pgsql-de-allgemein] == Weekly PostgreSQL Newsletter - August 17, 2008 ==

The original article can be found at:
http://people.planetpostgresql.org/dfetter/index.php?/archives/187-PostgreSQL-Weekly-News-August-17-2008.html


== Weekly PostgreSQL Newsletter - August 17, 2008 ==

== PostgreSQL Jobs in August ==

http://archives.postgresql.org/pgsql-jobs/2008-08/threads.php

== PostgreSQL Local ==

The Prato Linux User Group will hold PostgreSQL talks in September.
The schedule, in Italian:
http://www.prato.linux.it/serate_a_tema_2008

PGCon Brazil 2008 will take place September 26-27 at Unicamp in
Campinas.
http://pgcon.postgresql.org.br/index.en.html

PGDay.fr will take place on October 4 in Toulouse. The call for papers
is open:
http://www.postgresqlfr.org/?q=node/1686
Registration here:
http://www.pgday.fr/doku.php/inscription


The PostgreSQL West Conference 2008 will take place October 10-12 at
Portland State University in Portland, Oregon.
http://www.postgresqlconference.org/
Talks can be submitted at:
http://www.postgresqlconference.org/west08/talk_submission/

Sponsor the European PGDay!
http://www.pgday.org/en/sponsors/campaign

The call for papers for the European PGDay has begun.
http://www.pgday.org/en/call4papers

PGDay.(IT|EU) 2008 will take place on October 17 and 18 in Prato.
http://www.pgday.org/it/

== PostgreSQL in the News ==

Planet PostgreSQL: http://www.planetpostgresql.org/

This weekly PostgreSQL newsletter was produced by David Fetter.

Send news and announcements by Sunday at 3pm Pacific time. Please send
English contributions to david@fetter.org, German ones to pwn@pgug.de
and Italian ones to pwn@itpug.org.

== Applied Patches ==

Heikki Linnakangas committed:

- Introduce the concept of relation forks. An smgr relation can now
consist of multiple forks, and each fork can be created and grown
separately. The bulk of this patch is about changing the smgr API
to include an extra ForkNumber argument in every smgr function.
Also, smgrscheduleunlink and smgrdounlink no longer implicitly call
smgrclose, because other forks might still exist after unlinking
one. The callers of those functions have been modified to call
smgrclose instead. This patch in itself doesn't have any
user-visible effect, but provides the infrastructure needed for
upcoming patches. The additional forks envisioned are a rewritten
FSM implementation that doesn't rely on a fixed-size shared memory
block, and a visibility map to allow skipping portions of a table in
VACUUM that have no dead tuples.

- Relation forks patch requires a catversion bump due to changes in
the format of some WAL records, and two-phase state files, which I
forgot.

- pg_buffercache needs to be taught about relation forks, as Greg
Stark pointed out.

- Fix pull_up_simple_union_all to copy all rtable entries from child
subquery to parent, not only those with RangeTblRefs. We need them
in ExecCheckRTPerms. Report by Brendan O'Shea. Back-patch to 8.2,
where pull_up_simple_union_all was introduced.

Bruce Momjian committed:

- Add to TODO: "Add 'hostgss' pg_hba.conf option to allow GSS
link-level encryption."

- Add URL for TODO: "Improve ability to modify views via ALTER TABLE."

- Fix version warning bug in recently applied adjustments to psql
startup. Gregory Stark

- Add new SQL training web site to FAQ:

- Update Russian FAQ. Viktor Vislobokov.

- In pgsql/src/interfaces/libpq/bcc32.mak, synchronize Borland libpq
makefile to match MSVC. Backpatch to 8.3.X.

- Add to TODO: "Prevent query cancel packets from being replayed by an
attacker, especially when using SSL."

- Add URL for TODO: "Improve ability to modify views via ALTER TABLE."

- Update instructions on generating TODO.html

Peter Eisentraut committed:

- Remove TODO item, "Allow XML to accept more liberal DOCTYPE
specifications." Everything works correctly, per today's email to
-general.

Alvaro Herrera committed:

- Have autovacuum consider processing TOAST tables separately from
their main tables. This requires vacuum() to accept processing a
toast table standalone, so there's a user-visible change in that
it's now possible (for a superuser) to execute "VACUUM
pg_toast.pg_toast_XXX".

Tom Lane committed:

- Implement SEMI and ANTI joins in the planner and executor.
(Semijoins replace the old JOIN_IN code, but antijoins are new
functionality.) Teach the planner to convert appropriate EXISTS and
NOT EXISTS subqueries into semi and anti joins respectively. Also,
LEFT JOINs with suitable upper-level IS NULL filters are recognized
as being anti joins. Unify the InClauseInfo and OuterJoinInfo
infrastructure into "SpecialJoinInfo". With that change, it becomes
possible to associate a SpecialJoinInfo with every join attempt,
which permits some cleanup of join selectivity estimation. That
needs to be taken much further than this patch does, but the next
step is to change the API for oprjoin selectivity functions, which
seems like material for a separate patch. So for the moment the
output size estimates for semi and especially anti joins are quite
bogus.

- Performance fix for new anti-join code in nodeMergejoin.c: after
finding a match in antijoin mode, we should advance to next outer
tuple not next inner. We know we don't want to return this outer
tuple, and there is no point in advancing over matching inner tuples
now, because we'd just have to do it again if the next outer tuple
has the same merge key. This makes a noticeable difference if there
are lots of duplicate keys in both inputs. Similarly, after finding
a match in semijoin mode, arrange to advance to the next outer tuple
after returning the current match; or immediately, if it fails the
extra quals. The rationale is the same. (This is a performance bug
in existing releases; perhaps worth back-patching? The planner
tries to avoid using mergejoin with lots of duplicates, so it may
not be a big issue in practice.) Nestloop and hash got this right to
start with, but I made some cosmetic adjustments there to make the
corresponding bits of logic look more similar.

- Clean up the loose ends in selectivity estimation left by my patch
for semi and anti joins. To do this, pass the SpecialJoinInfo
struct for the current join as an additional optional argument to
operator join selectivity estimation functions. This allows the
estimator to tell not only what kind of join is being formed, but
which variable is on which side of the join; a requirement long
recognized but not dealt with till now. This also leaves the door
open for future improvements in the estimators, such as accounting
for the null-insertion effects of lower outer joins. I didn't do
anything about that in the current patch but the information is in
principle deducible from what's passed. The patch also clarifies
the definition of join selectivity for semi/anti joins: it's the
fraction of the left input that has (at least one) match in the
right input. This allows getting rid of some very fuzzy thinking
that I had committed in the original 7.4-era IN-optimization patch.
There's probably room to estimate this better than the present patch
does, but at least we know what to estimate. Since I had to touch
CREATE OPERATOR anyway to allow a variant signature for join
estimator functions, I took the opportunity to add a couple of
additional checks that were missing, per my recent message to
-hackers:
* Check that estimator functions return float8; Require execute
* permission at the time of CREATE OPERATOR on the
operator's function as well as the estimator functions;
* Require ownership of any pre-existing operator that's modified by
the command. I also moved the lookup of the functions out of
OperatorCreate() and into operatorcmds.c, since that seemed more
consistent with most of the other catalog object creation processes,
eg CREATE TYPE.

- Fix a couple of places where psql might fail to report a suitable
error if PQexec returns NULL. These don't seem significant enough
to be worth back-patching, but they ought to get fixed ...

- In pgsql/src/bin/pg_dump/pg_backup_db.c, fix pg_dump/pg_restore's
ExecuteSqlCommand() to behave suitably if PQexec returns NULL
instead of a PGresult. The former coding would fail, which is OK,
but it neglected to give you the PQerrorMessage that might tell you
why. In the oldest branches, there was another problem: it'd
sometimes report PQerrorMessage from the wrong connection.

- Improve sublink pullup code to handle ANY/EXISTS sublinks that are
at top level of a JOIN/ON clause, not only at top level of WHERE.
(However, we can't do this in an outer join's ON clause, unless the
ANY/EXISTS refers only to the nullable side of the outer join, so
that it can effectively be pushed down into the nullable side.) Per
request from Kevin Grittner. In passing, fix a bug in the initial
implementation of EXISTS pullup: it would Assert if the EXIST's
WHERE clause used a join alias variable. Since we haven't yet
flattened join aliases when this transformation happens, it's
necessary to include join relids in the computed set of RHS relids.

- In pgsql/src/backend/optimizer/plan/subselect.c, remove prohibition
against SubLinks in the WHERE clause of an EXISTS subquery that
we're considering pulling up. I hadn't wanted to think through
whether that could work during the first pass at this stuff.
However, on closer inspection it seems to be safe enough.

- In pgsql/src/backend/optimizer/path/joinrels.c, add some defenses
against constant-FALSE outer join conditions. Since
eval_const_expressions will generally throw away anything that's
ANDed with constant FALSE, what we're left with given an example
like select * from tenk1 a where (unique1,0) in (select unique2,1
from tenk1 b); is a cartesian product computation, which is really
not acceptable. This is a regression in CVS HEAD compared to
previous releases, which were able to notice the impossible join
condition in this case --- though not in some related cases that are
also improved by this patch, such as select * from tenk1 a left join
tenk1 b on (a.unique1=b.unique2 and 0=1); Fix by skipping evaluation
of the appropriate side of the outer join in cases where it's
demonstrably unnecessary.

Magnus Hagander committed:

- Make the temporary directory for pgstat files configurable by the
GUC variable stats_temp_directory, instead of requiring the admin to
mount/symlink the pg_stat_tmp directory manually. For now the
config variable is PGC_POSTMASTER. Room for further improvment that
would allow it to be changed on-the-fly. Original idea by Euler
Taveira de Oliveira.

- In pgsql/src/tools/msvc/Solution.pm, probes.h is generated from
probes.d, not pg_trace.d.

== Rejected Patches (so far) ==

No one was disappointed this week :-)

== Submitted Patches ==

Zdenek Kotala sent in a patch to introduce a page layout footprint in
aid of his in-place upgrades work.

David Wheeler sent in another patch to clean up the citext contrib
module.

Zdenek Kotala sent in a WIP patch for his new page API in aid of his
in-place upgrades work.

ITAGAKI Takahiro sent in another revision of his "Copy storage
parameters" patch.

Gregory Stark sent in a patch to fix a bug in psql where old variables
were being used for the new database connection after \c.

ITAGAKI Takahiro sent in a patch to add duration option (-T) to
pgbench instead of number of transactions (-t). -t and -T are mutually
exclusive.

Jan Urbanski sent in two more revisions of his oprrest patch for text
search.

Dmitri Koterov sent in a patch to add three new functions to
contrib/intarray: int_array_append_aggregate(int[]), which quickly
merges arrays, _int_group_count_sort(int[], bool), a frequency-based
sort, and bidx(int[], int), a binary search in a sorted array.

Martin Pihlak sent in a patch that implements plan invalidation on
function DROP, REPLACE and ALTER.

Ryan Bradetich sent in a patch to implement unsigned integer types.

Magnus Hagander sent in a patch to do pg_hba.conf and postgresql.conf
parsing in the postmaster and matching in the backend. Per
discussion.

--
Andreas 'ads' Scherbaum
Deutsche PostgreSQL User Group: http://www.pgug.de
DPWN: http://ads.wars-nicht.de/blog/categories/18-PWN

--
Sent via pgsql-de-allgemein mailing list (pgsql-de-allgemein@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-de-allgemein

Re: [HACKERS] Postgres-R

Thanks for the information.
For step 5 (starting the ensemble daemon):
I set the multicast address on both nodes (Node 1 & Node 2, eth0: 224.0.0.9/4) before starting ensemble, then started the server application mtalk on node 1 & node 2, and then the client application on node 1 & node 2. But the member count ('nmembers') shows as 1 in the output of the client program 'c_mtalk'. Seeing this, I'm assuming that the applications have not merged.
Could you please let me know how you proceeded with the setup of ensemble?

regards,
Niranjan

-----Original Message-----
From: ext leiyonghua [mailto:leiyonghua@alibaba-inc.com]
Sent: Monday, August 18, 2008 2:58 PM
To: K, Niranjan (NSN - IN/Bangalore); Markus Wanner; pgsql-hackers@postgresql.org
Subject: Re: Postgres-R

niranjan.k@nsn.com wrote:
> I wish to set up the Postgres-R environment, could you please let me know the steps for setting it up.
> Thanks.
>
>
>
Yeah, actually I have not been successful in setting this up yet, but let me give you some information.
1. download the postgresql snapshot source code from here:
http://www.postgresql.org/ftp/snapshot/dev/
(this is a daily tarball)

2. Get the corresponding patch for postgres-r from:
http://www.postgres-r.org/downloads/

3. apply the patch for snapshot source, and configure like this:

./configure --enable-replication
make && make install

4. install the GCS ensemble, according to the document:
http://www.cs.technion.ac.il/dsl/projects/Ensemble/doc.html

5. start the ensemble daemon and gossip if necessary (yes, make sure the two nodes can 'GCS' each other)

6. Assume that you have two nodes. Start up postgresql and create a database 'db', and create a table 'tb' for testing, which should have a primary key, on all nodes.

7. At the origin node, execute this command at the psql console:
alter database db start replication in group gcs; (this means the database 'db' is the origin and 'gcs' is the GCS group name)

8. At the subscriber node, execute the command:
alter database db accept replication from group gcs;


Hope information above would be helpful, and keep in touch.

leiyonghua

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[PERFORM] Slow query with a lot of data

Hi,

I run this query:

select max(a."user"), b.category, count(1) from result a,
domain_categories b where a."domain" = b."domain" group by b.category;

the table result contains all websites a user visited. And the table
domain_categories contains all categories a domain is in.
result has 20 million rows and domain_categories has about 12 million. There
are 500,000 different users.

I have indexes on result.domain, domain_categories.domain,
result.user, domain_categories.category. Clustered result on user and
domain_categories on domain.

explain analyze says (limited to one user with id 1337):

"HashAggregate (cost=2441577.16..2441614.72 rows=2504 width=8)
(actual time=94667.335..94671.508 rows=3361 loops=1)"
" -> Merge Join (cost=2119158.02..2334105.00 rows=14329622 width=8)
(actual time=63559.938..94621.557 rows=36308 loops=1)"
" Merge Cond: (a.domain = b.domain)"
" -> Sort (cost=395.52..405.49 rows=3985 width=8) (actual
time=0.189..0.211 rows=19 loops=1)"
" Sort Key: a.domain"
" Sort Method: quicksort Memory: 27kB"
" -> Index Scan using result_user_idx on result a
(cost=0.00..157.21 rows=3985 width=8) (actual time=0.027..0.108
rows=61 loops=1)"
" Index Cond: ("user" = 1337)"
" -> Materialize (cost=2118752.28..2270064.64 rows=12104989
width=8) (actual time=46460.599..82336.116 rows=12123161 loops=1)"
" -> Sort (cost=2118752.28..2149014.75 rows=12104989
width=8) (actual time=46460.592..59595.851 rows=12104989 loops=1)"
" Sort Key: b.domain"
" Sort Method: external sort Disk: 283992kB"
" -> Seq Scan on domain_categories b
(cost=0.00..198151.89 rows=12104989 width=8) (actual
time=14.352..22572.869 rows=12104989 loops=1)"
"Total runtime: 94817.058 ms"

This is running on a pretty small server with 1 GB of RAM and a slow
SATA hard disk. shared_buffers is 312MB, max_fsm_pages = 153600. Everything
else is commented out. PostgreSQL v8.3.3, operating system Ubuntu 8.04.

It would be great if someone could help improve this query. This is
for a research project at my university.

Thanks in advance,

Moritz


--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

Re: [PERFORM] Cross Join Problem

"Gauri Kanekar" <meetgaurikanekar@gmail.com> writes:
> Following is the Query :
> SELECT sum(id), sum(cd), sum(ad)
> FROM table1 a , table2 b cross join table3 c
> WHERE a.nkey = b.key
> AND a.dkey = c.key
> AND c.date = '2008-02-01'
> AND b.id = 999 ;


> We have fired this on our production system which is postgres 8.1.3, and got
> the following explain analyse of it

> Aggregate (cost=11045.52..11045.53 rows=1 width=24) (actual
> time=79.290..79.291 rows=1 loops=1)
> -> Nested Loop (cost=49.98..11043.42 rows=279 width=24) (actual
> time=1.729..50.498 rows=10473 loops=1)
> -> Nested Loop (cost=0.00..6.05 rows=1 width=8) (actual
> time=0.028..0.043 rows=1 loops=1)
> -> Index Scan using rnididx on table2 b (cost=0.00..3.02
> rows=1 width=4) (actual time=0.011..0.015 rows=1 loops=1)
> Index Cond: (id = 999)
> -> Index Scan using rddtidx on table3 c (cost=0.00..3.02
> rows=1 width=4) (actual time=0.010..0.016 rows=1 loops=1)
> Index Cond: (date = '2008-02-01 00:00:00'::timestamp
> without time zone)
> -> Bitmap Heap Scan on table1 a (cost=49.98..10954.93 rows=5496
> width=32) (actual time=1.694..19.006 rows=10473 loops=1)
> Recheck Cond: ((a.nkey = "outer"."key") AND (a.dkey =
> "outer"."key"))
> -> Bitmap Index Scan on rndateidx (cost=0.00..49.98
> rows=5496 width=0) (actual time=1.664..1.664 rows=10473 loops=1)
> Index Cond: ((a.nkey = "outer"."key") AND (a.dkey =
> "outer"."key"))
> Total runtime: 79.397 ms

No PG release since 7.3 would have voluntarily planned that query that
way. Maybe you were using join_collapse_limit = 1 to force the join
order?
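
A quick way to check that setting (a hedged aside, not part of the original reply):

SHOW join_collapse_limit;       -- the default is 8
SET join_collapse_limit = 8;    -- with 1, the planner keeps the textual join order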

regards, tom lane

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

[pgsql-es-ayuda] Create a postgres installer with a user and password already defined..

Good morning,

I have an application that uses PostgreSQL as its data server. What I
want is for the installation to automatically set up a user and password
that I define beforehand. Since PostgreSQL is free I thought this might
be allowed, because I would not be modifying anything. If that is
correct, how can I do it? I need your help, but in no way do I want to
infringe any copyright or anything like that.

Thanks everyone...
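
A minimal sketch of one way to script this after installation (my own assumption, not from the thread; the role, password and database names are placeholders):

psql -U postgres -c "CREATE ROLE app_user LOGIN PASSWORD 'app_password';"
psql -U postgres -c "CREATE DATABASE app_db OWNER app_user;"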

--
Edwin Salguero C.
70738223
http://www.kernel.com.bo
--
TIP 6: Have you searched our mailing list archives?
http://archives.postgresql.org/pgsql-es-ayuda

Re: [pgsql-es-ayuda] Postgres Vs. MySQL

Xavier Vidal Piera wrote:

> Likewise, I would like to know which replication project is the most stable
> so far and which reproduces the same scheme that MySQL allows: one master -
> N slaves.

Slony-I.

> I have seen that Slony can do it, but I have also been told that maintaining
> such a setup is torture.

It is fairly labor-intensive, but not quite "torture"...

--
Alvaro Herrera Developer, http://www.PostgreSQL.org/
"The only difference is that Saddam would kill you on private, where the
Americans will kill you in public" (Mohammad Saleh, 39, a building contractor)
--
TIP 6: Have you searched our mailing list archives?
http://archives.postgresql.org/pgsql-es-ayuda

[COMMITTERS] pgsql: Properly remove src\include\utils\probes.h when running

Log Message:
-----------
Properly remove src\include\utils\probes.h when running clean.bat.

Modified Files:
--------------
pgsql/src/tools/msvc:
clean.bat (r1.12 -> r1.13)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/tools/msvc/clean.bat?r1=1.12&r2=1.13)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

[PERFORM] Cross Join Problem

Hi,


Following is the Query :
SELECT sum(id), sum(cd), sum(ad)
       FROM table1 a , table2 b cross join table3 c
       WHERE a.nkey = b.key
             AND a.dkey = c.key
             AND c.date = '2008-02-01'
             AND b.id = 999 ;


We have fired this on our production system which is postgres 8.1.3, and got the following explain analyse of it

 Aggregate  (cost=11045.52..11045.53 rows=1 width=24) (actual time=79.290..79.291 rows=1 loops=1)
   ->  Nested Loop  (cost=49.98..11043.42 rows=279 width=24) (actual time=1.729..50.498 rows=10473 loops=1)
         ->  Nested Loop  (cost=0.00..6.05 rows=1 width=8) (actual time=0.028..0.043 rows=1 loops=1)
               ->  Index Scan using rnididx on table2 b  (cost=0.00..3.02 rows=1 width=4) (actual time=0.011..0.015 rows=1 loops=1)
                     Index Cond: (id = 999)
               ->  Index Scan using rddtidx on table3 c  (cost=0.00..3.02 rows=1 width=4) (actual time=0.010..0.016 rows=1 loops=1)
                     Index Cond: (date = '2008-02-01 00:00:00'::timestamp without time zone)
         ->  Bitmap Heap Scan on table1 a  (cost=49.98..10954.93 rows=5496 width=32) (actual time=1.694..19.006 rows=10473 loops=1)
               Recheck Cond: ((a.nkey = "outer"."key") AND (a.dkey = "outer"."key"))
               ->  Bitmap Index Scan on rndateidx  (cost=0.00..49.98 rows=5496 width=0) (actual time=1.664..1.664 rows=10473 loops=1)
                     Index Cond: ((a.nkey = "outer"."key") AND (a.dkey = "outer"."key"))
 Total runtime: 79.397 ms

Time: 80.752 ms



Same Query when we fire on postgres 8.3.3, following is the explain analyse
                                                                                    QUERY PLAN                              
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=1171996.35..1171996.36 rows=1 width=24) (actual time=6360.783..6360.785 rows=1 loops=1)
   ->  Nested Loop  (cost=0.00..1171994.28 rows=275 width=24) (actual time=3429.309..6330.424 rows=10473 loops=1)
         Join Filter: (a.nkey = b.key)
         ->  Index Scan using rnididx on table2 b  (cost=0.00..4.27 rows=1 width=4) (actual time=0.030..0.033 rows=1 loops=1)
               Index Cond: (id = 999)
         ->  Nested Loop  (cost=0.00..1169411.17 rows=206308 width=28) (actual time=0.098..4818.450 rows=879480 loops=1)
               ->  Index Scan using rddtidx on table1 c  (cost=0.00..4.27 rows=1 width=4) (actual time=0.031..0.034 rows=1 loops=1)
                     Index Cond: (date = '2008-02-01 00:00:00'::timestamp without time zone)
               ->  Index Scan using rdnetidx on table1 a  (cost=0.00..1156050.51 rows=1068511 width=32) (actual time=0.047..1732.229 rows=879480 loops=1)
                     Index Cond: (a.dkey = c.key)
 Total runtime: 6360.978 ms


The query that used to take only 80.752 ms on postgres 8.1.3 is now taking 6364.950 ms.

We have done VACUUM ANALYZE on all the tables.

Can anybody help out here? What may be wrong, and why does the query seem to take so long on postgres 8.3.3?

Is it an 8.3.3 problem, or is it a cross join problem on 8.3.3?

Thanx

--
Regards
Gauri

Re: [HACKERS] any psql static binary for iphone ?

On Mon, Aug 18, 2008 at 9:16 AM, Hans-Juergen Schoenig <postgres@cybertec.at> wrote:
>>> compilations stops at gram.c however :) the file is just too big to
>>> compile
>>> on 96MB of RAM :).
>>> first the screen turns to black and then it reboots.
>>> so far i have not seen how i can add a swap file to the iphone and i was
>>> too
>>> lazy to cross compile *g*.
>>> but until gram.c - no warning; no errors *g*.
>>>
>>
>> iirc you don't have to compile gram.c for psql?
>>
>
> no, not for psql ...
> i wanted to give pgbench a try.
> just plain curiosity.


you of all people should know what to do next :-)

http://osdir.com/ml/db.postgresql.advocacy/2004-03/msg00018.html

merlin

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] any psql static binary for iphone ?

>> postgres seems to compile nicely on the iphone.
>> compilations stops at gram.c however :) the file is just too big to compile
>> on 96MB of RAM :).
>> first the screen turns to black and then it reboots.
>> so far i have not seen how i can add a swap file to the iphone and i was too
>> lazy to cross compile *g*.
>> but until gram.c - no warning; no errors *g*.
>>
>
> iirc you don't have to compile gram.c for psql?
>
> merlin
>

no, not for psql ...
i wanted to give pgbench a try.
just plain curiosity.

hans

--
Cybertec Schönig & Schönig GmbH
PostgreSQL Solutions and Support
Gröhrmühlgasse 26, A-2700 Wiener Neustadt
Tel: +43/1/205 10 35 / 340
www.postgresql-support.de, www.postgresql-support.com


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] any psql static binary for iphone ?

On Mon, Aug 18, 2008 at 4:02 AM, Hans-Juergen Schoenig
<postgres@cybertec.at> wrote:
> Peter Eisentraut wrote:
>>
>> On Sunday, 17 August 2008, Oleg Bartunov wrote:
>>
>>>
>>> is there psql static binary, which I can use on my iphone (version 1) ?
>>>
>>
>> I have no idea, but just as a thought, using phpPgAdmin might be a good
>> workaround.
>>
>>
>
> postgres seems to compile nicely on the iphone.
> compilations stops at gram.c however :) the file is just too big to compile
> on 96MB of RAM :).
> first the screen turns to black and then it reboots.
> so far i have not seen how i can add a swap file to the iphone and i was too
> lazy to cross compile *g*.
> but until gram.c - no warning; no errors *g*.

iirc you don't have to compile gram.c for psql?

merlin

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [GENERAL] Cluster Up-time.

On Mon, Aug 18, 2008 at 03:53:32AM -0700, Alexi Gen wrote:
> Is there a table/view available from where I can check what time the cluster was started?
> Need this to calculate the uptime of the cluster.

In PostgreSQL 8.1 and later you can run these queries to get the start
time and uptime:

SELECT pg_postmaster_start_time();
SELECT now() - pg_postmaster_start_time();

--
Michael Fuhr

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [HACKERS] Compatibility types, type aliases, and distinct types

In my experience synonyms, like rules, are hacks and should be avoided, although there are cases where they can save DBAs some work during a transition from one situation to a better one.

> There is also another possible way one might want to create a compatibility
> type.  Instead of creating a new type, create an alias for an existing type,
> much like we currently have built-in mappings for int -> int4, bigint ->
> int8, etc.  The difference here is that the type you put in is not the same
> as the one you get dumped out.  So depending on taste and requirements, a
> user might want to choose the distinct type or the alias route.

An example or two of where you expect this kind of functionality to be useful would be helpful here. Could you use it for defining an Oracle-compatible varchar2, and how would it work then?

On Mon, Aug 18, 2008 at 3:33 PM, Stephen Frost <sfrost@snowman.net> wrote:
* Peter Eisentraut (peter_e@gmx.net) wrote:
> There is also another possible way one might want to create a compatibility
> type.  Instead of creating a new type, create an alias for an existing type,
> much like we currently have built-in mappings for int -> int4, bigint ->
> int8, etc.  The difference here is that the type you put in is not the same
> as the one you get dumped out.  So depending on taste and requirements, a
> user might want to choose the distinct type or the alias route.

The alias route gets me thinking about Oracle synonyms..  That'd be nice
to have in PG for a number of object types.  Most recently I was wishing
I could create a schema synonym, though being able to do tables/views
would have worked as well in that case, just a bit more work.

> What do you think about adding this kind of support to PostgreSQL?  Obviously,
> some details need to be worked out, but most of this is actually
> straightforward catalog manipulation.

I like the concept.  Not sure how much I'd end up using it, personally.

       Thanks,

               Stephen



Re: [GENERAL] Regression failing on build -> ERROR: could not access file "$libdir/plpgsql": No such file or directory

On Fri, 2008-08-15 at 16:37 -0400, Tom Lane wrote:
> Reid Thompson <Reid.Thompson@ateb.com> writes:
> > Would plpgsql.so get built with..
>
> > ./configure --prefix=/usr/local/pgsql833 --without-readline --disable-shared
> ^^^^^^^^^^^^^^^^
>
> Uh, no. That probably explains why regress.so didn't get built, either.
>
> regards, tom lane

OK -- rebuilt without '--disable-shared', and
=======================
All 114 tests passed.
=======================

1) Should the Docs note that '--disable-shared' will prevent the compile
time regression checks from running
2) Should the compilation note to the user that '--disable-shared' will
prevent compile time regression checks from running
3) should the error message output when regression fails for the above
reason not be along the lines of
...snip...
gmake[2]: stat:regress.so: There are too many levels of symbolic links to translate a path name.
rm -f regress.so
ln -s regress.so
gmake[2]: stat:regress.so: There are too many levels of symbolic links to translate a path name.
...snip...
4) if '--disable-shared' is passed, should the user be notified that
compile time regression checks will be skipped ( and then skip them ).

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [HACKERS] Compatibility types, type aliases, and distinct types

* Peter Eisentraut (peter_e@gmx.net) wrote:
> There is also another possible way one might want to create a compatibility
> type. Instead of creating a new type, create an alias for an existing type,
> much like we currently have built-in mappings for int -> int4, bigint ->
> int8, etc. The difference here is that the type you put in is not the same
> as the one you get dumped out. So depending on taste and requirements, a
> user might want to choose the distinct type or the alias route.

The alias route gets me thinking about Oracle synonyms.. That'd be nice
to have in PG for a number of object types. Most recently I was wishing
I could create a schema synonym, though being able to do tables/views
would have worked as well in that case, just a bit more work.

> What do you think about adding this kind of support to PostgreSQL? Obviously,
> some details need to be worked out, but most of this is actually
> straightforward catalog manipulation.

I like the concept. Not sure how much I'd end up using it, personally.

Thanks,

Stephen

[SQL] COPY TO with FORCE QUOTE *

I would like to be able to force quotes on all columns in the COPY TO
command without having to name all columns explicitly. E.g.:

COPY a TO '/tmp/a.csv' WITH CSV FORCE QUOTE *;

Reading the documentation, there doesn't seem to be a way to do this.
Am I missing something? If not, could I submit this as a feature
request?
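
For reference, the per-column form that does exist in 8.3 requires naming each column explicitly (the column names below are placeholders for table a's actual columns):

COPY a TO '/tmp/a.csv' WITH CSV FORCE QUOTE col1, col2, col3;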

many thanks, jackson

--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql

Re: [HACKERS] any psql static binary for iphone ?

On Mon, 18 Aug 2008, Gregory Stark wrote:

>
> I haven't looked at it but there's this:
>
> http://www.postgresql.org/about/news.988

Yes, I know it. But it's not free, and one has to use the iTunes program
to buy it from the AppStore.

Regards,
Oleg
_____________________________________________________________
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] So what about XSLT?

On Friday, 15 August 2008, Tom Lane wrote:
> > well, contrib/xml2/xslt_proc.c has 172 lines. So I suggest we just
> > import that to core and drop the rest of the module as redundant.
>
> I assume that wouldn't provide the functionality Peter wants; else the
> above would have happened already in 8.3.

Well, another part of my hesitation was the question whether we want to deal
with yet another library. But since libxslt has since snuck into configure
and is well-established, most of that concern goes away.

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [GENERAL] failed to re-find parent key in "..."

On Mon, Aug 18, 2008 at 3:09 AM, Joao Ferreira gmail
<joao.miguel.c.ferreira@gmail.com> wrote:
> What risks are we facing if we choose not to update postgresql ?

http://www.postgresql.org/support/versioning


--
Regards,
Richard Broersma Jr.

Visit the Los Angeles PostgreSQL Users Group (LAPUG)
http://pugs.postgresql.org/lapug

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

[HACKERS] Compatibility types, type aliases, and distinct types

I have been hacking around for a while trying to create some example Oracle
compatibility types. Canonical examples: varchar2 and number. With the new
features in 8.3 and 8.4, such as user-definable typmods and type categories,
it appears to be actually possible to create a type equivalent to numeric or
varchar entirely in user space. Cool.

Actually doing this, however, appears to be shockingly complicated. You need
to redefine all the input/output/send/receive functions and all the cast
functions and casts and then tie them all together. I don't expect that this
is something a user would succeed in, and not even an experienced developer
would want to type all that in. I actually had to write a script to generate
all that code.

So while thinking about how to make this simpler I remembered the "distinct
type" feature of SQL, which works quite similarly, namely the new type has
the same structure as the old type, but is a separate entity. It looks like

CREATE TYPE newtype AS oldtype;

This feature by itself could be quite useful, and then we could simply add
something like

CREATE TYPE newtype AS oldtype WITH CASTS;

to copy all the casts as well, so the new type can be used in contexts where
the old type could be used.
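
Spelled out with the varchar2 example from above (only a sketch of the proposed, not-yet-existing syntax):

CREATE TYPE varchar2 AS varchar WITH CASTS;
CREATE TABLE t (code varchar2);  -- same structure as varchar, but a separate type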

There is also another possible way one might want to create a compatibility
type. Instead of creating a new type, create an alias for an existing type,
much like we currently have built-in mappings for int -> int4, bigint ->
int8, etc. The difference here is that the type you put in is not the same
as the one you get dumped out. So depending on taste and requirements, a
user might want to choose the distinct type or the alias route.

What do you think about adding this kind of support to PostgreSQL? Obviously,
some details need to be worked out, but most of this is actually
straightforward catalog manipulation.

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] migrate data 6.5.3 -> 8.3.1

Tom Lane wrote:
> A further suggestion is that you use -d or even -D option on the dump.
> I think there have been some corner-case changes in COPY data format
> since 6.5 days; which might or might not bite you, but why take the
> chance ...


%/usr/local/pgsql/bin/pg_dump -D itt_user > itt_user.dump
Backend sent D message without prior T
Backend sent D message without prior T
...
... (about 2 screens of same messages)
...
Backend sent D message without prior T
Backend sent D message without prior T

and then it hangs.

Then i've tried it with -d option:

%/usr/local/pgsql/bin/pg_dump -d itt_user > itt_user.dump
Killed

I didn't kill pg_dump, so I think it was killed by the system after
pg_dump grew past some system limit. The size of itt_user.dump is
about 2MB (1974272 bytes), and the last lines in that file are:

INSERT INTO "ip_log" VALUES ('1.1.1.1','user1',300000);
INSERT INTO "ip_log" VALUES ('1.1.1.2','user2',500000);
INSERT INTO "ip_log" VALUES

I'm keeping my fingers crossed that the dumps I did previously will work.

--
alexander lunyov

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [GENERAL] Changing between ORDER BY DESC and ORDER BY ASC

On Fri, Aug 15, 2008 at 9:35 PM, William Garrison <postgres@mobydisk.com> wrote:
Is there an easy way to write one single query that can alternate between ASC and DESC orders?  Ex:


Take a look at this link
http://www.mail-archive.com/pgsql-general@postgresql.org/msg111788.html
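
One common shape for this (a hedged sketch using a hypothetical items(name) table; not necessarily the exact approach in the linked post) folds the direction into CASE expressions in the ORDER BY:

PREPARE list_items(boolean) AS
    SELECT * FROM items
    ORDER BY CASE WHEN $1 THEN name END ASC,
             CASE WHEN NOT $1 THEN name END DESC;
EXECUTE list_items(true);   -- ascending
EXECUTE list_items(false);  -- descending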

--
Regards,
Sergey Konoplev

Re: [HACKERS] Overhauling GUCS

What I'm interested in is auto-tuning, not necessarily overhauling GUCS, which happens to be the subject of this thread :-)
Having done a SELECT * FROM pg_settings, all the information you need seems to be there...
Maybe I'm being over-simplistic here, but the important bit is knowing how you should tune stuff - and this is what I'm hoping to learn through this process.
Now, you could probably sidestep UI and GUCS concerns by moving the auto-tuning process inside the database. You don't need fancy GUIs for guessing configuration parameters, and if you can already do that, coming up with a GUI should be pretty straightforward.
For example, I see no reason why you couldn't capture the logic of tuning in a couple of PL/Python functions to look up usage stats, size of indices etc. PL/Python being an "untrusted" language, you could even write a new postgresql.conf file to disk, with the suggested alterations. Cheap, quick and cheerful!
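As a trivial starting point for such a function (a sketch of my own, not from the thread), the settings that differ from their defaults are already visible from SQL:

SELECT name, setting, unit, source
  FROM pg_settings
 WHERE source <> 'default'
 ORDER BY name;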

Perhaps the auto-tuning conversation should take place in a separate thread; how do you feel about changing the subject line? The most interesting bit is discussing and testing tuning strategies. This, of course, is related to the [admin] and [perform] side of things, but there is also a development dimension. As soon as a tuning strategy is published, a number of tools will certainly follow.

Michael

Yes, there's a grand plan for a super-wizard that queries the database for size, index, and statistics information for figure out what to do; I've been beating that drum for a while now.  Unfortunately, the actual implementation is blocked behind the dreadfully boring work of sorting out how to organize and manage the GUC information a bit better, and the moderately boring work of building a UI for modifying things.  If you were hoping to work on the sexy autotuning parts without doing some of the grunt work, let me know if you can figure out how so I can follow you.


Re: [SQL] Select function with set return type

Nacef LABIDI <nacef.l@gmail.com> wrote:

> Hi all,
>
> I am writing some functions whose return type is a SETOF of a datatype that I
> have defined. How can I test them with a select statement?
> Doing select my_function(); returns: set-valued function called in context that
> cannot accept a set

Try 'select * from my_function();'


Andreas
--
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect. (Linus Torvalds)
"If I was god, I would recompile penguin with --enable-fly." (unknown)
Kaufbach, Saxony, Germany, Europe. N 51.05082°, E 13.56889°

--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql

Re: [PERFORM] Optimizing a VIEW

On Fri, 15 Aug 2008, Madison Kelly wrote:
> Below I will post the VIEW and a sample of the query's EXPLAIN ANALYZE.
> Thanks for any tips/help/clue-stick-beating you may be able to share!

This query looks incredibly expensive:

> SELECT
...
> FROM
> customer a,
> history.customer_data b,
> history.customer_data c,
> history.customer_data d,
> history.customer_data e,
> history.customer_data f,
> history.customer_data g,
> history.customer_data h,
> history.customer_data i,
> history.customer_data j,
> history.customer_data k,
> history.customer_data l
> WHERE
> a.cust_id=b.cd_cust_id AND
> a.cust_id=c.cd_cust_id AND
> a.cust_id=d.cd_cust_id AND
> a.cust_id=e.cd_cust_id AND
> a.cust_id=f.cd_cust_id AND
> a.cust_id=g.cd_cust_id AND
> a.cust_id=h.cd_cust_id AND
> a.cust_id=i.cd_cust_id AND
> a.cust_id=j.cd_cust_id AND
> a.cust_id=k.cd_cust_id AND
> a.cust_id=l.cd_cust_id AND
...

I would refactor this significantly, so that instead of returning a wide
result, it would return more than one row per customer. Just do a single
join between customer and history.customer_data - it will run much faster.
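
A minimal sketch of that single-join shape, using only the tables and columns visible above:

SELECT a.cust_id, b.*
  FROM customer a
  JOIN history.customer_data b ON b.cd_cust_id = a.cust_id;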

Matthew

--
Here we go - the Fairy Godmother redundancy proof.
-- Computer Science Lecturer

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

[SQL] Select function with set return type

Hi all,

I am writing some functions whose return type is a SETOF of a datatype that I have defined. How can I test them with a select statement?
Doing select my_function(); returns: set-valued function called in context that cannot accept a set

Thanks to all

Nacef

[NOVICE] Triggers and Function.

How can I identify whether a column is actually part of the UPDATE that fired
the trigger?

For example

If I have a table with three columns col1, col2 and col3. The types
don't matter to the question but may to the answer.

I run "UPDATE example set col1 = 'NewVal', col2 = 2 where col1 =
'OldVal';"

A BEFORE trigger can test OLD.* against NEW.* and see that col1 has changed
and col2 may or may not have. That is fine for tracking what has actually
changed, but how can I find out that col3 was not part of the UPDATE?

I need to know if the client has updated a column, even if it still has
the same value.
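
For reference, a sketch of the OLD/NEW comparison described above (the trigger and function names are made up; note this only detects value changes, so it cannot tell whether col3 appeared in the statement's SET list):

CREATE OR REPLACE FUNCTION example_track_changes() RETURNS trigger AS $$
BEGIN
    IF NEW.col1 IS DISTINCT FROM OLD.col1 THEN
        RAISE NOTICE 'col1 changed';
    END IF;
    IF NEW.col2 IS DISTINCT FROM OLD.col2 THEN
        RAISE NOTICE 'col2 changed';
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER example_track_changes_trg
    BEFORE UPDATE ON example
    FOR EACH ROW EXECUTE PROCEDURE example_track_changes();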


Nottingham Clinical Research Limited
Isaac Newton Centre, Nottingham Science and Technology Park, Nottingham NG7 2RH England
Registered in England No. 2244384

--
Sent via pgsql-novice mailing list (pgsql-novice@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-novice

Re: [SPAM?]: Re: [HACKERS] proposal sql: labeled function params

On Mon, 2008-08-18 at 10:51 +0300, Peter Eisentraut wrote:
> On Saturday, 16 August 2008, Hannu Krosing wrote:
> > A "label" is the same thing as "variable"/"attribute"/"argument name" in
> > all programming languages I can think of. Why do you need two kinds of
> > argument names in postgreSQL ?
> >
> > maybe you are after something like keyword arguments in python ?
> >
> > http://docs.python.org/tut/node6.html#SECTION006720000000000000000
> >
> > keyword arguments are a way of saying that you don't know all variable
> > names (or "labels" if you prefer) at function defining time and are
> > going to pass them in when calling.
>
> I think we are beginning to talk about the same thing. (Meaning you and me --
> not sure about the rest. :) )

Yes, I noticed that. Maybe we are onto something ;)

The exact moment I sent my mail away, I received yours.

------------
Hannu


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] proposal sql: labeled function params

On Mon, 2008-08-18 at 08:53 +0200, Pavel Stehule wrote:
> 2008/8/17 Hannu Krosing <hannu@2ndquadrant.com>:
> > On Sun, 2008-08-17 at 17:59 +0200, Pavel Stehule wrote:
> >> Hannu
> >>
> >> it's not possible in plpgsql, because we are not able iterate via record.
> >
> > just add function for iterating over record :)
>
> it's not easy when iterating should be fast - when the record's fields have
> different types, it isn't possible to cache the execution plan.

the iterator should convert them to some common type like TEXT

------------
Hannu

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[GENERAL] Cluster Up-time.

Hello,

Is there a table/view available from where I can check what time the cluster was started?
Need this to calculate the uptime of the cluster.
Or is there something else that I need to do in order to calculate this?
Any help on this is appreciated.

Cheers!
Alexi



Re: [SQL] INSERT or UPDATE

On Mon, 18.08.2008 at 9:44:48 +0200, Andreas Kraftl wrote:
> But there is also the possibility that the row already exists, which means
> I need an UPDATE instead of the INSERT.
>
> Is it possible in SQL to decide between an UPDATE and an INSERT, or must I
> program something with PHP, for example?

You can use your own function for this; see
http://www.postgresql.org/docs/current/static/plpgsql-control-structures.html#PLPGSQL-ERROR-TRAPPING
Example 38-1. Exceptions with UPDATE/INSERT
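
A condensed version of the pattern that example shows (the db table with key/data columns is the documentation's placeholder, not something from this thread):

CREATE FUNCTION merge_db(k integer, v text) RETURNS void AS $$
BEGIN
    LOOP
        -- try the UPDATE first
        UPDATE db SET data = v WHERE key = k;
        IF found THEN
            RETURN;
        END IF;
        -- not there: try to INSERT, retrying if another session beat us to it
        BEGIN
            INSERT INTO db (key, data) VALUES (k, v);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- loop back and try the UPDATE again
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;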


HTH, Andreas
--
Andreas Kretschmer
Kontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)
GnuPG-ID: 0x3FFF606C, privat 0x7F4584DA http://wwwkeys.de.pgp.net

--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql

[COMMITTERS] stackbuilder - wizard: Ensure the wizard will exit cleanly on Mac when

Log Message:
-----------
Ensure the wizard will exit cleanly on Mac when completed.

Modified Files:
--------------
wizard:
StackBuilder.cpp (r1.8 -> r1.9)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/stackbuilder/wizard/StackBuilder.cpp.diff?r1=1.8&r2=1.9)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

[GENERAL] Re: pg_restore fails on Windows

Magnus Hagander wrote:
> Tom Lane wrote:
> > I wrote:
> >> Of course the larger issue is why it's failing --- 150MB doesn't seem
> >> like that much for a modern machine. I suspect that PQerrorMessage()
> >> would tell us something useful, but pg_restore isn't letting us see it.
> >
> > I've applied a patch for the latter issue. But the only way we can find
> > out what's happening is if someone will build a Windows version from CVS
> > tip for the OP...
>
> Attached is a pg_restore.exe off CVS tip today, which should include the
> patch. Please try this one.
>
> //Magnus
>

I tested the restore using the provided pg_restore.exe. The output is:

for --inserts mode:
-------------------------------
pg_restore: connecting to database for restore
Password:
pg_restore: creating SCHEMA public
pg_restore: creating COMMENT SCHEMA public
pg_restore: creating TABLE hibtableattachmentxmldata
pg_restore: restoring data for table "hibtableattachmentxmldata"
pg_restore: [archiver (db)] Error from TOC entry 1731; 0 16429 TABLE DATA hibtableattachmentxmldata postgres
pg_restore: [archiver (db)] could not execute query: could not send data to server: No buffer space available (0x0000274
7/10055)
Command was: INSERT INTO hibtableattachmentxmldata VALUES ('1111', 'FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
FFFFFFFFFFFFFFFFFFFFF...
pg_restore: setting owner and privileges for SCHEMA public
pg_restore: setting owner and privileges for COMMENT SCHEMA public
pg_restore: setting owner and privileges for ACL public
pg_restore: setting owner and privileges for TABLE hibtableattachmentxmldata
WARNING: errors ignored on restore: 1

for COPY mode
------------------------
pg_restore: connecting to database for restore
Password:
pg_restore: creating SCHEMA public
pg_restore: creating COMMENT SCHEMA public
pg_restore: creating TABLE hibtableattachmentxmldata
pg_restore: restoring data for table "hibtableattachmentxmldata"
pg_restore: [archiver (db)] error returned by PQputCopyData: could not send data to server: No buffer space available (0
x00002747/10055)
pg_restore: *** aborted because of error

The restore in the COPY mode was obviously aborted, whereas the --inserts finished with warnings.

Note: the provided pg_restore.exe is not compiled with support for compressed dumps.

Regards,
Tomas

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

[GENERAL] failed to re-find parent key in "..."

Hello all,

a few days ago I bumped into this:

---------------------------------------------
# vacuumdb -f -z -a
vacuumdb: vacuuming database "postgres"
VACUUM
vacuumdb: vacuuming database "rtdata"
vacuumdb: vacuuming of database "rtdata" failed: ERROR: failed to
re-find parent key in "timeslots_strs_var_ts_key"
---------------------------------------------

We are using postgres 8.1.4 and I realise this has been fixed in 8.1.6
but...

...before updating all our servers I need to have an idea of the extent
of the impact of this error.

It seems to me that this error may cause VACUUM, AUTOVACUUM and
VACUUM FULL operations to fail partially or totally. Am I correct? Or
does this only have an impact on VACUUM FULL?

We run many servers with a very heavily updated database application. In
this scenario, do I tell my boss that "updating postgres would be nice"
or that we "_must_ update postgres"?

What risks are we facing if we choose not to update postgresql?

thanks a lot for your advice.

joao

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

[COMMITTERS] pgsnap - pgsnap: Add non-default configuration report.

Log Message:
-----------
Add non-default configuration report.

Modified Files:
--------------
pgsnap/pgsnap:
ChangeLog (r1.69 -> r1.70)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgsnap/pgsnap/pgsnap/ChangeLog.diff?r1=1.69&r2=1.70)
pgsnap/pgsnap/lib:
navigate.php (r1.26 -> r1.27)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgsnap/pgsnap/pgsnap/lib/navigate.php.diff?r1=1.26&r2=1.27)
param.php (r1.10 -> r1.11)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgsnap/pgsnap/pgsnap/lib/param.php.diff?r1=1.10&r2=1.11)
reports.php (r1.7 -> r1.8)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgsnap/pgsnap/pgsnap/lib/reports.php.diff?r1=1.7&r2=1.8)

Added Files:
-----------
pgsnap/pgsnap/lib:
nondefaultparam.php (r1.1)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgsnap/pgsnap/pgsnap/lib/nondefaultparam.php?rev=1.1&content-type=text/x-cvsweb-markup)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

Re: [GENERAL] What's size of your PostgreSQL Database?

-----Original Message-----
From: Scott Marlowe <scott.marlowe@gmail.com>
>If you throw enough drives on a quality RAID controller at it you can
>get very good throughput. If you're looking at read only / read
>mostly, then RAID5 or 6 might be a better choice than RAID-10. But
>RAID 10 is my default choice unless testing shows RAID-5/6 can beat
>it.

I'm loading my slave server with RAID-0 across 3 IDE 7200 RPM drives.
Is this worse than a RAID 5 implementation?


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

[COMMITTERS] stackbuilder - wizard: Cleanup temp files on the mac.

Log Message:
-----------
Cleanup temp files on the mac.

Modified Files:
--------------
wizard:
App.cpp (r1.30 -> r1.31)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/stackbuilder/wizard/App.cpp.diff?r1=1.30&r2=1.31)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

Re: [HACKERS] any psql static binary for iphone ?

I haven't looked at it but there's this:

http://www.postgresql.org/about/news.988

--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Get trained by Bruce Momjian - ask me about EnterpriseDB's PostgreSQL training!

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [GENERAL] Experiences with BLOB + PostgreSQL

On 2008-08-14 20:03, juliano.freitas@ati.pe.gov.br wrote:

> I'd like to ask you about some experience in managing huge databases which
> store mostly binary files.

Do you mean BYTEA or large objects? Both have pros and cons.

> We're developing a system which is likely to grow up to terabytes in
> some years and I'd like to hear something from people who really
> administrate these kinds of databases.

I do not really administer this kind of database, but I see one
serious problem: a major version upgrade would be very hard.

A filesystem storage with metadata stored in a database would be much
better.
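
As a purely illustrative sketch of that approach (all table and column
names here are invented), the database would then hold only one row of
metadata per file, while the bytes themselves live on a fileserver:

CREATE TABLE attachment (
    id            serial PRIMARY KEY,
    file_path     text        NOT NULL,   -- location on the fileserver
    original_name text        NOT NULL,
    mime_type     text        NOT NULL,
    size_bytes    bigint      NOT NULL,
    md5sum        text,                   -- helps detect files changed or lost on disk
    uploaded_at   timestamptz NOT NULL DEFAULT now()
);

A dump/restore or major-version upgrade then only has to move this small
table rather than terabytes of binary data.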

Regards
Tometzky
--
Best of prhn - the funniest texts of the Polish UseNet
http://prhn.dnsalias.org/
"Chaos always defeats order, because it is better organized."
[ Terry Pratchett ]

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [HACKERS] Postgres-R

niranjan.k@nsn.com wrote:
> I wish to set up the Postgres-R environment; could you please let me know the steps for setting it up?
> Thanks.
>
>
>
Yeah, actually, I have not been successful in setting this up myself, but
let me give you some information.

1. Download the PostgreSQL snapshot source code from:
http://www.postgresql.org/ftp/snapshot/dev/
(this is a daily tarball)

2. Get the corresponding Postgres-R patch from:
http://www.postgres-r.org/downloads/

3. Apply the patch to the snapshot source, then configure and build like this:

./configure --enable-replication
make && make install

4. Install the GCS Ensemble, according to the documentation:
http://www.cs.technion.ac.il/dsl/projects/Ensemble/doc.html

5. Start the ensemble daemon, and gossip if necessary (that is, make sure
the two nodes can reach each other through the GCS).

6. Assuming you have two nodes, start up postgresql on both, create a
database 'db', and create a test table 'tb' (which should have a primary
key) on all nodes.

7. At the origin node, execute this command in the psql console:
alter database db start replication in group gcs;
(which means the database 'db' is the origin and 'gcs' is the GCS group name)

8. At the subscriber node, execute this command:
alter database db accept replication from group gcs;

Hope the information above is helpful, and keep in touch.

leiyonghua

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [pgsql-it-generale] [itpug] Appeal: advertising partnership for the PGDay

Gabriele Bartolini wrote:

> I am writing to inform you that, as of today, no Italian partner (or,
> improperly called 'sponsor') has come forward for the Italian /
> European PGDay.

Unfortunately, this does not surprise me: such initiatives are very
frequent by now, and companies are under pressure; moreover, the
perception (right or wrong as it may be) is that these initiatives,
however useful, bring little in terms of business.
We have had similar experiences in GFOSS.it.
Regards.
pc
--
Paolo Cavallini, see: * http://www.faunalia.it/pc *

--
Sent via pgsql-it-generale mailing list (pgsql-it-generale@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-it-generale

[pgsql-it-generale] Appeal: advertising partnership for the PGDay

Dear members,

I am writing to inform you that, as of today, no Italian partner (or,
improperly called 'sponsor') has come forward for the Italian /
European PGDay.

This is frankly rather disheartening, especially given the effort we are
putting into organising the most important PostgreSQL event ever held in
Italy and in Europe.

I invite all ITPUG members (and non-members) who own a company, or who
work for one, to take the initiative and/or pass along my request for a
partnership. The return in publicity will be worth the expense. Moreover,
the advertising arrangements will be accompanied by a regular contract
and invoice from our association, and will therefore be fully deductible.

For more information:

http://www.pgday.org/it/sponsor/come

Thank you.

Cheers,
Gabriele

--
Gabriele Bartolini: Open source programmer and data architect
Current Location: Prato, Tuscany, Italy
Associazione Italian PostgreSQL Users Group: www.itpug.org
gabriele.bartolini@gmail.com | www.gabrielebartolini.it
"If I had been born ugly, you would never have heard of Pelé", George Best
http://www.linkedin.com/in/gbartolini

--
Sent via pgsql-it-generale mailing list (pgsql-it-generale@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-it-generale

Re: [HACKERS] Patch: plan invalidation vs stored procedures

2008/8/18 Dimitri Fontaine <dfontaine@hi-media.com>:
> Hi,
>
> On Monday, 18 August 2008, Andrew Dunstan wrote:
>> > On Sat, Aug 16, 2008 at 09:40:19PM -0400, Tom Lane wrote:
>> >> This is not the kind of patch we put into stable branches.
>>
>> So what? That is not the only criterion for backpatching.
>
> I fail to understand why this problem is not qualified as a bug.
>

Does it change the results of some queries? Is it protection against a server hang?

> Regards,
> --
> dim
>

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] IN vs EXISTS equivalence

Hello

I did some quick tests on the pagila database.

8.4
postgres=# explain analyze select * from film f where exists (select
film_id from film_actor where f.film_id = film_id);
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------
Hash Join (cost=117.01..195.42 rows=966 width=390) (actual
time=36.011..43.314 rows=997 loops=1)
Hash Cond: (f.film_id = film_actor.film_id)
-> Seq Scan on film f (cost=0.00..65.00 rows=1000 width=390)
(actual time=0.027..1.971 rows=1000 loops=1)
-> Hash (cost=104.94..104.94 rows=966 width=2) (actual
time=35.886..35.886 rows=997 loops=1)
-> HashAggregate (cost=95.28..104.94 rows=966 width=2)
(actual time=32.650..34.139 rows=997 loops=1)
-> Seq Scan on film_actor (cost=0.00..81.62 rows=5462
width=2) (actual time=0.081..14.232 rows=5462 loops=1)
Total runtime: 45.373 ms
(7 rows)

8.3
postgres=# explain select * from film f where exists (select film_id
from film_actor where f.film_id = film_id);
QUERY PLAN
------------------------------------------------------------------------------------------
Seq Scan on film f (cost=0.00..4789.34 rows=500 width=390)
Filter: (subplan)
SubPlan
-> Index Scan using idx_fk_film_id on film_actor
(cost=0.00..28.35 rows=6 width=2)
Index Cond: ($0 = film_id)
(5 rows)

postgres=# explain analyze select * from film f where not exists
(select film_id from film_actor where f.film_id = film_id);
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------
Hash Anti Join (cost=149.90..240.24 rows=34 width=390) (actual
time=25.473..28.169 rows=3 loops=1)
Hash Cond: (f.film_id = film_actor.film_id)
-> Seq Scan on film f (cost=0.00..65.00 rows=1000 width=390)
(actual time=0.027..1.898 rows=1000 loops=1)
-> Hash (cost=81.62..81.62 rows=5462 width=2) (actual
time=24.398..24.398 rows=5462 loops=1)
-> Seq Scan on film_actor (cost=0.00..81.62 rows=5462
width=2) (actual time=0.035..12.400 rows=5462 loops=1)
Total runtime: 28.866 ms

postgres=# explain analyze select * from film f where not exists
(select film_id from film_actor where f.film_id = film_id);
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------
Seq Scan on film f (cost=0.00..4789.34 rows=500 width=390) (actual
time=5.874..22.654 rows=3 loops=1)
Filter: (NOT (subplan))
SubPlan
-> Index Scan using idx_fk_film_id on film_actor
(cost=0.00..28.35 rows=6 width=2) (actual time=0.016..0.016 rows=1
loops=1000)
Index Cond: ($0 = film_id)
Total runtime: 22.835 ms
(6 rows)

postgres=# explain analyze select * from film f where film_id in
(select film_id from film_actor);
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------
Hash Join (cost=117.01..195.42 rows=966 width=390) (actual
time=43.151..53.688 rows=997 loops=1)
Hash Cond: (f.film_id = film_actor.film_id)
-> Seq Scan on film f (cost=0.00..65.00 rows=1000 width=390)
(actual time=0.021..6.765 rows=1000 loops=1)
-> Hash (cost=104.94..104.94 rows=966 width=2) (actual
time=43.091..43.091 rows=997 loops=1)
-> HashAggregate (cost=95.28..104.94 rows=966 width=2)
(actual time=34.754..36.275 rows=997 loops=1)
-> Seq Scan on film_actor (cost=0.00..81.62 rows=5462
width=2) (actual time=0.024..15.746 rows=5462 loops=1)
Total runtime: 55.291 ms

postgres=# explain analyze select * from film f where film_id in
(select film_id from film_actor);
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------
Nested Loop IN Join (cost=0.00..175.25 rows=986 width=390) (actual
time=0.090..14.272 rows=997 loops=1)
-> Seq Scan on film f (cost=0.00..65.00 rows=1000 width=390)
(actual time=0.014..1.877 rows=1000 loops=1)
-> Index Scan using idx_fk_film_id on film_actor (cost=0.00..0.54
rows=6 width=2) (actual time=0.007..0.007 rows=1 loops=1000)
Index Cond: (film_actor.film_id = f.film_id)
Total runtime: 15.902 ms
(5 rows)

8.4
postgres=# explain analyze select * from film f where film_id not in
(select film_id from film_actor);
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------
Seq Scan on film f (cost=95.28..162.78 rows=500 width=390) (actual
time=24.324..26.015 rows=3 loops=1)
Filter: (NOT (hashed subplan))
SubPlan
-> Seq Scan on film_actor (cost=0.00..81.62 rows=5462 width=2)
(actual time=0.026..12.074 rows=5462 loops=1)
Total runtime: 26.201 ms
(5 rows)

8.3
postgres=# explain analyze select * from film f where film_id not in
(select film_id from film_actor);
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------
Seq Scan on film f (cost=95.28..162.78 rows=500 width=390) (actual
time=29.713..30.456 rows=3 loops=1)
Filter: (NOT (hashed subplan))
SubPlan
-> Seq Scan on film_actor (cost=0.00..81.62 rows=5462 width=2)
(actual time=0.051..13.649 rows=5462 loops=1)
Total runtime: 30.644 ms
(5 rows)


8.4 is about 10% faster than 8.3 on a small table (< 1000 rows), and 8.4
produces much better row estimates.

Regards
Pavel Stehule

2008/8/17 Tom Lane <tgl@sss.pgh.pa.us>:
> If you're still interested in testing CVS HEAD's handling of EXISTS,
> I've about finished what I wanted to do with it.
>
> regards, tom lane
>
> --
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers
>

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] Patch: plan invalidation vs stored procedures

Hi,

On Monday, 18 August 2008, Andrew Dunstan wrote:
> > On Sat, Aug 16, 2008 at 09:40:19PM -0400, Tom Lane wrote:
> >> This is not the kind of patch we put into stable branches.
>
> So what? That is not the only criterion for backpatching.

I fail to understand why this problem is not qualified as a bug.

Regards,
--
dim

Re: [HACKERS] Automatic Client Failover

On Fri, 2008-08-15 at 14:25 -0400, Bruce Momjian wrote:
> Simon Riggs wrote:
> > > > Implementation would be to make PQreset() try secondary connection if
> > > > the primary one fails to reset. Of course you can program this manually,
> > > > but the feature is that you wouldn't need to, nor would you need to
> > > > request changes to 27 different interfaces either.
> > >
> > > I assumed share/pg_service.conf would help in this regard; place the
> > > file on a central server and modify that so everyone connects to another
> > > server. Perhaps we could even add round-robin functionality to that.
> >
> > I do want to keep it as simple as possible, but we do need a way that
> > will work without reconfiguration at the time of danger. It needs to be
> > preconfigured and tested, then change controlled so we all know it
> > works.
>
> OK, so using share/pg_service.conf as an implementation example, how
> would this work? The application supplies multiple service names and
> libpq tries attaching to each one in the list until one works?

This could work in one of two ways (maybe more):
* supply a group for each service; if the main service goes down, try the
other services in your group
* supply a secondary service for each main service; if the primary goes
down, we look at its secondary service

It might also be possible to daisy-chain the retries, so that after trying
the secondary/others we move on to the secondary's secondary, in much the
same way that a telephone hunt group works.
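
Purely as an illustration of the idea being discussed (no such failover
behaviour exists in libpq today, and the multi-service connection syntax
at the end is hypothetical), pg_service.conf would simply define the
primary and its fallback as ordinary services, which the client would try
in order:

# pg_service.conf -- service names and hosts are invented for this sketch
[main]
host=db1.example.com
port=5432
dbname=app

[main_standby]
host=db2.example.com
port=5432
dbname=app

# Hypothetical client usage under the proposal: libpq walks the list and
# connects to the first service that answers, e.g.
#   psql "service=main,main_standby"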

This was suggested as a complementary feature alongside other
server-side features I'm working on. I'm not thinking of doing this
myself, since I know much less about the client side code than I'd need
to do this in the time available. Also, I'm not sure whether it is
unpopular or simply misunderstood.

--
Simon Riggs www.2ndQuadrant.com
PostgreSQL Training, Services and Support


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers