Monday, September 8, 2008

Re: [pgsql-advocacy] Ohio LinuxFest Booth

On Sat, 2008-09-06 at 13:11 -0400, Melanie wrote:
> Honestly, no paper will make it easier to talk him into it without a
> talk than with one. What are the details? Is the booth set up for 2
> days or 1? What is handed out? When is set up? I've worked booths
> in other worlds before, is this something where things are already set
> up and we're just basically sitting talking with people? I'd assume
> so based on we don't have a booth to take and set up. We attended PG
> West last year and it isn't on the plan for this year anyhow. Andrew
> is about 65% sold on the idea and Columbus is about 8 hours from us,
> not too bad of a drive.

Hello Melanie,

You're correct; you wouldn't need to bring a full booth or table along.
We would, however, need to coordinate getting swag, pamphlets and other
printed material, banners, decorations, and the like.

Basically the conference would set up a table in whichever location
they've chosen for the PG booth. In previous years I don't recall any
booth being out in the open, so it's probably a fairly safe assumption
that there would be a wall behind you for a banner.

Once the table is set up, many open source enthusiasts and other
interested parties will stop by and ask all kinds of questions about
Postgres (generally of the form "How does it compare to Database X?").
Occasionally someone will drop by just to express their support and
donate, or grab swag and run, or even take your picture - I was never
able to figure that one out.

Most of the booth activity takes place between panels; it's relatively
idle the rest of the time. An eight-hour drive isn't too bad (that's
about what it was from here to East, and that was a solo drive), but
keep in mind the conference runs only that one Saturday...

> I'm copying the list, I realized I didn't do a reply all the last
> time.

- Josh Williams

--
Sent via pgsql-advocacy mailing list (pgsql-advocacy@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-advocacy

Re: [HACKERS] Fast REVERSE() function?

2008/9/8 Andrew Dunstan <andrew@dunslane.net>:
>
>
> Mario Weilguni wrote:
>>>
>>> (Aside: presumably we could walk thru the string destructively,
>>> in-place, swapping bytes; I think that would be theoretically
>>> quickest...)
>>>
>>
>> Hmmm... I guess it will not work for UTF-8 or any other multibyte charset
>>
>
> Yes, quite.

orafce contains a multibyte (UTF-8) reverse function.

Pavel

>
> Perl's reverse might work with UTF8 - I've never tried.
>
> cheers
>
> andrew
>

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] For what should pg_stop_backup wait?

On Mon, 2008-09-08 at 11:57 -0400, Tom Lane wrote:
> Simon Riggs <simon@2ndquadrant.com> writes:
> >> So thinking we should test XLogArchiveCheckDone() for both
> >> stopxlogfilename and history file and then stat for the stop WAL file:
>
> > This seems better.
>
> Somehow I missed the 5-Apr patch that introduced this bogosity.
> Next time you make a fundamental change in the behavior of a function,
> how about updating its comment to match?

It felt OK when I did it, because of the clearly named boolean, but I
accept your point and will look to improve my code comments in the future.

> After studying it for a minute, I think that XLogArchiveCheckDone no
> longer even has a clearly explainable purpose; it needs to be split
> into two functions with two different behaviors. I plan to revert
> XLogArchiveCheckDone to its previous behavior and introduce
> XLogArchiveIsBusy (flipping the sense of the result) to use in
> pg_stop_backup.

You sound like you're in the middle of doing this yourself. Or would you
like me to do that?

--
Simon Riggs www.2ndQuadrant.com
PostgreSQL Training, Services and Support



Re: [PERFORM] inaccurate stats on large tables

Hi Kiran,

You gave great info on your problem.

First, is this the query you're actually trying to speed up, or is it a simplified version? It looks like the optimizer has already chosen the best execution plan for the given query. Since the query has no joins, we only have to consider access paths. You're fetching 58221/37909009 = 0.15% of the rows, so a sequential scan is clearly inappropriate. A basic index scan is likely to incur extra scattered I/O, so a bitmap index scan is favored.

To improve on this query's runtime, you could try any of the following:

- Reorganize the data to reduce this query's scattered I/O (i.e. cluster on "paliasorigin_search3_idx" rather than "paliasorigin_alias_casefold_idx"). Bear in mind, this may adversely affect other queries.

- Increase the cache hit frequency by ensuring the underlying filesystem cache has plenty of RAM (usually so under Linux) and checking that other concurrent queries aren't polluting the cache. Consider adding RAM if you think the working set of blocks required by most queries is larger than the combined Postgres and filesystem caches. If other processes than the db do I/O on this machine, consider them as resource consumers, too.

- Restructure the table, partitioning along a column that would be useful for pruning whole partitions for your painful queries. In this case, origin_id or tax_id seems like a good bet, but again, consider other queries against this table. 38 million rows probably makes your table around 2 GB (guessing about 55 bytes/row). Depending on the size and growth rate of the table, it may be time to consider partitioning. Out of curiosity, what runtime are you typically seeing from this query? The explain-analyze ran in 113 ms, which I'm guessing is the effect of caching, not the runtime you're trying to improve.

- Rebuild the indexes on this table. Under certain use conditions, btree indexes can get horribly bloated. Rebuilding the indexes returns them to their most compact and balanced form. For example: reindex index "paliasorigin_search3_idx"; Apart from the locking and CPU usage during the rebuild, this has no negative consequences, so I'd try this before something drastic like partitioning. First review the current size of the index for comparison: select pg_size_pretty(pg_relation_size('paliasorigin_search3_idx'));

Since you asked specifically about improving the row-count estimate: as the previous responder said, you should consider increasing the statistics target. That will help if individual columns are being underestimated, but not if the overestimate is due to joint variation. In other words, the optimizer has no way to tell whether there is a logical relationship between columns A and B such that certain values in B occur only with certain values of A. Just judging from the names, it sounds like origin_id and tax_id might have a parent-child relationship, so I thought it was worth mentioning.

Do the columns individually have good estimates?
explain analyze select * from paliasorigin where origin_id=20;
explain analyze select * from paliasorigin where tax_id=9606;

If not, increase the statistics on that column, reanalyze the table, and recheck the selectivity estimate:
alter table paliasorigin alter column origin_id set statistics 20;
analyze paliasorigin;
explain analyze select * from paliasorigin where origin_id=20;

Good luck!
Matt

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

Re: [HACKERS] Fast REVERSE() function?

Mario Weilguni wrote:
>> (Aside: presumably we could walk thru the string destructively,
>> in-place, swapping bytes; I think that would be theoretically
>> quickest...)
>>
>
> Hmmm... I guess it will not work for UTF-8 or any other multibyte charset
>
>

Yes, quite.

Perl's reverse might work with UTF8 - I've never tried.

cheers

andrew


Re: [HACKERS] Fast REVERSE() function?

Hello

2008/9/8 Mario Weilguni <mario.weilguni@icomedias.com>:
>> (Aside: presumably we could walk thru the string destructively,
>> in-place, swapping bytes; I think that would be theoretically
>> quickest...)
>
> Hmmm... I guess it will not work for UTF-8 or any other multibyte charset
>

It isn't a problem, but I'm not sure whether ANSI SQL has this function.

Regards
Pavel Stehule




Re: [ADMIN] [GENERAL] secure connections

2008/9/7 Filip Rembiałkowski <plk.zuber@gmail.com>:
> 2008/9/7 c k <shreeseva.learning@gmail.com>:
>> Hello,
>> I have a question regarding secure connections between pg clients and pg
>> server. As per manual for 8.3 we can use openssl for this purpose. does odbc
>> driver supports it and how? Is there any other method for this?
>
> Yes the ODBC driver (as well as all decent postgres clients) does
> support SSL. You can choose SSL mode: disable/prefer/allow/require.
>
> Other method? Encrypted tunnels. VPNs. this kind of stuff.
>
> Remember that establishing SSL session is resource - expensive. This
> can be important if you plan lots of connections.
>
> cheers
>
> --
> Filip Rembiałkowski
>

You can use stunnel + pgbouncer when you have lots of connections; I'll
be using that combination without needing big resources on the machine.

Greetings

Alejandro

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin

Re: [HACKERS] Fast REVERSE() function?

On Mon, Sep 08, 2008 at 11:20:18AM -0400, Chris Browne wrote:
> I've got a case where I need to reverse strings, and find that, oddly
> enough, there isn't a C-based reverse() function.
> A search turns up pl/pgsql and SQL implementations:

Just for completeness - there are also pl/perl and C versions freely
available:
http://www.depesz.com/index.php/2007/07/30/indexable-field-like-something/ (pl/perl)
http://blog.frosties.org/post/2007/08/28/Fonction-reverse-C-avec-PostgreSQL (C)

Best regards,

depesz

--
Linkedin: http://www.linkedin.com/in/depesz / blog: http://www.depesz.com/
jid/gtalk: depesz@depesz.com / aim:depeszhdl / skype:depesz_hdl / gg:6749007


Re: [HACKERS] For what should pg_stop_backup wait?

Simon Riggs <simon@2ndquadrant.com> writes:
>> So thinking we should test XLogArchiveCheckDone() for both
>> stopxlogfilename and history file and then stat for the stop WAL file:

> This seems better.

Somehow I missed the 5-Apr patch that introduced this bogosity.
Next time you make a fundamental change in the behavior of a function,
how about updating its comment to match?

After studying it for a minute, I think that XLogArchiveCheckDone no
longer even has a clearly explainable purpose; it needs to be split
into two functions with two different behaviors. I plan to revert
XLogArchiveCheckDone to its previous behavior and introduce
XLogArchiveIsBusy (flipping the sense of the result) to use in
pg_stop_backup.

regards, tom lane


[pgsql-de-allgemein] Who would like to represent PostgreSQL at the IPC?

Hi folks,

More and more users are switching from MySQL to PostgreSQL in connection
with PHP. Three of the PHP gurus have now begged me to get someone from
our side to appear at the International PHP Conference (IPC).

You can find more details about the conference at:
http://it-republik.de/php/phpconference/

I'd need an answer fairly quickly on whether one of you could give a
talk there on something nice involving PostgreSQL. Unfortunately, most
of us will still be in Italy at that time. But "most" is far from "all",
and hopefully there is someone in the Wiesbaden area who could attend
the conference.

Best regards,

Susanne

P.S.: Oh, and if you're interested, please email me and I'll put you in
touch.

--
Susanne Ebrecht
52066 Aachen, Germany

Email: miracee@miracee.de

--
Sent via pgsql-de-allgemein mailing list (pgsql-de-allgemein@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-de-allgemein

Re: [HACKERS] Fast REVERSE() function?

> (Aside: presumably we could walk thru the string destructively,
> in-place, swapping bytes; I think that would be theoretically
> quickest...)

Hmmm... I guess it will not work for UTF-8 or any other multibyte charset


Re: [pgsql-fr-generale] No referential constraint to a non-UNIQUE column :-(

On Mon, 8 Sep 2008 16:56:35 +0200, Stephane Bortzmeyer <bortzmeyer@nic.fr>
wrote:

Hello,

> I'm trying to add a referential constraint, but the target column was
> not declared UNIQUE:
>
> essais=> CREATE TABLE Foo(name TEXT NOT NULL);
> CREATE TABLE
> essais=> CREATE TABLE Bar(truc TEXT, machin TEXT REFERENCES Foo(name));
> ERROR: there is no unique constraint matching given keys for referenced
> table "foo"
>
> [If Foo(name) is declared UNIQUE, it works.]
>
> First question: why does PostgreSQL 8.3 impose this additional
> constraint, which doesn't seem logical to me?

It isn't Pg 8.3, it's the very definition of referential integrity¹.
It has been in the PostgreSQL documentation since at least version 7.3 -
I didn't look back any further.

>
> Second question: what workaround can I use, other than giving up the
> safety that REFERENCES provides?
>

Triggers?

¹: http://en.wikipedia.org/wiki/Foreign_key

--
Sébastien


--
Sent via pgsql-fr-generale mailing list (pgsql-fr-generale@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-fr-generale

[pgsql-fr-generale] Re: [pgsql-fr-generale] No referential constraint to a non-UNIQUE column :-(

Stephane Bortzmeyer wrote:
> I'm trying to add a referential constraint, but the target column was
> not declared UNIQUE:
>
> essais=> CREATE TABLE Foo(name TEXT NOT NULL);
> CREATE TABLE
> essais=> CREATE TABLE Bar(truc TEXT, machin TEXT REFERENCES Foo(name));
> ERROR: there is no unique constraint matching given keys for referenced table "foo"
>
> [If Foo(name) is declared UNIQUE, it works.]
>
> First question: why does PostgreSQL 8.3 impose this additional
> constraint, which doesn't seem logical to me?
>

Why an additional constraint? A reference is a foreign key, and the idea
of a foreign key is that *one* element is linked to *one or more*
others. That first element must be unambiguously identifiable, and it
must be so for the foreign key's ON UPDATE and ON DELETE rules to work.

But if I understand correctly, you want to build a system where one or
more elements of one table are linked to one or more elements of a
table Y (n-to-n instead of PostgreSQL's 1-to-n).

> Second question: what workaround can I use, other than giving up the
> safety that REFERENCES provides?
>

You will have to create a system of triggers that does the same thing,
but without implementing ON UPDATE and ON DELETE.


--
Guillaume.
http://www.postgresqlfr.org
http://dalibo.com


[pgadmin-hackers] SVN Commit by dpage: r7459 - trunk/www/development

Author: dpage

Date: 2008-09-08 16:27:50 +0100 (Mon, 08 Sep 2008)

New Revision: 7459

Revision summary: http://svn.pgadmin.org/cgi-bin/viewcvs.cgi/?rev=7459&view=rev

Log:
Mark off roadmap item

Modified:
trunk/www/development/roadmap.php

--
Sent via pgadmin-hackers mailing list (pgadmin-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgadmin-hackers

[COMMITTERS] pgsql: Make pg_dump --data-only try to order the table dumps so that

Log Message:
-----------
Make pg_dump --data-only try to order the table dumps so that foreign keys'
referenced tables are dumped before the referencing tables. This avoids
failures when the data is loaded with the FK constraints already active.
If no such ordering is possible because of circular or self-referential
constraints, print a NOTICE to warn the user about it.

Modified Files:
--------------
pgsql/src/bin/pg_dump:
pg_dump.c (r1.499 -> r1.500)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/bin/pg_dump/pg_dump.c?r1=1.499&r2=1.500)
pg_dump.h (r1.140 -> r1.141)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/bin/pg_dump/pg_dump.h?r1=1.140&r2=1.141)
pg_dump_sort.c (r1.20 -> r1.21)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/bin/pg_dump/pg_dump_sort.c?r1=1.20&r2=1.21)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

[pgadmin-hackers] SVN Commit by dpage: r7458 - in trunk/pgadmin3: . pgadmin/dlg pgadmin/schema pgadmin/ui

Author: dpage

Date: 2008-09-08 16:23:11 +0100 (Mon, 08 Sep 2008)

New Revision: 7458

Revision summary: http://svn.pgadmin.org/cgi-bin/viewcvs.cgi/?rev=7458&view=rev

Log:
Add support for variadic functions in PostgreSQL 8.4+.


Modified:
trunk/pgadmin3/CHANGELOG
trunk/pgadmin3/pgadmin/dlg/dlgFunction.cpp
trunk/pgadmin3/pgadmin/schema/pgFunction.cpp
trunk/pgadmin3/pgadmin/ui/dlgFunction.xrc
trunk/pgadmin3/pgadmin/ui/xrcDialogs.cpp


Re: [pgsql-es-ayuda] Implementing IIf

I have a couple of eBooks on PL/pgSQL from the ones that used to be at http://www.postgresql.cl/ - let me know if you're interested.

Regards

2008/9/7 ROBWARE RUIZ <robware.ruiz@gmail.com>
I'm new to the PostgreSQL world; this is my first post, so let's see how it goes...
I normally used Access, but for some large-scale situations it's definitely better to use PostgreSQL. The thing is, Access has an IIf(Condition, TrueValue, FalseValue) function. I studied the PostgreSQL documentation and implemented it with the CASE WHEN ELSE END construct, but while studying that documentation I noticed that new functions can be added with the PL/pgSQL language. However, I haven't found documentation for that language. Could someone give me a hand with implementing the IIf function, and point me to a manual for the language, preferably in Spanish?

Thanks in advance.

Roberto M. Ruiz S.

--
karkul [Ramón Morales]
Software Developer
Xalapa, Ver., México

[HACKERS] Fast REVERSE() function?

I've got a case where I need to reverse strings, and find that, oddly
enough, there isn't a C-based reverse() function.

A search turns up pl/pgsql and SQL implementations:

create or replace function reverse_string(text) returns text as $$
DECLARE
    reversed_string text;
    incoming alias for $1;
BEGIN
    reversed_string := '';
    for i in reverse char_length(incoming)..1 loop
        reversed_string := reversed_string || substring(incoming from i for 1);
    end loop;
    return reversed_string;
END $$
language plpgsql;

CREATE OR REPLACE FUNCTION reverse(TEXT) RETURNS TEXT AS $$
    SELECT array_to_string(
        ARRAY(SELECT substring($1, s.i, 1)
              FROM generate_series(length($1), 1, -1) AS s(i)),
        '');
$$ LANGUAGE SQL IMMUTABLE;

Unfortunately, neither is particularly fast. This should be
"blinding-quick" in C, in comparison; reversing a set of bytes should
be able to be done mighty quick!

(Aside: presumably we could walk thru the string destructively,
in-place, swapping bytes; I think that would be theoretically
quickest...)

I could probably add this in as an SPI() function; is there a good
reason to try to avoid doing so?
--
output = reverse("ofni.sesabatadxunil" "@" "enworbbc")
http://www3.sympatico.ca/cbbrowne/sgml.html
"Consistency is the single most important aspect of *ideology.*
Reality is not nearly so consistent." - <cbbrowne@hex.net>


[pgadmin-hackers] Buglets in dlgFunction

Hi Guillaume

I found a couple of small bugs in the new layout code for dlgFunction:

- The Privileges (and SQL) panes don't size themselves properly when
viewing an existing function, until the dialogue is manually resized
- The privilege checkboxes are touching the role combo box above.

I've attached a screenshot for your amusement :-)

--
Dave Page
EnterpriseDB UK: http://www.enterprisedb.com

Re: [PERFORM] SAN and full_page_writes

On Mon, Sep 8, 2008 at 10:59 AM, Gregory Stark <stark@enterprisedb.com> wrote:

That "media recovery" it's referring to sounds like precisely our WAL full
page writes...


--
 Gregory Stark
 EnterpriseDB          http://www.enterprisedb.com
 Ask me about EnterpriseDB's PostGIS support!

That sounds right.

So the take-home from this is that NetApp does its best to protect you from partial page writes, but comes up short on untweaked NFS (see the doc for tuning). Otherwise you are protected, so long as your OS and file system implement fsync properly.

--Nik

Re: [HACKERS] [PATCH] Cleanup of GUC units code

Tom Lane <tgl@sss.pgh.pa.us> writes:

> Gregory Stark <stark@enterprisedb.com> writes:
>> Alvaro Herrera <alvherre@commandprompt.com> writes:
>>> It's good as a joke, but what if the user says '1024b'? Does it mean
>>> 1024 blocks or one kilobyte? If blocks, what size are we talking, the
>>> usual 512 bytes, or our BLCKSZ?
>
>> For what guc would you find a unit of posix-style "blocks" relevant?!
>
> The point isn't whether it's relevant, the point is that there's a
> fairly serious doubt as to what the user thought it meant.

My point was that we don't accept any form of "b" today and nobody has
suggested we should.

--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's RemoteDBA services!


Re: [pgsql-fr-generale] No referential constraint to a non-UNIQUE column :-(

This limitation comes in part, I think, from the fact that references
are implemented in PostgreSQL via triggers.
Triggers fire on delete/update in your table Foo and on insert into
your table Bar.
That would mean that when you delete a row from Foo, the trigger would
have to check both that it does not exist in Bar and, if it does, that
there is a duplicate in Foo that would allow the deletion.
Technically it's possible, but I think it would complicate things.
Besides, conceptually, this amounts to an n-to-n relationship, and I
don't know whether modeling it that way is really standard. Does anyone
know whether this is possible on other databases (say Oracle or SQL
Server)?

The workaround, for its part, is relatively simple. You just have to
write triggers equivalent to what PostgreSQL does, with the additional
test I described above. Have a look at the contribs to see whether this
doesn't already exist... But then they are no longer PostgreSQL
references at that point...


There you go...
On Mon, 8 Sep 2008 16:56:35 +0200, Stephane
Bortzmeyer <bortzmeyer@nic.fr> wrote:

> I'm trying to add a referential constraint, but the target column was
> not declared UNIQUE:
>
> essais=> CREATE TABLE Foo(name TEXT NOT NULL);
> CREATE TABLE
> essais=> CREATE TABLE Bar(truc TEXT, machin TEXT REFERENCES
> Foo(name)); ERROR: there is no unique constraint matching given keys
> for referenced table "foo"
>
> [If Foo(name) is declared UNIQUE, it works.]
>
> First question: why does PostgreSQL 8.3 impose this additional
> constraint, which doesn't seem logical to me?
>
> Second question: what workaround can I use, other than giving up the
> safety that REFERENCES provides?
>


Re: [HACKERS] Synchronous Log Shipping Replication

Hi,

Fujii Masao wrote:
> 1. A backend writes WAL to disk.
> 2. The backend wakes up WAL sender process and sleeps.
> 3. WAL sender process does WAL shipping and wakes up the backend.
> 4. The backend issues sync command.

Right, that would work. But still, the WAL writer process would block
during writing WAL blocks.

Are there compelling reasons for using the existing WAL writer process,
as opposed to introducing a new process?

> The timing of the process's receiving a signal is dependent on the scheduler
> of kernel.

Sure, so are pipes or shmem queues.

> The scheduler does not always handle a signal immediately.

What exactly are you proposing to use instead of signals? Semaphores are
pretty inconvenient when trying to wake up arbitrary processes or in
conjunction with listening on sockets via select(), for example.

See src/backend/replication/manager.c from Postgres-R for a working
implementation of such a process using select() and signaling.

Regards

Markus Wanner



[HACKERS] Common Table Expressions (WITH RECURSIVE) patch

These are my initial comments about the Common Table Expressions (CTE)
patch, also known as WITH [RECURSIVE]. These comments are based on the
patch here:

http://archives.postgresql.org/pgsql-patches/2008-08/msg00021.php

This is a semantically complex feature, and the standard is fairly
complex as well. So I'm approaching this by making my own
interpretations from the standard first (I included my interpretations
and section references at the end of this email) and comparing to the
behavior of the patch.

The following examples may be inconsistent with the standard. Some
have already been mentioned, and I don't think they all need to be
fixed for 8.4, but I mention them here for completeness.

* Mutual Recursion:

with recursive
foo(i) as (values(1) union all select i+1 from bar where i < 10),
bar(i) as (values(1) union all select i+1 from foo where i < 10)
select * from foo;
ERROR: mutual recursive call is not supported

The standard allows mutual recursion.

* Single Evaluation:

with
foo(i) as (select random() as i)
select * from foo union all select * from foo;
i
-------------------
0.233165248762816
0.62126633618027
(2 rows)

The standard specifies that non-recursive WITH should be evaluated
once.

* RHS only:

with recursive
foo(i) as (select i+1 from foo where i < 10 union all values(1))
select * from foo;
ERROR: Left hand side of UNION ALL must be a non-recursive term in a
recursive query

The standard does not require that the recursive term be on the RHS.

* UNION ALL only:

with recursive
foo(i) as (values(1) union select i+1 from foo where i < 10)
select * from foo;
ERROR: non-recursive term and recursive term must be combined with
UNION ALL

The standard seems to allow UNION ALL, UNION, INTERSECT, and EXCEPT
(when the recursive term is not on the RHS of the EXCEPT).

* Binary recursion and subselect strangeness:

with recursive foo(i) as
(values (1)
union all
select * from
(select i+1 from foo where i < 10
union all
select i+1 from foo where i < X) t)
select * from foo;

Produces 10 rows of output regardless of what "X" is. This should be
fixed for 8.4.
Also, this is non-linear recursion, which the standard seems to
disallow.

* Multiple recursive references:

with recursive foo(i) as
(values (1)
union all
select i+1 from foo where i < 10
union all
select i+1 from foo where i < 20)
select * from foo;
ERROR: Left hand side of UNION ALL must be a non-recursive term in a
recursive query

If we're going to allow non-linear recursion (which the standard
does not), this seems like it should be a valid case.

* Strange result with except:

with recursive foo(i) as
(values (1)
union all
select * from
(select i+1 from foo where i < 10
except
select i+1 from foo where i < 5) t)
select * from foo;
ERROR: table "foo" has 0 columns available but 1 columns specified

This query works if you replace "except" with "union". This should be
fixed for 8.4.

* Aggregates allowed:

with recursive foo(i) as
(values(1)
union all
select max(i)+1 from foo where i < 10)
select * from foo;

Aggregates should be blocked according to the standard.
Also, causes an infinite loop. This should be fixed for 8.4.

* DISTINCT should suppress duplicates:

with recursive foo(i) as
(select distinct * from (values(1),(2)) t
union all
select distinct i+1 from foo where i < 10)
select * from foo;

This outputs a lot of duplicates, but they should be suppressed
according to the standard. This query is essentially the same as
supporting UNION for recursive queries, so we should either fix both for
8.4 or block both for consistency.

* outer joins on a recursive reference should be blocked:

with recursive foo(i) as
(values(1)
union all
select i+1 from foo left join (values(1)) t on (i=column1))
select * from foo;

Causes an infinite loop, but the standard says using an outer join
in this situation should be prohibited. This should be fixed for 8.4.

* ORDER BY, LIMIT, and OFFSET are rejected for recursive queries. The
standard does not seem to say that these should be rejected.


The following are my interpretations of relevant parts of the SQL
standard (200n), and the associated sections. These are only my
interpretation, so let me know if you interpret the standard
differently.

Non-linear recursion forbidden:
7.13: Syntax Rules: 2.g.ii
My interpretation of 2.g.ii.2 is that WQN[k] and WQN[l] may be the
same <query name>.
7.13: Syntax Rules: 2.g.iv

EXCEPT can't be used for recursive queries if a recursive reference
appears on the RHS:
7.13: Syntax Rules: 2.g.iii.1

INTERSECT ALL/EXCEPT ALL can't be used for recursive queries:
7.13: Syntax Rules: 2.g.iii.5

recursive references must appear in FROM clause:
7.13: Syntax Rules: 2.g.iii.3
My interpretation of this rule is that it does not allow a
recursive reference in a subquery in the targetlist or a subquery
in the where clause.

stratum defined:
7.13: Syntax Rules: 2.f
7.13: Syntax Rules: 2.g.i.1

recursive query must have anchor for every stratum:
7.13: Syntax Rules: 2.g.i.3

outer joins not allowed to join recursive references:
7.13: Syntax Rules: 2.g.iii.6
7.13: Syntax Rules: 2.g.iii.7

Aggregates/HAVING disallowed:
7.13: Syntax Rules: 2.g.iii.4.B

Mutual recursion defined:
7.13: Syntax Rules: 2.g.i.1

Evaluate each WITH entry once, even if it's referenced multiple times:
7.13: General Rules: 1
7.13: General Rules: 2.b
See also:
http://archives.postgresql.org/pgsql-hackers/2008-07/msg01292.php

Evaluation order with mutual recursion:
7.13: General Rules: 2.a
7.13: General Rules: 2.b

Evaluation semantics:
7.13: General Rules: 2.c

DISTINCT:
7.13: General Rules: 2.c.iv
7.13: General Rules: 2.c.ix.3.A

I will provide comments about the code and documentation soon. This is a
very useful feature.

Regards,
Jeff Davis


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] [PATCH] Cleanup of GUC units code

Gregory Stark <stark@enterprisedb.com> writes:
> Alvaro Herrera <alvherre@commandprompt.com> writes:
>> It's good as a joke, but what if the user says '1024b'? Does it mean
>> 1024 blocks or one kilobyte? If blocks, what size are we talking, the
>> usual 512 bytes, or our BLCKSZ?

> For what guc would you find a unit of posix-style "blocks" relevant?!

The point isn't whether it's relevant, the point is that there's a
fairly serious doubt as to what the user thought it meant.

regards, tom lane


Re: [PERFORM] SAN and full_page_writes

"Nikolas Everett" <nik9000@gmail.com> writes:

> Sounds kind of scary. I think postgres forces the underlying OS and file
> system to do that stuff (sans the multi-host magic) using fsync. Is that
> right?

Yes, so you have to make sure that your filesystem really does support fsync
properly. I think most NFS implementations do that.

I was more concerned with:

Network Appliance supports a number of NFS client implementations for use
with databases. These clients provide write atomicity to at least 4 KB,
and support synchronous writes when requested by the database. Typically,
atomicity is guaranteed only to one virtual memory page, which may be as
small as 4 KB. However, if the NFS client supports a direct I/O mode that
completely bypasses the cache, then atomicity is guaranteed to the size
specified by the "wsize" mount option, typically 32 KB.

The failure of some NFS clients to assure write atomicity to a full
database block means that the soft atomicity requirement is not always
met. Some failures of the host system may result in a fractured database
block on disk. In practice such failures are rare. When they happen no
data is lost, but media recovery of the affected database block may be
required

That "media recovery" it's referring to sounds precisely like our WAL
full-page writes...


--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's PostGIS support!

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

[pgsql-fr-generale] No referential constraint to a non-UNIQUE column :-(

I am trying to add a referential constraint, but the referenced column
was not declared UNIQUE:

essais=> CREATE TABLE Foo(name TEXT NOT NULL);
CREATE TABLE
essais=> CREATE TABLE Bar(truc TEXT, machin TEXT REFERENCES Foo(name));
ERROR: there is no unique constraint matching given keys for referenced table "foo"

[If Foo(name) is declared UNIQUE, it works.]

First question: why does PostgreSQL 8.3 impose this extra constraint,
which does not seem logical to me?

Second question: what workaround can I use, short of giving up the
safety that REFERENCES gives me?
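In case it helps, the usual workaround for the second question is to declare the referenced column UNIQUE, so it forms the candidate key the error message asks for. A minimal sketch (SQLite is used only for a self-contained demonstration; PostgreSQL 8.3 accepts the same REFERENCES once the target column is UNIQUE):

```python
import sqlite3

# With Foo(name) declared UNIQUE, the foreign key is accepted and enforced.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE Foo(name TEXT NOT NULL UNIQUE)")
conn.execute("CREATE TABLE Bar(truc TEXT, machin TEXT REFERENCES Foo(name))")
conn.execute("INSERT INTO Foo VALUES ('ok')")
conn.execute("INSERT INTO Bar VALUES ('x', 'ok')")   # parent row exists
try:
    conn.execute("INSERT INTO Bar VALUES ('y', 'missing')")
    fk_enforced = False
except sqlite3.IntegrityError:
    fk_enforced = True   # the dangling reference is rejected, as desired
```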

--
Sent via pgsql-fr-generale mailing list (pgsql-fr-generale@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-fr-generale

Re: [PATCHES] hash index improving v3

Zdenek Kotala <Zdenek.Kotala@Sun.COM> writes:
> I attach the cleanup patch which we discussed last commit fest. It introduces
> new macros HashGetMetaPage and HashGetBitmap and, of course, it breaks on-disk
> format compatibility, which we will break anyway when Xiao's patch is
> committed.

If we accept that patch, I'm okay with including this one too. Still
hoping for some performance testing though. (Note that this one isn't
going to affect performance, so no need to include it for that purpose.)

regards, tom lane

--
Sent via pgsql-patches mailing list (pgsql-patches@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-patches

Re: [PERFORM] SAN and full_page_writes

Sorry about that.  I was having tunnel vision and pulled out the part that applied to me.  I also figured that the OS and file system information was superfluous but on second look it may not be.  This bit:

To satisfy the Durability requirement, all write operations must write through any OS
cache to stable storage before they are reported as complete or otherwise made visible.
Write-back caching behavior is prohibited, and data from failed writes must not appear in
an OS cache.
To satisfy the Serialization requirements, any OS cache must be fully coherent with the
underlying storage. For instance, each write must invalidate any OS-cached copies of
the data to be overwritten, on any and all hosts, prior to commitment. Multiple hosts may
access the same storage concurrently under shared-disk clustering, such as that
implemented by Oracle RAC and/or ASM.
Sounds kind of scary.  I think postgres forces the underlying OS and file system to do that stuff (sans the multi-host magic) using fsync.  Is that right?

It does look like there are some gotchas with NFS.

On Mon, Sep 8, 2008 at 10:16 AM, Gregory Stark <stark@enterprisedb.com> wrote:

"Nikolas Everett" <nik9000@gmail.com> writes:

> Thanks for pointing that out Bruce.
>
> NetApp has a 6 page PDF about NetApp and databases.  On page 4:

Skimming through this I think all 6 pages are critical. The sentence you quote
out of context pertains specifically to the NAS internal organization.

The previous pages discuss limitations of OSes, filesystems and especially NFS
clients which you may have to be concerned with as well.

--
 Gregory Stark
 EnterpriseDB          http://www.enterprisedb.com
 Ask me about EnterpriseDB's Slony Replication support!

Re: [PATCHES] [PgFoundry] Unsigned Data Types [1 of 2]

Hello Jaime,

> the same problem happens in joins, unions, hash, etc... so you have to
> look at those functions as well

Great! Added to the list to check. I am planning to build regression tests
for these types to catch these errors in the future. Thanks again for your
testing and review!

> PS: Jaime not Jamie :)

Sorry! I will spell your name correctly from now on!


Re: [HACKERS] [PATCH] Cleanup of GUC units code

Alvaro Herrera <alvherre@commandprompt.com> writes:

> It's good as a joke, but what if the user says '1024b'? Does it mean
> 1024 blocks or one kilobyte? If blocks, what size are we talking, the
> usual 512 bytes, or our BLCKSZ?

For what guc would you find a unit of posix-style "blocks" relevant?!

--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's 24x7 Postgres support!


Re: [HACKERS] [PATCH] Cleanup of GUC units code

Gregory Stark wrote:
> "Greg Sabino Mullane" <greg@turnstep.com> writes:
>
> > Tom Lane wrote:
> >> My vote is to reject the patch and work on a config checker.
> >
> > +1
> >
> >> postgres=# set work_mem = '1g';
> >> ERROR: invalid value for parameter "work_mem": "1g"
> >
> > Perhaps this would be a great place for a HINT listing all
> > valid inputs, if not there already?
>
> It is, I paraphrased it in my original message as:
>
> HINT: It's perfectly clear what you want but I'm going to refuse to do
> it until you type it exactly as I say: "GB"

It's good as a joke, but what if the user says '1024b'? Does it mean
1024 blocks or one kilobyte? If blocks, what size are we talking, the
usual 512 bytes, or our BLCKSZ?

--
Alvaro Herrera http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support


Re: [pgsql-es-ayuda] Organization of the PSDP-es

On Mon, Sep 8, 2008 at 7:46 AM, Luis Guevara <luis.a.guevara@gmail.com> wrote:
>>
>>
>> - "el que mucho aprieta, poco abarca"
>
> Jaime:
>
> Wasn't it "El que mucho abarca poco aprieta" ["he who grasps at too
> much holds on to little"]? Or perhaps it's a new version..... :-)
>

That's what I get for talking too much...


--
Regards,
Jaime Casanova
PostgreSQL support and training
Systems consulting and development
Guayaquil - Ecuador
Cel. (593) 87171157
--
TIP 5: Have you read our extensive FAQ?
http://www.postgresql.org/docs/faqs.FAQ.html

Re: [HACKERS] [PATCH] Cleanup of GUC units code

"Greg Sabino Mullane" <greg@turnstep.com> writes:

> Tom Lane wrote:
>> My vote is to reject the patch and work on a config checker.
>
> +1
>
>> postgres=# set work_mem = '1g';
>> ERROR: invalid value for parameter "work_mem": "1g"
>
> Perhaps this would be a great place for a HINT listing all
> valid inputs, if not there already?

It is, I paraphrased it in my original message as:

HINT: It's perfectly clear what you want but I'm going to refuse to do
it until you type it exactly as I say: "GB"


--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Get trained by Bruce Momjian - ask me about EnterpriseDB's PostgreSQL training!


[COMMITTERS] npgsql - Npgsql2: Added a test for bug 1006158

Log Message:
-----------

Added a test for bug 1006158

Modified Files:
--------------
Npgsql2/testsuite/noninteractive/NUnit20:
CommandTests.cs (r1.14 -> r1.15)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/npgsql/Npgsql2/testsuite/noninteractive/NUnit20/CommandTests.cs.diff?r1=1.14&r2=1.15)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

[COMMITTERS] npgsql - Npgsql2: [1010426] Broken Connections.

Log Message:
-----------

[1010426] Broken Connections. Added some code to help minimize those problems. Npgsql can now handle ThreadAbort exceptions and tries to clean things up.

Modified Files:
--------------
Npgsql2/src/Npgsql:
NpgsqlState.cs (r1.13 -> r1.14)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/npgsql/Npgsql2/src/Npgsql/NpgsqlState.cs.diff?r1=1.13&r2=1.14)


Re: [PATCHES] hash index improving v3

Common subdirectories: pgsql_hash.be486df40421/src and pgsql_hash/src
Common subdirectories: pgsql_hash.be486df40421/src/backend and pgsql_hash/src/backend
Common subdirectories: pgsql_hash.be486df40421/src/include and pgsql_hash/src/include
Common subdirectories: pgsql_hash.be486df40421/src/backend/access and pgsql_hash/src/backend/access
Common subdirectories: pgsql_hash.be486df40421/src/backend/access/hash and pgsql_hash/src/backend/access/hash
diff -rc pgsql_hash.be486df40421/src/backend/access/hash/hash.c pgsql_hash/src/backend/access/hash/hash.c
*** pgsql_hash.be486df40421/src/backend/access/hash/hash.c Mon Sep 8 16:14:00 2008
--- pgsql_hash/src/backend/access/hash/hash.c Mon Sep 8 16:14:00 2008
***************
*** 527,533 ****
* each bucket.
*/
metabuf = _hash_getbuf(rel, HASH_METAPAGE, HASH_READ, LH_META_PAGE);
! metap = (HashMetaPage) BufferGetPage(metabuf);
orig_maxbucket = metap->hashm_maxbucket;
orig_ntuples = metap->hashm_ntuples;
memcpy(&local_metapage, metap, sizeof(local_metapage));
--- 527,533 ----
* each bucket.
*/
metabuf = _hash_getbuf(rel, HASH_METAPAGE, HASH_READ, LH_META_PAGE);
! metap = HashPageGetMeta(BufferGetPage(metabuf));
orig_maxbucket = metap->hashm_maxbucket;
orig_ntuples = metap->hashm_ntuples;
memcpy(&local_metapage, metap, sizeof(local_metapage));
***************
*** 629,635 ****

/* Write-lock metapage and check for split since we started */
metabuf = _hash_getbuf(rel, HASH_METAPAGE, HASH_WRITE, LH_META_PAGE);
! metap = (HashMetaPage) BufferGetPage(metabuf);

if (cur_maxbucket != metap->hashm_maxbucket)
{
--- 629,635 ----

/* Write-lock metapage and check for split since we started */
metabuf = _hash_getbuf(rel, HASH_METAPAGE, HASH_WRITE, LH_META_PAGE);
! metap = HashPageGetMeta(BufferGetPage(metabuf));

if (cur_maxbucket != metap->hashm_maxbucket)
{
diff -rc pgsql_hash.be486df40421/src/backend/access/hash/hashinsert.c pgsql_hash/src/backend/access/hash/hashinsert.c
*** pgsql_hash.be486df40421/src/backend/access/hash/hashinsert.c Mon Sep 8 16:14:00 2008
--- pgsql_hash/src/backend/access/hash/hashinsert.c Mon Sep 8 16:14:00 2008
***************
*** 69,75 ****

/* Read the metapage */
metabuf = _hash_getbuf(rel, HASH_METAPAGE, HASH_READ, LH_META_PAGE);
! metap = (HashMetaPage) BufferGetPage(metabuf);

/*
* Check whether the item can fit on a hash page at all. (Eventually, we
--- 69,75 ----

/* Read the metapage */
metabuf = _hash_getbuf(rel, HASH_METAPAGE, HASH_READ, LH_META_PAGE);
! metap = HashPageGetMeta(BufferGetPage(metabuf));

/*
* Check whether the item can fit on a hash page at all. (Eventually, we
diff -rc pgsql_hash.be486df40421/src/backend/access/hash/hashovfl.c pgsql_hash/src/backend/access/hash/hashovfl.c
*** pgsql_hash.be486df40421/src/backend/access/hash/hashovfl.c Mon Sep 8 16:14:00 2008
--- pgsql_hash/src/backend/access/hash/hashovfl.c Mon Sep 8 16:14:00 2008
***************
*** 187,193 ****
_hash_chgbufaccess(rel, metabuf, HASH_NOLOCK, HASH_WRITE);

_hash_checkpage(rel, metabuf, LH_META_PAGE);
! metap = (HashMetaPage) BufferGetPage(metabuf);

/* start search at hashm_firstfree */
orig_firstfree = metap->hashm_firstfree;
--- 187,193 ----
_hash_chgbufaccess(rel, metabuf, HASH_NOLOCK, HASH_WRITE);

_hash_checkpage(rel, metabuf, LH_META_PAGE);
! metap = HashPageGetMeta(BufferGetPage(metabuf));

/* start search at hashm_firstfree */
orig_firstfree = metap->hashm_firstfree;
***************
*** 450,456 ****

/* Read the metapage so we can determine which bitmap page to use */
metabuf = _hash_getbuf(rel, HASH_METAPAGE, HASH_READ, LH_META_PAGE);
! metap = (HashMetaPage) BufferGetPage(metabuf);

/* Identify which bit to set */
ovflbitno = blkno_to_bitno(metap, ovflblkno);
--- 450,456 ----

/* Read the metapage so we can determine which bitmap page to use */
metabuf = _hash_getbuf(rel, HASH_METAPAGE, HASH_READ, LH_META_PAGE);
! metap = HashPageGetMeta(BufferGetPage(metabuf));

/* Identify which bit to set */
ovflbitno = blkno_to_bitno(metap, ovflblkno);
diff -rc pgsql_hash.be486df40421/src/backend/access/hash/hashpage.c pgsql_hash/src/backend/access/hash/hashpage.c
*** pgsql_hash.be486df40421/src/backend/access/hash/hashpage.c Mon Sep 8 16:14:00 2008
--- pgsql_hash/src/backend/access/hash/hashpage.c Mon Sep 8 16:14:00 2008
***************
*** 395,401 ****
pageopaque->hasho_flag = LH_META_PAGE;
pageopaque->hasho_page_id = HASHO_PAGE_ID;

! metap = (HashMetaPage) pg;

metap->hashm_magic = HASH_MAGIC;
metap->hashm_version = HASH_VERSION;
--- 395,401 ----
pageopaque->hasho_flag = LH_META_PAGE;
pageopaque->hasho_page_id = HASHO_PAGE_ID;

! metap = HashPageGetMeta(pg);

metap->hashm_magic = HASH_MAGIC;
metap->hashm_version = HASH_VERSION;
***************
*** 402,414 ****
metap->hashm_ntuples = 0;
metap->hashm_nmaps = 0;
metap->hashm_ffactor = ffactor;
! metap->hashm_bsize = BufferGetPageSize(metabuf);
/* find largest bitmap array size that will fit in page size */
for (i = _hash_log2(metap->hashm_bsize); i > 0; --i)
{
! if ((1 << i) <= (metap->hashm_bsize -
! (MAXALIGN(sizeof(PageHeaderData)) +
! MAXALIGN(sizeof(HashPageOpaqueData)))))
break;
}
Assert(i > 0);
--- 402,412 ----
metap->hashm_ntuples = 0;
metap->hashm_nmaps = 0;
metap->hashm_ffactor = ffactor;
! metap->hashm_bsize = HashGetMaxBitmapSize(pg);
/* find largest bitmap array size that will fit in page size */
for (i = _hash_log2(metap->hashm_bsize); i > 0; --i)
{
! if ((1 << i) <= metap->hashm_bsize)
break;
}
Assert(i > 0);
***************
*** 532,538 ****
_hash_chgbufaccess(rel, metabuf, HASH_NOLOCK, HASH_WRITE);

_hash_checkpage(rel, metabuf, LH_META_PAGE);
! metap = (HashMetaPage) BufferGetPage(metabuf);

/*
* Check to see if split is still needed; someone else might have already
--- 530,536 ----
_hash_chgbufaccess(rel, metabuf, HASH_NOLOCK, HASH_WRITE);

_hash_checkpage(rel, metabuf, LH_META_PAGE);
! metap = HashPageGetMeta(BufferGetPage(metabuf));

/*
* Check to see if split is still needed; someone else might have already
diff -rc pgsql_hash.be486df40421/src/backend/access/hash/hashsearch.c pgsql_hash/src/backend/access/hash/hashsearch.c
*** pgsql_hash.be486df40421/src/backend/access/hash/hashsearch.c Mon Sep 8 16:14:00 2008
--- pgsql_hash/src/backend/access/hash/hashsearch.c Mon Sep 8 16:14:00 2008
***************
*** 186,192 ****

/* Read the metapage */
metabuf = _hash_getbuf(rel, HASH_METAPAGE, HASH_READ, LH_META_PAGE);
! metap = (HashMetaPage) BufferGetPage(metabuf);

/*
* Compute the target bucket number, and convert to block number.
--- 186,192 ----

/* Read the metapage */
metabuf = _hash_getbuf(rel, HASH_METAPAGE, HASH_READ, LH_META_PAGE);
! metap = HashPageGetMeta(BufferGetPage(metabuf));

/*
* Compute the target bucket number, and convert to block number.
diff -rc pgsql_hash.be486df40421/src/backend/access/hash/hashutil.c pgsql_hash/src/backend/access/hash/hashutil.c
*** pgsql_hash.be486df40421/src/backend/access/hash/hashutil.c Mon Sep 8 16:14:00 2008
--- pgsql_hash/src/backend/access/hash/hashutil.c Mon Sep 8 16:14:00 2008
***************
*** 190,196 ****
*/
if (flags == LH_META_PAGE)
{
! HashMetaPage metap = (HashMetaPage) page;

if (metap->hashm_magic != HASH_MAGIC)
ereport(ERROR,
--- 190,196 ----
*/
if (flags == LH_META_PAGE)
{
! HashMetaPage metap = HashPageGetMeta(page);

if (metap->hashm_magic != HASH_MAGIC)
ereport(ERROR,
Common subdirectories: pgsql_hash.be486df40421/src/include/access and pgsql_hash/src/include/access
diff -rc pgsql_hash.be486df40421/src/include/access/hash.h pgsql_hash/src/include/access/hash.h
*** pgsql_hash.be486df40421/src/include/access/hash.h Mon Sep 8 16:14:00 2008
--- pgsql_hash/src/include/access/hash.h Mon Sep 8 16:14:00 2008
***************
*** 138,144 ****

typedef struct HashMetaPageData
{
- PageHeaderData hashm_phdr; /* pad for page header (do not use) */
uint32 hashm_magic; /* magic no. for hash tables */
uint32 hashm_version; /* version ID */
double hashm_ntuples; /* number of tuples stored in the table */
--- 138,143 ----
***************
*** 191,199 ****
#define BMPGSZ_BIT(metap) ((metap)->hashm_bmsize << BYTE_TO_BIT)
#define BMPG_SHIFT(metap) ((metap)->hashm_bmshift)
#define BMPG_MASK(metap) (BMPGSZ_BIT(metap) - 1)
- #define HashPageGetBitmap(pg) \
- ((uint32 *) (((char *) (pg)) + MAXALIGN(sizeof(PageHeaderData))))

/*
* The number of bits in an ovflpage bitmap word.
*/
--- 190,207 ----
#define BMPGSZ_BIT(metap) ((metap)->hashm_bmsize << BYTE_TO_BIT)
#define BMPG_SHIFT(metap) ((metap)->hashm_bmshift)
#define BMPG_MASK(metap) (BMPGSZ_BIT(metap) - 1)

+ #define HashPageGetBitmap(page) \
+ ((uint32*) PageGetContents(page))
+
+ #define HashGetMaxBitmapSize(page) \
+ (PageGetPageSize((Page) page)- \
+ (MAXALIGN(SizeOfPageHeaderData) + \
+ MAXALIGN(sizeof(HashPageOpaqueData))) )
+
+ #define HashPageGetMeta(page) \
+ ((HashMetaPage) PageGetContents(page))
+
/*
* The number of bits in an ovflpage bitmap word.
*/
Tom Lane wrote:
> "Alex Hunsaker" <badalex@gmail.com> writes:
>> On Fri, Sep 5, 2008 at 2:21 PM, Alex Hunsaker <badalex@gmail.com> wrote:
>>> Ok now that I made it so it actually *test* collisions, with the patch
>>> it always returns all rows that matched the hashed "key".
>
>> And here is the fix, we just forget to set the recheck flag for bitmap scans.
>
> For the convenience of anyone intending to test, here is an updated
> patch against CVS HEAD that incorporates Alex's fix.
>

Tom,
I attach the cleanup patch which we discussed last commit fest. It introduces
new macros HashGetMetaPage and HashGetBitmap and, of course, it breaks on-disk
format compatibility, which we will break anyway when Xiao's patch is
committed.

Zdenek

--
Zdenek Kotala Sun Microsystems
Prague, Czech Republic http://sun.com/postgresql

Re: [HACKERS] [PATCH] Cleanup of GUC units code

Greg Sabino Mullane wrote:

> Tom Lane wrote:

> > postgres=# set work_mem = '1g';
> > ERROR: invalid value for parameter "work_mem": "1g"
>
> Perhaps this would be a great place for a HINT listing all
> valid inputs, if not there already?

alvherre=# set work_mem = '1g';
ERROR: invalid value for parameter "work_mem": "1g"
HINT: Valid units for this parameter are "kB", "MB", and "GB".


--
Alvaro Herrera http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support


Re: [PERFORM] SAN and full_page_writes

"Nikolas Everett" <nik9000@gmail.com> writes:

> Thanks for pointing that out Bruce.
>
> NetApp has a 6 page PDF about NetApp and databases. On page 4:

Skimming through this I think all 6 pages are critical. The sentence you quote
out of context pertains specifically to the NAS internal organization.

The previous pages discuss limitations of OSes, filesystems and especially NFS
clients which you may have to be concerned with as well.

--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's Slony Replication support!


Re: [HACKERS] [PATCH] Cleanup of GUC units code

-----BEGIN PGP SIGNED MESSAGE-----
Hash: RIPEMD160


Tom Lane wrote:
> My vote is to reject the patch and work on a config checker.

+1

> postgres=# set work_mem = '1g';
> ERROR: invalid value for parameter "work_mem": "1g"

Perhaps this would be a great place for a HINT listing all
valid inputs, if not there already?

- --
Greg Sabino Mullane greg@turnstep.com
End Point Corporation
PGP Key: 0x14964AC8 200809081014
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8

-----BEGIN PGP SIGNATURE-----

iEYEAREDAAYFAkjFM2AACgkQvJuQZxSWSsiDvACdE6Wlrnu3uQH4mOpuEMvX0VQe
rXoAoPLCR5jKTWQH4GsHDtz5NNZXq4vA
=nRMS
-----END PGP SIGNATURE-----


Re: [ADMIN] problem with pg_stat_activity view

That explains it... I thought it was percent encoding, but at the same
time a strange place for a control character.

Regards,

Ben Kim


On Mon, 8 Sep 2008, Alvaro Herrera wrote:

>
>>> ERROR: invalid input syntax for type inet:
>>> "fe80::104d:416e:a8dc:c02e%12"
>>>
>>> SQL state: 22P02
>
> Interesting.
>
> When we HAVE_IPV6, there's code to remove stuff after a % in an address.
> My bet is that what you're running was not compiled with HAVE_IPV6 and
> so the code is not executed. Obviously this is invoked in
> pg_stat_get_backend_client_addr().
>
> /*
> * clean_ipv6_addr --- remove any '%zone' part from an IPv6 address string
> *
> * XXX This should go away someday!
> *
> * This is a kluge needed because we don't yet support zones in stored inet
> * values. Since the result of getnameinfo() might include a zone spec,
> * call this to remove it anywhere we want to feed getnameinfo's output to
> * network_in. Beats failing entirely.
> *
> * An alternative approach would be to let network_in ignore %-parts for
> * itself, but that would mean we'd silently drop zone specs in user input,
> * which seems not such a good idea.
> */
> void
> clean_ipv6_addr(int addr_family, char *addr)
> {
> #ifdef HAVE_IPV6
> if (addr_family == AF_INET6)
> {
> char *pct = strchr(addr, '%');
>
> if (pct)
> *pct = '\0';
> }
>

Re: [ADMIN] problem with pg_stat_activity view

Alvaro Herrera <alvherre@commandprompt.com> writes:
> ERROR: invalid input syntax for type inet:
> "fe80::104d:416e:a8dc:c02e%12"
>
> SQL state: 22P02

> Interesting.

> When we HAVE_IPV6, there's code to remove stuff after a % in an address.

We do now ... it wasn't there in 8.2.0.

http://archives.postgresql.org/pgsql-committers/2007-05/msg00268.php

regards, tom lane

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin

Re: [HACKERS] [PATCH] Cleanup of GUC units code

"Greg Stark" <stark@enterprisedb.com> writes:
> On Mon, Sep 8, 2008 at 2:11 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> But of course case insensitivity isn't going to fix that example for you.
>> So we're right back at the question of where we should draw the line in
>> trying to accept variant input.

> Well it's not a perfect precedent but for example, dd accepts:

> G (2^30)
> M (2^20)
> k (2^10)
> K (2^10)
> Kb (10^3)
> MB (10^6)
> GB (10^9)
> b (512)

Hmm. I could get behind a proposal to allow single-letter abbreviations
if it could be made to work across the board, but what about the time
units? "ms" vs "min" is the sticky part there...
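Alvaro's '1024b' objection is easy to make concrete with a toy parser over the dd-style suffix table quoted above (the values, including the 512-byte meaning of "b", are copied from the message; the parser itself is only an illustration, not dd's actual code):

```python
# Suffix table as listed in Greg's message.
SUFFIXES = {
    "Kb": 10**3, "MB": 10**6, "GB": 10**9,   # decimal, two-letter
    "G": 2**30, "M": 2**20, "k": 2**10, "K": 2**10,
    "b": 512,                                # POSIX blocks, not bytes!
}

def parse_size(text: str) -> int:
    # Longest suffix wins, so "1Kb" is 1000 rather than one 512-byte block.
    for suf in sorted(SUFFIXES, key=len, reverse=True):
        if text.endswith(suf):
            return int(text[: -len(suf)]) * SUFFIXES[suf]
    return int(text)

# The ambiguity: a user typing "1024b" may well mean one kilobyte,
# but a dd-style reading gives 1024 blocks of 512 bytes instead.
assert parse_size("1024b") == 1024 * 512   # 524288, not 1024
assert parse_size("1Kb") == 1000
```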

regards, tom lane


Re: [HACKERS] reducing statistics write overhead

Martin Pihlak <martin.pihlak@gmail.com> writes:
> Attached is a patch which adds a timestamp to pgstat.stat file header,
> backend_read_statsfile uses this to determine if the file is fresh.
> During the wait loop, the stats request message is retransmitted to
> compensate for possible loss of message(s).

> The collector only writes the file mostly at PGSTAT_STAT_INTERVAL frequency,
> currently no extra custom timeouts can be passed in the message. This can
> of course be added if need arises.

Hmm. With the timestamp in the file, ISTM that we could put all the
intelligence on the reader side. Reader checks file, sends message if
it's too stale. The collector just writes when it's told to, no
filtering. In this design, rate limiting relies on the readers to not
be unreasonable in how old a file they'll accept; and there's no problem
with different readers having different requirements.

A possible problem with this idea is that two readers might send request
messages at about the same time, resulting in two writes where there
need have been only one. However I think that could be fixed if we add
a little more information to the request messages and have the collector
remember the file timestamp it last wrote out. There are various ways
you could design it but what comes to mind here is for readers to send
a timestamp defined as minimum acceptable file timestamp (ie, their
current clock time less whatever slop they want to allow). Then,
when the collector gets a request with that timestamp <= its last
write timestamp, it knows it need not do anything.

With signaling like that, there's no excess writes *and* no delay in
responding to a new live write request. It's sort of annoying that
the readers have to sleep for an arbitrary interval though. If we could
get rid of that part...
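The collector-side check described above can be sketched in a few lines (class and field names are illustrative assumptions, not taken from Martin's patch):

```python
# Readers send "current clock time minus allowed slop" as the minimum
# acceptable file timestamp; the collector writes only when stale.
class StatsCollector:
    def __init__(self):
        self.last_write_ts = 0.0   # timestamp of the last stats file write

    def handle_request(self, min_acceptable_ts, now):
        # If our last write is already at least that fresh, two readers
        # asking at about the same time cost only one write.
        if min_acceptable_ts <= self.last_write_ts:
            return False           # file is fresh enough; do nothing
        self.last_write_ts = now   # rewrite the stats file
        return True

c = StatsCollector()
assert c.handle_request(min_acceptable_ts=10.0, now=15.0) is True
# A second, near-simultaneous request is satisfied by the same write:
assert c.handle_request(min_acceptable_ts=12.0, now=16.0) is False
```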

regards, tom lane


Re: [PERFORM] SAN and full_page_writes

Thanks for pointing that out Bruce.

NetApp has a 6 page PDF about NetApp and databases.  On page 4:

As discussed above, reads and writes are unconditionally atomic to 64 KB. While reads or writes
may fail for a number of reasons (out of space, permissions, etc.), the failure is always atomic to
64 KB. All possible error conditions are fully evaluated prior to committing any updates or
returning any data to the database.

From the sound of it, I can turn off full_page_writes.

This document can be found at http://www.netapp.com/us/ by searching for hosting databases.

Thanks,

--Nik

On Sat, Sep 6, 2008 at 3:46 PM, Bruce Momjian <bruce@momjian.us> wrote:
Nikolas Everett wrote:
> I seem to have answered my own question.  I'm sending the answer to the list
> in case someone else has the same question one day.
>
> According to the NetApp documentation, it does protect me from partial page
> writes.  Thus, full_page_writes = off.

Just for clarification, the NetApp must guarantee that the entire 8k
gets to disk, not just one of the 512-byte blocks that disks use
internally.

--
 Bruce Momjian  <bruce@momjian.us>        http://momjian.us
 EnterpriseDB                             http://enterprisedb.com

 + If your life is a hard drive, Christ can be your backup. +

[COMMITTERS] npgsql - Npgsql2: Uncommented out the code which checks if a connection

Log Message:
-----------

Uncommented the code which checks whether a connection is valid when returning it from the pool.

Modified Files:
--------------
Npgsql2/src/Npgsql:
NpgsqlConnector.cs (r1.20 -> r1.21)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/npgsql/Npgsql2/src/Npgsql/NpgsqlConnector.cs.diff?r1=1.20&r2=1.21)


[COMMITTERS] npgsql - Npgsql2: [1010466] Npgsql isn't using pool when executing

Log Message:
-----------

[1010466] Npgsql isn't using pool when executing under Mono.

Also, this changes the way Npgsql checks whether a connection is OK when returning it from the pool. The previous code was a bit cryptic and performed worse, as it used a newly generated GUID plus listen/notify queries.

Modified Files:
--------------
Npgsql2/src/Npgsql:
NpgsqlConnectionStringBuilder.cs (r1.6 -> r1.7)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/npgsql/Npgsql2/src/Npgsql/NpgsqlConnectionStringBuilder.cs.diff?r1=1.6&r2=1.7)
NpgsqlConnectorPool.cs (r1.12 -> r1.13)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/npgsql/Npgsql2/src/Npgsql/NpgsqlConnectorPool.cs.diff?r1=1.12&r2=1.13)


Re: [BUGS] BUG #4397: crash in tab-complete.c

Problem solved!

There was a leftover readline.h which caused the miscompilation :(
Thanks a lot for all your help!

Cheers,

Rudi

Am 08.09.2008 um 15:03 schrieb Tom Lane:

> Rudolf Leitgeb <r.leitgeb@x-pin.com> writes:
>> Sorry for the delay, here are the compile warnings
>> I got when I compiled tab-complete.c for x86_64:
>
>> =========================================================
>> tab-complete.c: In function 'psql_completion':
>> tab-complete.c:601: warning: implicit declaration of function
>> 'completion_matches'
>> tab-complete.c:601: warning: assignment makes pointer from integer
>> without a cast
>
> Well, that nicely explains a crash in a 64-bit build, so the question
> is what happened to your declaration of completion_matches(). It's
> certainly there on my Mac:
>
> tgl$ grep completion_matches /usr/include/readline/readline.h
> char **completion_matches(const char *, CPFunction *);
> $
>
> I'm still suspicious that you've got something bogus under
> /usr/local/include that is masking the correct include file.
>
> regards, tom lane

--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs

Re: [HACKERS] Synchronous Log Shipping Replication

On Mon, Sep 8, 2008 at 8:44 PM, Markus Wanner <markus@bluegap.ch> wrote:
>> Merge into WAL writer?
>
> Uh.. that would mean you'd lose parallelism between WAL writing to disk and
> WAL shipping via network. That does not sound appealing to me.

That depends on the order of WAL writing and WAL shipping.
How about the following order?

1. A backend writes WAL to disk.
2. The backend wakes up WAL sender process and sleeps.
3. WAL sender process does WAL shipping and wakes up the backend.
4. The backend issues sync command.

>> I guess we could invent a new semaphore-like primitive at the same layer
>> as LWLocks using spinlock and PGPROC directly...
>
> Sure, but in what way would that differ from what I do with imessages?

Performance ;)

When a process actually receives a signal depends on the kernel scheduler,
and the scheduler does not always deliver a signal immediately.

Regards

--
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [ADMIN] problem with pg_stat_activity view

>> ERROR: invalid input syntax for type inet:
>> "fe80::104d:416e:a8dc:c02e%12"
>>
>> SQL state: 22P02

Interesting.

When we HAVE_IPV6, there's code to remove stuff after a % in an address.
My bet is that what you're running was not compiled with HAVE_IPV6 and
so the code is not executed. Obviously this is invoked in
pg_stat_get_backend_client_addr().

/*
 * clean_ipv6_addr --- remove any '%zone' part from an IPv6 address string
 *
 * XXX This should go away someday!
 *
 * This is a kluge needed because we don't yet support zones in stored inet
 * values.  Since the result of getnameinfo() might include a zone spec,
 * call this to remove it anywhere we want to feed getnameinfo's output to
 * network_in.  Beats failing entirely.
 *
 * An alternative approach would be to let network_in ignore %-parts for
 * itself, but that would mean we'd silently drop zone specs in user input,
 * which seems not such a good idea.
 */
void
clean_ipv6_addr(int addr_family, char *addr)
{
#ifdef HAVE_IPV6
	if (addr_family == AF_INET6)
	{
		char	   *pct = strchr(addr, '%');

		if (pct)
			*pct = '\0';
	}
#endif
}

Re: [pgsql-ru-general] server process (PID 1234) was terminated by signal 9: Killed

On Mon, Sep 8, 2008 at 12:32 PM, Vladimir Rusinov <vladimir@greenmice.info> wrote:
...
What worries me is the 'was terminated by signal 9: Killed'. Is this normal behavior?

At the same time there are no connections from the db machine itself other than the one from pg_dump; all the other connections come from other hosts, from jboss and tomcat, using hibernate.
...

Absolutely abnormal (data can get corrupted that way), so you need to find out the cause; quite possibly someone simply did a kill -9 on the process. I wouldn't call it "strange backup behavior" -- a backup doesn't do kill -9, these are unrelated events :-) My suspicion is still a manual kill.

How often does this happen?


--
Sincerely yours,
Nikolay Samokhvalov
Postgresmen LLC, http://postgresmen.ru