Monday, July 21, 2008

Re: [JDBC] issue with select IN (?) query

Hello Arun,

Arun ViswanathanChandrika schrieb:



        pm = con.prepareStatement("SELECT EMP_NAME FROM EMP where EMP_ID IN (?) ");
        pm.setString(1, "2,5,7");        // employees with ids 2, 5 and 7 exist

Resolves to: SELECT EMP_NAME FROM EMP where EMP_ID IN ('2,5,7');

Watch the "'"'s :) - the whole string is bound as one quoted value, so the
IN list contains a single element, not three ids.
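A common fix is to generate one placeholder per value and bind each id individually. A small sketch of the string-building part (Python here purely for illustration; in JDBC you would then call setInt once per position):

```python
def expand_in_placeholders(sql, values):
    """Replace a single '(?)' IN-list with one '?' per value.
    The binding of each individual value is still done by the driver."""
    placeholders = ", ".join(["?"] * len(values))
    return sql.replace("(?)", "(" + placeholders + ")")

# e.g. three ids -> three placeholders
query = expand_in_placeholders(
    "SELECT EMP_NAME FROM EMP where EMP_ID IN (?)", [2, 5, 7])
print(query)
```

After expanding, the driver sees three separate parameters instead of one string, so each id is compared as a number rather than as the literal text '2,5,7'.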

With best regards,
Daniel Migowski

Re: [PERFORM] Performance on Sun Fire X4150 x64 (dd, bonnie++, pgbench)

Greg Smith wrote:
> CFQ/Deadline/AS are I/O scheduler choices. What changed completely in
> 2.6.23 is the kernel process scheduler.
> http://people.redhat.com/mingo/cfs-scheduler/sched-design-CFS.txt
> gives some info about the new one.
>
> While the switch to CFS has shown great improvements in terms of
> desktop and many server workloads, what I discovered is that the
> pgbench test program itself is really incompatible with it. There's a
> kernel patch that seems to fix the problem at
> http://lkml.org/lkml/2008/5/27/58 but I don't think it's made it into
> a release yet.
>
> This is not to say the kernel itself is unsuitable for running
> PostgreSQL itself, but if you're using pgbench as the program to
> confirm that, I expect you'll be disappointed with the results under the
> Ubuntu 8.04 kernel. It tops out at around 10,000 TPS running the
> select-only test for me while older kernels did 3X that much.

OK, thanks for all the details. Good to know.

> Stop and think about this for a minute. You're going into production
> with an older version having a set of known, impossible to work around
> issues that if you hit them the response will be "upgrade to 8.3 to
> fix that", which will require the major disruption to your application
> of a database dump and reload at that point if that fix becomes
> critical. And you can't just do that now because of some packaging
> issues? I hope you can impress upon the other people involved how
> incredibly short-sighted that is.

I understand what you're saying. However, to play devil's advocate: the
existing system that I'm 'migrating' (read: entirely changing schemas,
'migrating' data) is coming from an 8.1.11 install. It is not a critical
system. The source data is always available from another system, and the
PostgreSQL system would be a 'client'. So if 8.2.x is so abysmal that it
should not even be considered for install compared to 8.1.x, and only
8.3.x is viable, then OK, that makes sense and I have to go the extra
mile.

But message received loud and clear. Conveniently, 8.3.3 is also
available on backports, so it does not cost much, and it is pinned right
now. (I don't think there will be any problem with plr, even though the
original seems to be patched a bit, but that will be for later, once all
the rest is ready.)

For the sake of completeness (even though irrelevant), here's the run
with 32 clients on 8.3, same config as before (except max_fsm_pages at
204800); the columns appear to be pgbench scaling factor, database size
in MB, and TPS:

1 19 36292
100 1499 32127
200 2994 30679
300 4489 29673
400 5985 18627
500 7480 19714
600 8975 19437
700 10000 20271
800 12000 18038
900 13000 9842
1000 15000 5996
1200 18000 5404
1400 20000 3701
1600 23000 2877
1800 26000 2657
2000 29000 2612

cheers,

-- stephane

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

Re: [SQL] PERSISTANT PREPARE (another point of view)

2008/7/20 Milan Oparnica <milan.opa@gmail.com>:
> Pavel wrote:
>
>>
>> try to write prototype and show advantages...
>
> Prototype of what - an implementation in Postgres, or just the efficiency
> of the PERSISTENT PREPARE idea?

really, a prototype


>
>> ...but I see some disadvantages
>> too. Mainly you have to manage some shared memory space for stored
>> plans. That's not an easy task - the MySQL developers can attest.
>> Implementation in PostgreSQL is a little bit difficult - lots of
>> structures that live in per-process memory would have to be moved to
>> shared memory.
>>
>
> Is it solved in MySQL, or have they just tried?

http://www.mysqlperformanceblog.com/2006/08/02/mysql-prepared-statements/

Do not forget to close prepared statements - many memory leaks
reported in MySQL Server turned out to be prepared statements or
cursors that were never closed. Watch Com_stmt_prepare and
Com_stmt_close to see if you're closing all prepared statements. In
newer versions you can also use the prepared_stmt_count variable to
track the number of open statements directly. You can also adjust the
max_prepared_stmt_count variable, which limits how many statements can
be open at the same time, to avoid overload.
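The advice above boils down to keeping the two counters balanced. As a toy model (a hypothetical class for illustration, not MySQL code):

```python
class StatementTracker:
    """Toy model of MySQL's Com_stmt_prepare / Com_stmt_close counters:
    if open_count keeps growing over time, statements are being leaked."""

    def __init__(self):
        self.com_stmt_prepare = 0
        self.com_stmt_close = 0

    def prepare(self):
        self.com_stmt_prepare += 1

    def close(self):
        self.com_stmt_close += 1

    @property
    def open_count(self):
        # analogous to the prepared_stmt_count variable
        return self.com_stmt_prepare - self.com_stmt_close

tracker = StatementTracker()
tracker.prepare()
tracker.prepare()
tracker.close()   # one statement was never closed
```

In monitoring terms: a steadily rising gap between the two counters is the leak signature to watch for.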

>
> We could have only PREP STATEMENT definition stored in shared memory
> (probably something like stored procedures), and it could be run in local
> processed memory. We could even assume only fetching data would be used
> through PREP STATEMENTS for start, and later introduce data modification. Is
> there some simplified PG algorithm we could use to understand the amount of
> work needed for introducing such feature to PG?


there are some complications with portability - shared memory is slow
on Windows :( but saving a plan into shared memory probably isn't the
problem. The main difficulty is memory maintenance.
>
>> This feature is nice, but the question is - who will write it?
>
> With a little help form PG developers and good documentation perhaps I could
> put some programmers from my team on this job. They are mostly C++
> programmers but we have Delphi and Java if needed.

PostgreSQL is written in C only
http://wiki.postgresql.org/wiki/Development_information

>
>> Actually this problem is solved from outside - with pooling.
>>
>
> I'm very interested to learn more about this solution. Can you please send
> me details or some links where I could research this solution ?

pgpool??

>
>
> Thank you for your reply Pavel.

regards
Pavel
>
> --
> Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-sql
>

--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql

Re: [PATCHES] pg_dump additional options for performance

On Mon, 2008-07-21 at 19:19 -0400, Tom Lane wrote:
> Stephen Frost <sfrost@snowman.net> writes:
> > Are there use cases for just --omit-post-load or --omit-pre-load?
>
> Probably not many. The thing that's bothering me is the
> action-at-a-distance property of the positive-logic switches.
> How are we going to explain this?
>
> "By default, --schema-pre-load, --data-only, --schema-post-load
> are all ON. But if you turn one of them ON (never mind that
> it was already ON by default), that changes the defaults for
> the other two to OFF. Then you have to turn them ON (never
> mind that the default for them is ON) if you want two out of
> the three categories."

While I accept your argument to a certain extent, --schema-only and
--data-only already behave in the manner you describe. Whether we pick
include or exclude or both, it will, regrettably, make more sense than
these existing options.

With regard to the logic, INSERT and COPY also behave this way: if you
mention *any* columns then you get only the ones you mention. We manage
to describe that too. An INSERT statement would be very confusing if
you had to list the columns you *don't* want.
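The column-list analogy suggests semantics that can be modeled directly. A toy sketch with hypothetical option names (not pg_dump code):

```python
def sections_selected(pre=None, data=None, post=None):
    """Toy model of the positive-logic switches under discussion:
    with no switch given, all three dump sections are emitted;
    naming any switch selects only the ones explicitly turned on -
    exactly like an INSERT column list."""
    if pre is None and data is None and post is None:
        return {"pre": True, "data": True, "post": True}
    return {"pre": bool(pre), "data": bool(data), "post": bool(post)}
```

So `sections_selected()` dumps everything, while `sections_selected(pre=True)` dumps only the pre-load schema, which is the "action at a distance" Tom objects to and the INSERT analogy defends.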

So the --omit options seem OK if you assume we'll never add further
options or include additional SQL in the dump. But that seems an
unreliable prop, so I am inclined towards the inclusive approach.

> You have to bend your mind into a pretzel to wrap it around this
> behavior.

Perhaps my mind was already toroidally challenged? :-}

--
Simon Riggs www.2ndQuadrant.com
PostgreSQL Training, Services and Support


--
Sent via pgsql-patches mailing list (pgsql-patches@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-patches

Re: [HACKERS] [WIP] collation support revisited (phase 1)

On Mon, Jul 21, 2008 at 03:15:56AM +0200, Radek Strnad wrote:
> I was trying to sort out the problem of not creating a new catalog for
> character sets, and I came up with the following ideas. Correct me if my
> ideas are wrong.
>
> Since collation has to have a defined character set.

Not really. AIUI at least glibc and ICU define a collation over all
possible characters (i.e. Unicode). When you create a locale you take a
subset and use that. Think about it: if you want to sort strings and
one of them happens to contain a Chinese character, it can't *fail*.
Note that strcoll() has no error return for unknown characters.
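A small illustration of that totality (Python's default code-point comparison plays the role of a full-Unicode collation here; real locale-aware sorting would use a key such as locale.strxfrm):

```python
# A comparison defined over all of Unicode cannot fail on an "unknown"
# character: like strcoll() under a full-Unicode collation, Python's
# default string comparison is total over code points, so mixing in a
# Chinese string can change the order but can never raise an error.
words = ["banana", "中文", "ñandú", "apple"]
mixed_sorted = sorted(words)  # never fails, whatever scripts are mixed in
print(mixed_sorted)
```

The ordering is code-point order rather than anything linguistically meaningful, which is exactly the distinction above: the locale picks a *useful* order over a subset, but the comparison itself is defined everywhere.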

> I'm suggesting to use
> already written infrastructure of encodings and to use list of encodings in
> chklocale.c. Currently databases are not created with specified character
> set but with specified encoding. I think instead of pointing a record in
> collation catalog to another record in character set catalog we might use
> only name (string) of the encoding.

That's reasonable. From an abstract point of view, collations and
encodings are orthogonal; it's only when you're using POSIX locales
that there are limitations on how you combine them. I think you can
assume a collation can handle any character that can be produced by
the encoding.

Have a nice day,
--
Martijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/
> Please line up in a tree and maintain the heap invariant while
> boarding. Thank you for flying nlogn airlines.

[pdxpug] PDXPUG Day Photos

A selection of my photos from yesterday's auspicious events:

http://www.flickr.com/photos/theory/sets/72157606306835052/

Enjoy,

David

--
Sent via pdxpug mailing list (pdxpug@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pdxpug

Re: [PATCHES] pg_dump lock timeout

*** a/doc/src/sgml/ref/pg_dumpall.sgml
--- b/doc/src/sgml/ref/pg_dumpall.sgml
***************
*** 196,201 **** PostgreSQL documentation
--- 196,217 ----
</varlistentry>

<varlistentry>
+ <term><option>--lock-wait-timeout=<replaceable class="parameter">timeout</replaceable></option></term>
+ <listitem>
+ <para>
+ Do not wait forever to acquire shared table locks at the beginning of
+ the dump. Instead fail if unable to lock a table within the specified
+ <replaceable class="parameter">timeout</>. The timeout may be
+ specified in any of the formats accepted by <command>SET
+ statement_timeout</>. (Allowed values vary depending on the server
+ version you are dumping from, but an integer number of milliseconds
+ is accepted by all versions since 7.3. This option is ignored when
+ dumping from a pre-7.3 server.)
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
<term><option>--no-tablespaces</option></term>
<listitem>
<para>
*** a/src/bin/pg_dump/pg_dumpall.c
--- b/src/bin/pg_dump/pg_dumpall.c
***************
*** 120,125 **** main(int argc, char *argv[])
--- 120,126 ----
{"disable-triggers", no_argument, &disable_triggers, 1},
{"no-tablespaces", no_argument, &no_tablespaces, 1},
{"use-set-session-authorization", no_argument, &use_setsessauth, 1},
+ {"lock-wait-timeout", required_argument, NULL, 2},

{NULL, 0, NULL, 0}
};
***************
*** 305,310 **** main(int argc, char *argv[])
--- 306,316 ----
case 0:
break;

+ case 2:
+ appendPQExpBuffer(pgdumpopts, " --lock-wait-timeout=");
+ appendPQExpBuffer(pgdumpopts, optarg);
+ break;
+
default:
fprintf(stderr, _("Try \"%s --help\" for more information.\n"), progname);
exit(1);
***************
*** 488,493 **** help(void)
--- 494,500 ----
printf(_(" -f, --file=FILENAME output file name\n"));
printf(_(" --help show this help, then exit\n"));
printf(_(" --version output version information, then exit\n"));
+ printf(_(" --lock-wait-timeout=TIMEOUT fail after waiting TIMEOUT for a table lock\n"));
printf(_("\nOptions controlling the output content:\n"));
printf(_(" -a, --data-only dump only the data, not the schema\n"));
printf(_(" -c, --clean clean (drop) databases prior to create\n"));
On Mon, Jul 21, 2008 at 03:43:11AM -0400, Tom Lane wrote:
> daveg <daveg@sonic.net> writes:
> > On Sun, Jul 20, 2008 at 02:50:50PM -0400, Tom Lane wrote:
> >> In most cases our policy has been that pg_dumpall should accept and pass
> >> through any pg_dump option for which it's sensible to do so. I did not
> >> make that happen but it seems it'd be a reasonable follow-on patch.
>
> > I'll remember that next time.
>
> Er .. actually that was a direct request for you to do it.


Attached is the follow-on patch for pg_dumpall, with docs, to match pg_dump.

On a second topic, is anyone working on a parallel dump/load? I'd be
interested in helping.

-dg

--
David Gould daveg@sonic.net 510 536 1443 510 282 0869
If simplicity worked, the world would be overrun with insects.

Re: [pgsql-es-ayuda] how to get a consecutive field that never fails?

Alvaro Herrera said [Thu, Jul 17, 2008 at 01:33:41PM -0400]:
> Sure, but to make a numeric assignment that "cannot fail" you can't
> use a sequence. You have to lock the table and insert the next
> available number. This obviously has lower throughput (only one
> process can be inserting at a time), but you are guaranteed there
> will be no holes in the numbering.
>
> Given that generating invoices is not a terribly frequent operation,
> the lower throughput shouldn't be a problem. Each transaction should
> take less than a second anyway. A user who has to wait one second
> longer won't even notice the difference.

But even if that is the case, I wouldn't use the invoice number as
the primary key. I would instead create something like:

CREATE TABLE factura (
id SERIAL PRIMARY KEY,
factura integer NOT NULL DEFAULT siguiente_factura(),
otro_campo blah blah
);

Of course, putting the logic in question inside siguiente_factura(),
and giving it a nice index for performance. Yes, I know the invoice
number must not change, but... well, any attribute a human can touch
is a bad choice for a primary key. And I don't know how it is in your
countries, but in Mexico you cannot print the invoice number yourself -
it has to come pre-printed by an authorized printer. That just means
the invoice number has to be entered once you have the printed form in
hand (or ready in the printer).
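The lock-and-insert scheme Alvaro describes can be sketched in miniature (hypothetical names; a lock object stands in for the table lock):

```python
import threading

class InvoiceNumberer:
    """Sketch of gap-free numbering: assignment is serialized, so the
    sequence has no holes - the lock plays the role of LOCK TABLE, at
    the cost of allowing only one assignment at a time."""

    def __init__(self, start=1):
        self._lock = threading.Lock()
        self._next = start

    def next_invoice(self):
        with self._lock:
            n = self._next
            self._next += 1
            return n

# demo: several concurrent workers; no gaps and no duplicates appear
numberer = InvoiceNumberer()
issued = []

def worker():
    for _ in range(100):
        issued.append(numberer.next_invoice())

threads = [threading.Thread(target=worker) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Unlike a database sequence, nothing here is handed out and then discarded on rollback, which is precisely why sequences can leave holes and this scheme cannot.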

Regards,

--
Gunnar Wolf - gwolf@gwolf.org - (+52-55)5623-0154 / 1451-2244
PGP key 1024D/8BB527AF 2001-10-23
Fingerprint: 0C79 D2D1 2C4E 9CE4 5973 F800 D80E F35A 8BB5 27AF
--
TIP 7: don't forget to increase the "free space map" configuration

Re: [PERFORM] A guide/tutorial to performance monitoring and tuning

On Mon, 21 Jul 2008, Francisco Reyes wrote:

> On 2:59 pm 06/29/08 Greg Smith <gsmith@gregsmith.com> wrote:
>> Right now I'm working with a few other people to put together a more
>> straightforward single intro guide that should address some of the
>> vagueness you point out here,
>
> Was that ever completed?

Not done yet; we're planning to have a first rev done in another couple of
weeks. The work in progress is at
http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server and I'm due
to work out another set of improvements to that this week during OSCON.

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

[COMMITTERS] npgsql - Npgsql2: quote identifiers when generating sql

Log Message:
-----------
quote identifiers when generating sql

Modified Files:
--------------
Npgsql2/src/Npgsql/SqlGenerators:
SqlSelectGenerator.cs (r1.5 -> r1.6)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/npgsql/Npgsql2/src/Npgsql/SqlGenerators/SqlSelectGenerator.cs.diff?r1=1.5&r2=1.6)
VisitedExpression.cs (r1.5 -> r1.6)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/npgsql/Npgsql2/src/Npgsql/SqlGenerators/VisitedExpression.cs.diff?r1=1.5&r2=1.6)
SqlBaseGenerator.cs (r1.10 -> r1.11)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/npgsql/Npgsql2/src/Npgsql/SqlGenerators/SqlBaseGenerator.cs.diff?r1=1.10&r2=1.11)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

[COMMITTERS] npgsql - Npgsql2: Improve function metadata queries

Log Message:
-----------
Improve function metadata queries

Modified Files:
--------------
Npgsql2/src/Npgsql:
NpgsqlSchema.ssdl (r1.3 -> r1.4)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/npgsql/Npgsql2/src/Npgsql/NpgsqlSchema.ssdl.diff?r1=1.3&r2=1.4)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

Re: [HACKERS] Concurrent VACUUM and ANALYZE

Jonah H. Harris wrote:
> On Mon, Jul 21, 2008 at 10:19 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>
>> I don't find this a compelling argument, at least not without proof that
>> the various vacuum-improvement projects already on the radar screen
>> (DSM-driven vacuum, etc) aren't going to fix your problem.
>>
>
> Is DSM going to be in 8.4? The last I had heard, DSM+related
> improvements weren't close to being guaranteed for this release. If
> it doesn't make it, waiting another year and a half for something
> easily fixed would be fairly unacceptable. Should I provide a patch
> in the event that DSM doesn't make it?

Can't hurt to submit a patch. Also, could you do something to help
mitigate the worst case - for example, don't update the stats in
pg_class if the ANALYZE finishes after a VACUUM that completed since
the current ANALYZE started?

Matt


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [pgsql-www] Moderation of pgsql-cygwin?

FWIW I never received the "this list is deprecated" email on the cygwin list,
though I did receive it on the interfaces and other lists... if that's what
you're wondering about, you might want to resend...

Robert Treat

On Sunday 20 July 2008 14:47:49 Marc G. Fournier wrote:
> I just checked through what is in the queue, and there is only about 7 spam
> messages 'to be approved' sitting there ... nothing legit ...
>
>
>
> --On Sunday, July 20, 2008 16:51:14 +0300 Peter Eisentraut
> <peter_e@gmx.net>
>
> wrote:
> > Is anyone moderating pgsql-cygwin? I am not seeing my
> > held-for-moderation mail getting through. At least it doesn't show up on
> > the archives web page.
> >
> > --
> > Sent via pgsql-www mailing list (pgsql-www@postgresql.org)
> > To make changes to your subscription:
> > http://www.postgresql.org/mailpref/pgsql-www
>
> --
> Marc G. Fournier Hub.Org Hosting Solutions S.A. (http://www.hub.org)
> Email . scrappy@hub.org MSN . scrappy@hub.org
> Yahoo . yscrappy Skype: hub.org ICQ . 7615664

--
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL

--
Sent via pgsql-www mailing list (pgsql-www@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-www

Re: [pgsql-es-ayuda] RE: [pgsql-es-ayuda] Auto-numbered index

2008/7/21 frank <frankyfa@gmail.com>:
> But if pgAdmin III doesn't show me the serial data type, then how can I make the ID field of a table X auto-numbered?
>

upgrade...


--
Atentamente,
Jaime Casanova
PostgreSQL support and training
Guayaquil - Ecuador
Cel. (593) 87171157
--
TIP 7: don't forget to increase the "free space map" configuration

Re: [HACKERS] Schema-qualified statements in pg_dump output

At 8:34 AM +0100 7/11/08, Simon Riggs wrote:
>On Mon, 2008-07-07 at 15:46 +0200, Bernd Helmle wrote:
>> There's a behavior in pg_dump that annoyed me a little bit, the last few
>> times i had to deal with it:
>>
>> Consider you have to dump a specific namespace only, you are going to use
>>
>> pg_dump -n <your_schema> [-t <tables>].
>>
>> I found it a common use case to restore this dump into a different schema
>> by simply changing the search_path. With included ownerships this doesn't
>> work, since pg_dump always outputs the necessary DDL as follows:
>>
>> ALTER TABLE bernd.foo OWNER TO bernd;
>>
>> Okay, it isn't too hard to use sed to replace the necessary statements to
>> use the correct schema, but i think it would be much nicer if pg_dump would
>> omit the schema-qualified table name here. I'd like to create a patch for
>> this, if we agree on changing this behavior?
>
>The use case you mention is something that would be of value to many
>people, and I support your efforts to add a new option for this.
>
>No useful workarounds exist without flaws: i) editing with sed might
>well end up editing character data in the table(s) at the same time and
>you may never even notice. ii) reloading to the same schema (renaming
>etc) is not acceptable if the target has a production schema of that
>name already. iii) manually editing a large file is problematic.
>
>Tom's posted comments that you need to look at all of the places the
>schemaname is used to see what we will need/not need to change. It's
>more than just altering the owner, but that doesn't mean we don't want
>it or its impossible.
>
>Please pursue this further.

I've been looking into this matter, although I'm a noob when it comes
to PostgreSQL hacking. What I thought would be a better way is to alter
pg_dump to accept a flag -m <masquerade_name>. It would require the
-n <schema_name> option, or fail.

It would generate a schema dump where all the references to
<schema_name> were replaced by <masquerade_name>.

This would allow you to easily make a copy of a schema into a new schema.

My needs are that my production database is the "public" schema, and
each year I want to archive "fy2007", "fy2008", etc. schemas which
have the final information for those years. So at the end of this
year, I want to duplicate the "public" schema into the "fy2008"
schema, and continue with "public."

I could pg_dump "public", rename "public" to "fy2008" and then
restore "public", but this requires being without "public" for a
short interval. It would be better for me to simply:

pg_dump -c -n public -m fy2008 database | psql database

And that would give you a completely mechanical way to duplicate a
schema, which means I could put it in a script that users could call.

From what I've seen, it would mean finding where the schema is
currently accessed in the code, then substituting based on the -m flag.

Having already done this with manually editing the files, it really
cries out for a better procedure.

Perhaps my solution is excessive compared to the other offered
solution, but it would have the benefit that the user would know
precisely what he was doing by the flag setting.

-Owen

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[pgsql-es-ayuda] Re: [pgsql-es-ayuda] RE: [pgsql-es-ayuda] Auto-numbered index

--- On Mon, 21-Jul-08, frank <frankyfa@gmail.com> wrote:

> From: frank <frankyfa@gmail.com>
> Subject: [pgsql-es-ayuda] RE: [pgsql-es-ayuda] Auto-numbered index
> To: pgsql-es-ayuda@postgresql.org
> Date: Monday, July 21, 2008, 5:17 pm
> But if pgAdmin III doesn't show me the serial data type,
> then how can I make the ID field of a table X
> auto-numbered?
>
> I have that field defined as int4.
>
> As I mentioned regarding sequences, pgAdmin III gives me
> the option to create sequences; I created one with an
> increment of 1, but when creating a field in a table I
> can't link that field to any sequence - the option shows
> up but is disabled.
>
> >
> > 2008/7/21 frank <frankyfa@gmail.com>:
> > > Forgive my ignorance, but I'm using pgAdmin III to
> administer the DB and I don't see the serial data type.
> > What there is is something for creating sequences,
> > but I don't know if that serves my purposes, since I
> can't get these sequences to work, and frankly I don't
> understand the documentation
> > for that part.
> > >
> >
> > pgadmin didn't show the serial data type (in the 8.1
> era and earlier) because it isn't a data type; it's just
> an abbreviated way of creating a sequence plus an integer
> field whose default value is a nextval of that sequence
>
>
> --
> TIP 8: explain analyze is your friend

If you read the thread carefully, I already gave you the way when I wrote "a manopla" - I meant by hand, with a bit more typing, that is, a bit more effort.

My Spanish is sometimes very bad, but just in case, here is the example again; you can run it from pgAdmin III's query window, adapting it to your case.

I don't remember very well; try:
- create a sequence
- create a table, and for the field, set as default the function that returns
the next value of the sequence.

It would be something like this (I haven't tested it):

CREATE SEQUENCE identity1;
CREATE TABLE prueba (
id BIGINT PRIMARY KEY DEFAULT NEXTVAL('identity1'),
otrocampo VARCHAR(100)
);

Regards,
Gabriel Colina


--
TIP 6: have you searched our mailing list archives?
http://archives.postgresql.org/pgsql-es-ayuda

Re: [HACKERS] Concurrent VACUUM and ANALYZE

On Mon, Jul 21, 2008 at 10:19 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> "Jonah H. Harris" <jonah.harris@gmail.com> writes:
>> The case I'm looking at is a large table which requires a lazy vacuum,
>> and a zero vacuum cost delay would cause too much I/O. Yet, this
>> table has enough insert/delete activity during a vacuum, that it
>> requires a fairly frequent analysis to maintain proper plans. I
>> patched as mentioned above and didn't run across any unexpected
>> issues; the only one expected was that mentioned by Alvaro.
>
> I don't find this a compelling argument, at least not without proof that
> the various vacuum-improvement projects already on the radar screen
> (DSM-driven vacuum, etc) aren't going to fix your problem.

Is DSM going to be in 8.4? The last I had heard, DSM+related
improvements weren't close to being guaranteed for this release. If
it doesn't make it, waiting another year and a half for something
easily fixed would be fairly unacceptable. Should I provide a patch
in the event that DSM doesn't make it?

-Jonah

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [SQL] PERSISTANT PREPARE (another point of view)

Earlier, I wrote:

> Exactly what is gained by the use of persistent prepare over the use of
> a stored procedure?

Er, ENOCOFFEE. Sorry.

The benefit is obvious with global prepared statements used at the
wire-protocol level rather than via SQL EXECUTE. That's a lot more
efficient than EXECUTE or SELECT function(params).

It's pretty clear that there are some tricky aspects though, what with
schema search paths, role privileges, etc.

--
Craig Ringer

--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql

Re: [HACKERS] Concurrent VACUUM and ANALYZE

"Jonah H. Harris" <jonah.harris@gmail.com> writes:
> The case I'm looking at is a large table which requires a lazy vacuum,
> and a zero vacuum cost delay would cause too much I/O. Yet, this
> table has enough insert/delete activity during a vacuum, that it
> requires a fairly frequent analysis to maintain proper plans. I
> patched as mentioned above and didn't run across any unexpected
> issues; the only one expected was that mentioned by Alvaro.

I don't find this a compelling argument, at least not without proof that
the various vacuum-improvement projects already on the radar screen
(DSM-driven vacuum, etc) aren't going to fix your problem.

regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [GENERAL] ERROR: could not open relation with OID 49152

On Mon, Jul 21, 2008 at 6:07 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> "Scott Marlowe" <scott.marlowe@gmail.com> writes:
>> OK, I'm getting the above error on one of my fairly new 8.3 production
>> databases. It happens when I run a query to see the size of my
>> tables.
>
>> SELECT pg_relation_size(c.relfilenode),
>
> Pretty sure that should be c.oid.

Thanks. I'll slink away now...

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [HACKERS] Concurrent VACUUM and ANALYZE

On Mon, Jul 21, 2008 at 7:59 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> "Jonah H. Harris" <jonah.harris@gmail.com> writes:
>> Because we wouldn't want multiple ANALYZEs running on the same table,
>> changing the lock back to an AccessShareLock doesn't sound like a
>> solution.
>
> It flat will not work. We used to do it that way, and it didn't
> (search for "tuple concurrently updated" in the archives).

Agreed.

>> However, what are the thoughts around creating another,
>> more-specific lock? Perhaps something like ShareUpdateAnalysisLock?
>
> The general overhead involved in a whole new lock type is high enough
> that I would resist taking this path. (It's certainly a lot more than
> adding an entry to one enum someplace --- offhand I can name docs and
> grammar as important issues. And no you don't get to have a hidden lock
> type that no one can see.)

Any other suggestions?

> Also, as Alvaro points out, it's far from clear that concurrent VACUUM
> and ANALYZE is as safe as you think --- they both want to write the same
> fields in pg_class.

AFAICS, Alvaro didn't say that at all. At worst, if ANALYZE completed
after VACUUM, its stats wouldn't be as good as those set by VACUUM.
But, as I said in response to Alvaro, that's no different than running
ANALYZE immediately following VACUUM.

The case I'm looking at is a large table which requires a lazy vacuum,
and a zero vacuum cost delay would cause too much I/O. Yet, this
table has enough insert/delete activity during a vacuum, that it
requires a fairly frequent analysis to maintain proper plans. I
patched as mentioned above and didn't run across any unexpected
issues; the only one expected was that mentioned by Alvaro.

-Jonah

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] [patch] plproxy v2

"Marko Kreen" <markokr@gmail.com> writes:
> On 7/21/08, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> I looked through this a bit, and my principal reaction was "what are
>> the security implications?

> There are 2 aspects to it:

> 1. Function can be created only by superuser.

What I'm concerned about is who they can be *called* by. I'd be happier
if the default behavior was that there was no public execute privilege
for plproxy functions.

I think right now that could be enforced by having plproxy's validator
procedure replace any null proacl entry with something that explicitly
refuses public execute. That's a bit of a hack though. Maybe it'd be
worth inventing per-PL default ACLs, instead of having a
one-size-fits-all policy?

> 2. If cluster connection strings do not have 'user=' key,
> ' user=' || current_username() is appended to it.

Cool, I missed that. At minimum the documentation has to explain this
point and emphasize the security implications. Is it a good idea
to allow user= in the cluster strings at all?

> Also, plroxy does
> _nothing_ with passwords. That means the password for remote
> connection must be in postgres user's .pgpass,

That seems *exactly* backwards, because putting the password in postgres
user's .pgpass is as good as disabling password auth altogether.
Consider that it would also hand all the keys to the kingdom over to
someone who had access to dblink on the same machine (not even the same
cluster, so long as it was run by the same postgres user!).

> But I don't think plproxy can and should protect dumb admins who
> create remote_exec(sql) function and allow untrusted users to
> execute it.

We regularly get beat up about any aspect of our security apparatus
that isn't "secure by default". This definitely isn't, and from
a PR point of view (if nothing else) that doesn't seem a good idea.

I repeat that I don't feel comfortable in the least with plproxy's
security model.

regards, tom lane


Re: [HACKERS] Concurrent VACUUM and ANALYZE

On Mon, Jul 21, 2008 at 6:15 PM, Alvaro Herrera
<alvherre@commandprompt.com> wrote:
> This means that VACUUM ANALYZE must grab both locks ... is there a
> gotcha here?

Agreed.

> The main problem I see with this idea is that the dead and total tuple
> count computed by ANALYZE would be immediately out of date, and if it
> happens to finish after VACUUM then it'll overwrite the values the
> latter just wrote, which are more correct. Not sure how serious a
> problem this is.

Agreed, but in the worst case, it's no different than running ANALYZE
immediately following a VACUUM.

-Jonah


Re: [SQL] PERSISTANT PREPARE (another point of view)

Milan Oparnica wrote:
> I found this link from IBM DB2 developers showing why PERSISTENT PREPARE
> is a good idea and how could it be implemented.

[snip]

> NONE OF POPULAR SQL DBMS (Oracle, MS SQL, MySQL, Postgre, INTERBASE,
> FIREBIRD) HAVE THIS FEATURE.
>
> WHY ?

I suspect that people tend to use SQL or PL/PgSQL stored procedures
instead. I'm not 100% sure SQL functions cache their query plans, but I
know PL/PgSQL does.

Exactly what is gained by the use of persistent prepare over the use of
a stored procedure?
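As a sketch of that alternative (table and function names are invented here), a PL/pgSQL function gives you a session-cached plan plus a stable call interface:

```sql
-- PL/pgSQL caches the plan for this query after the first call in a session,
-- which covers much of what a "persistent prepare" would buy.
CREATE FUNCTION emp_names(ids integer[]) RETURNS SETOF text AS $$
BEGIN
    RETURN QUERY SELECT emp_name FROM emp WHERE emp_id = ANY (ids);
END;
$$ LANGUAGE plpgsql STABLE;

-- Callable from any driver as a plain query:
SELECT * FROM emp_names(ARRAY[2, 5, 7]);
```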

What would the interface to the feature be through database access
drivers like JDBC? Explicit PREPARE GLOBAL or similar, then invocation
with EXECUTE ?

How would users using increasingly common layers like Hibernate/JPA use it?

I'm also curious about how you'd address the possible need for periodic
re-planning as the data changes, though AFAIK SQL functions suffer from
the same limitation there.

I guess I personally just don't understand what the point of the
persistent prepare feature you describe is. However, this post that you
linked to:

http://archives.postgresql.org/pgsql-hackers/2008-04/msg00867.php

actually describes a query plan cache, rather than persistent prepare.
The post assumes the app will explicitly manage the cache, which I'm not
sure is a good idea, but I can see the point of a plan cache. There
might be some heuristics Pg could use to decide what to cache and to
evict (planner time cost vs memory use, frequency of use, etc) so the
app doesn't have to know or care about the plan cache. However, I'm not
too sure how you'd match an incoming query to a cached plan, and
determine that the plan was still valid, with enough speed to really
benefit from the plan cache. Then again, I don't know much about Pg's
innards, so that doesn't mean much.

Tom Lane responded to that post to point out some of the complexities:

http://archives.postgresql.org/pgsql-hackers/2008-04/msg00868.php

--
Craig Ringer

--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql

Re: [GENERAL] ERROR: could not open relation with OID 49152

Scott Marlowe escribió:
> OK, I'm getting the above error on one of my fairly new 8.3 production
> databases. It happens when I run a query to see the size of my
> tables.
>
> SELECT pg_relation_size(c.relfilenode), n.nspname AS schemaname,
> c.relname AS tablename, pg_get_userbyid(c.relowner) AS tableowner,
> t.spcname AS tablespace, c.relhasindex AS hasindexes, c.relhasrules AS
> hasrules, c.reltriggers > 0 AS hastriggers
> FROM pg_class c
> LEFT JOIN pg_namespace n ON n.oid = c.relnamespace
> LEFT JOIN pg_tablespace t ON t.oid = c.reltablespace
> WHERE n.nspname = 'public';
> ERROR: could not open relation with OID 49152

Try pg_relation_size(c.oid) instead.
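For reference, the original query with that one change applied:

```sql
-- pg_relation_size() wants the pg_class row's OID, not the relfilenode:
SELECT pg_relation_size(c.oid), n.nspname AS schemaname,
       c.relname AS tablename, pg_get_userbyid(c.relowner) AS tableowner,
       t.spcname AS tablespace, c.relhasindex AS hasindexes,
       c.relhasrules AS hasrules, c.reltriggers > 0 AS hastriggers
FROM pg_class c
LEFT JOIN pg_namespace n ON n.oid = c.relnamespace
LEFT JOIN pg_tablespace t ON t.oid = c.reltablespace
WHERE n.nspname = 'public';
```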

--
Alvaro Herrera http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [GENERAL] ERROR: could not open relation with OID 49152

"Scott Marlowe" <scott.marlowe@gmail.com> writes:
> OK, I'm getting the above error on one of my fairly new 8.3 production
> databases. It happens when I run a query to see the size of my
> tables.

> SELECT pg_relation_size(c.relfilenode),

Pretty sure that should be c.oid.

regards, tom lane


Re: [PERFORM] Performance on Sun Fire X4150 x64 (dd, bonnie++, pgbench)

Emil Pedersen <emil.pedersen@its.uu.se> writes:
>> At least on debian it was quite easy to "backport" 8.3.3 from sid
>> to etch using apt-get's source and build-dep functions. That way
>> you get a normal installable package.

> I should have said that I was talking about PostgreSQL itself; I
> missed the plr part. I apologize for the noise.

Still, there's not normally that much difference between the packaging
for one version and for the next. I can't imagine that it would take
much time to throw together a package for 8.3 plr based on what you're
using for 8.2. All modern package-based distros make this pretty easy.
The only reason not to do it would be if you're buying support from
a vendor who will only support specific package versions...

regards, tom lane

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

Re: [HACKERS] Concurrent VACUUM and ANALYZE

"Jonah H. Harris" <jonah.harris@gmail.com> writes:
> Because we wouldn't want multiple ANALYZEs running on the same table,
> changing the lock back to an AccessShareLock doesn't sound like a
> solution.

It flat will not work. We used to do it that way, and it didn't
(search for "tuple concurrently updated" in the archives).

> However, what are the thoughts around creating another,
> more-specific lock? Perhaps something like ShareUpdateAnalysisLock?

The general overhead involved in a whole new lock type is high enough
that I would resist taking this path. (It's certainly a lot more than
adding an entry to one enum someplace --- offhand I can name docs and
grammar as important issues. And no you don't get to have a hidden lock
type that no one can see.)


Also, as Alvaro points out, it's far from clear that concurrent VACUUM
and ANALYZE is as safe as you think --- they both want to write the same
fields in pg_class.

regards, tom lane


[GENERAL] ERROR: could not open relation with OID 49152

OK, I'm getting the above error on one of my fairly new 8.3 production
databases. It happens when I run a query to see the size of my
tables.

SELECT pg_relation_size(c.relfilenode), n.nspname AS schemaname,
c.relname AS tablename, pg_get_userbyid(c.relowner) AS tableowner,
t.spcname AS tablespace, c.relhasindex AS hasindexes, c.relhasrules AS
hasrules, c.reltriggers > 0 AS hastriggers
FROM pg_class c
LEFT JOIN pg_namespace n ON n.oid = c.relnamespace
LEFT JOIN pg_tablespace t ON t.oid = c.reltablespace
WHERE n.nspname = 'public';
ERROR: could not open relation with OID 49152

If I run this query:

SELECT c.relfilenode, n.nspna... same as the above after this...

I get this:

 relfilenode | schemaname |       tablename        | tableowner | tablespace | hasindexes | hasrules | hastriggers
-------------+------------+------------------------+------------+------------+------------+----------+-------------
       49956 | public     | paid_idx_pkey          | dgish      |            | f          | f        | f
       49958 | public     | page_access_timestamp  | dgish      |            | f          | f        | f
       49152 | public     | page_access            | dgish      |            | t          | f        | f

and a few other lines edited out for brevity.

Any ideas what the cause of this is? What part of my catalogs is borked?

Now, I can backup this db, and restore it with no errors, and the
error will go away. Same query on the same dataset returns the right
number of rows and their size.


Re: [HACKERS] Load spikes on 8.1.11

Andrew Sullivan <ajs@commandprompt.com> writes:
> On Tue, Jul 22, 2008 at 02:41:55AM +0530, Gurjeet Singh wrote:
>> I am aware of the heavy locking involved with Slony, which should mean that
>> it blocks the application connections; that'd be completely acceptable,
>> given all the warnings in the Slony docs. But what I am concerned about and
>> trying to hunt down is why <IDLE> backend processes are all consuming up all
>> of CPU (!!!) so much so that I am unable to fire up any new process!

> Ah, well, then, yes, the spinlock improvements probably will help
> you. But you should disabuse yourself of the idea that <IDLE>
> processes have no cost. You still have to talk to all those
> connections when doing schema changes.

Yeah. In fact this is sounding more and more like the known problem
with sinval message response causing a "thundering herd" effect: the
idle processes all sit idle until the sinval message queue gets long
enough to rouse alarm bells, and then they all get signaled at once
and all try to clean the message queue at once, leading to very
heavy contention for the SInvalLock. That code's been rewritten in
CVS HEAD to try to alleviate the problem, but no existing release
has the fix.

See thread here for prior report:
http://archives.postgresql.org/pgsql-performance/2008-01/msg00001.php

regards, tom lane


Re: [PERFORM] Performance on Sun Fire X4150 x64 (dd, bonnie++, pgbench)

--On tisdag, juli 22, 2008 01.20.52 +0200 Emil Pedersen
<emil.pedersen@its.uu.se> wrote:

>
> [...]
>
>> Yes I'd definitely prefer to go 8.3 as well but there are a couple
>> reasons for now I have to suck it up:
>> - 8.2 is the one in the 7.10 repository.
>> - I need plr as well and 8.3-plr debian package does not exist yet.
>>
>> (I know in both cases we could recompile and install it from there,
>> but ...)
>
> At least on debian it was quite easy to "backport" 8.3.3 from sid
> to etch using apt-get's source and build-dep functions. That way
> you get a normal installable package.
>
> I'm not sure, but given the similarity I would guess it won't be
> much harder on ubuntu.

I should have said that I was talking about PostgreSQL itself; I
missed the plr part. I apologize for the noise.

// Emil



Re: [pgadmin-hackers] Dialogs review

Dave Page a écrit :
> On Sat, Jul 19, 2008 at 3:50 PM, Guillaume Lelarge
> <guillaume@lelarge.info> wrote:
>> Okay... the wxListCtrl resize problem seems to be a wxMac confirmed bug. See
>> http://trac.wxwidgets.org/ticket/4814 bug report for more details. Not sure
>> about what we should do with this... debug the stuff on wxMac source files?
>> or simply put it in the BUGS file and continue the work ?
>
> I was trying to come up with a test case for this using the xrc sample
> app in wxWidgets - however it seems to work fine. I used the changes
> below - can you see what's different?:
>

I worked all the evening on this. The xrc sample app works for me. It
even works with our Definition notebook page. So, we should be able to
make it work on pgAdmin.

I tried to simplify our xrc file in pgAdmin, but I couldn't find a way
to make it work.


--
Guillaume.
http://www.postgresqlfr.org
http://dalibo.com

--
Sent via pgadmin-hackers mailing list (pgadmin-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgadmin-hackers

[pgadmin-hackers] dlgCheck review

<?xml version="1.0" encoding="ISO-8859-1"?>
<resource>
<object class="wxDialog" name="dlgCheck">
<title></title>
<style>wxDEFAULT_DIALOG_STYLE|wxCAPTION|wxSYSTEM_MENU|wxRESIZE_BORDER|wxRESIZE_BOX|wxTHICK_FRAME</style>
<object class="wxFlexGridSizer">
<cols>1</cols>
<growablecols>0</growablecols>
<growablerows>0</growablerows>
<object class="sizeritem">
<object class="wxNotebook" name="nbNotebook">
<object class="notebookpage">
<label>Properties</label>
<object class="wxPanel" name="pnlProperties">
<object class="wxFlexGridSizer">
<cols>2</cols>
<rows>4</rows>
<vgap>5</vgap>
<hgap>5</hgap>
<growablerows>1</growablerows>
<growablecols>1</growablecols>
<object class="sizeritem">
<object class="wxStaticText" name="stName">
<label>Name</label>
<pos>5,7d</pos>
</object>
<flag>wxALIGN_CENTRE_VERTICAL|wxTOP|wxLEFT|wxRIGHT</flag>
<border>4</border>
</object>
<object class="sizeritem">
<object class="wxTextCtrl" name="txtName">
<pos>70,5d</pos>
<size>135,-1d</size>
</object>
<flag>wxEXPAND|wxALIGN_TOP|wxTOP|wxLEFT|wxRIGHT</flag>
<border>4</border>
</object>
<object class="sizeritem">
<object class="wxStaticText" name="stWhere">
<label>Check</label>
<pos>5,22d</pos>
</object>
<flag>wxALIGN_CENTRE_VERTICAL|wxTOP|wxLEFT|wxRIGHT</flag>
<border>4</border>
</object>
<object class="sizeritem">
<object class="wxTextCtrl" name="txtWhere">
<pos>70,20d</pos>
<size>135,109d</size>
<style>wxTE_MULTILINE</style>
</object>
<flag>wxEXPAND|wxALIGN_TOP|wxTOP|wxLEFT|wxRIGHT</flag>
<border>4</border>
</object>
<object class="sizeritem">
<object class="wxStaticText" name="stComment">
<label>Comment</label>
<pos>5,134d</pos>
</object>
<flag>wxALIGN_CENTRE_VERTICAL|wxTOP|wxLEFT|wxRIGHT</flag>
<border>4</border>
</object>
<object class="sizeritem">
<object class="wxTextCtrl" name="txtComment">
<pos>70,132d</pos>
<size>135,49d</size>
<style>wxTE_MULTILINE</style>
</object>
<flag>wxEXPAND|wxALIGN_TOP|wxTOP|wxLEFT|wxRIGHT</flag>
<border>4</border>
</object>
<object class="sizeritem">
<object class="wxStaticText" name="stClusterSet">
<label>Use replication</label>
<pos>5,186d</pos>
</object>
<flag>wxALIGN_CENTRE_VERTICAL|wxTOP|wxLEFT|wxRIGHT</flag>
<border>4</border>
</object>
<object class="sizeritem">
<object class="wxComboBox" name="cbClusterSet">
<content/>
<pos>70,184d</pos>
<size>135,12d</size>
<style>wxCB_READONLY|wxCB_DROPDOWN</style>
</object>
<flag>wxEXPAND|wxALIGN_TOP|wxTOP|wxLEFT|wxRIGHT</flag>
<border>4</border>
</object>
<flag>wxTOP|wxLEFT|wxRIGHT|wxGROW</flag>
</object>
</object>
</object>
<selected>1</selected>
</object>
<pos>2,2d</pos>
<size>214,215d</size>
<flag>wxALL|wxGROW|wxALIGN_CENTRE</flag>
<border>3</border>
</object>
<object class="spacer">
<size>2,2d</size>
</object>
<object class="sizeritem">
<object class="wxFlexGridSizer">
<cols>7</cols>
<object class="spacer">
<size>3,3d</size>
</object>
<object class="sizeritem">
<object class="wxButton" name="wxID_HELP">
<label>Help</label>
<pos>135,220d</pos>
</object>
</object>
<object class="spacer">
<size>3,3d</size>
</object>
<object class="sizeritem">
<object class="wxButton" name="wxID_OK">
<label>&amp;OK</label>
<default>1</default>
<pos>135,220d</pos>
</object>
</object>
<object class="spacer">
<size>3,3d</size>
</object>
<object class="sizeritem">
<object class="wxButton" name="wxID_CANCEL">
<label>&amp;Cancel</label>
<pos>176,220d</pos>
</object>
</object>
<object class="spacer">
<size>3,3d</size>
</object>
<growablecols>2</growablecols>
</object>
<flag>wxTOP|wxLEFT|wxRIGHT|wxGROW</flag>
</object>
<object class="spacer">
<size>3,3d</size>
</object>
<object class="sizeritem">
<object class="unknown" name="unkStatusBar">
<size>-1,15d</size>
</object>
<flag>wxGROW|wxALIGN_CENTRE</flag>
<border>3</border>
</object>
</object>
</object>
</resource>
A small property dialog. Works on Linux and Mac OS X.

Comments?


--
Guillaume.
http://www.postgresqlfr.org
http://dalibo.com

Re: [PERFORM] Performance on Sun Fire X4150 x64 (dd, bonnie++, pgbench)

[...]

> Yes I'd definitely prefer to go 8.3 as well but there are a couple
> reasons for now I have to suck it up:
> - 8.2 is the one in the 7.10 repository.
> - I need plr as well and 8.3-plr debian package does not exist yet.
>
> (I know in both cases we could recompile and install it from there,
> but ...)

At least on debian it was quite easy to "backport" 8.3.3 from sid
to etch using apt-get's source and build-dep functions. That way
you get a normal installable package.

I'm not sure, but given the similarity I would guess it won't be
much harder on ubuntu.

// Emil



Re: [PERFORM] Less rows -> better performance?

Richard, thanks for your reply!

Richard Huxton schrieb:
> Andreas Hartmann wrote:
>> Dear PostgreSQL community,
>>
>> first some info about our application:
>>
>> - Online course directory for a University
>> - Amount of data: complete dump is 27 MB
>> - Semester is part of primary key in each table
>> - Data for approx. 10 semesters stored in the DB
>> - Read-only access from web application (JDBC)
>>
>> Our client has asked us if the performance of the application could be
>> improved by moving the data from previous years to a separate "archive"
>> application.
>
> If you had 27GB of data maybe, but you've only got 27MB - that's
> presumably all sitting in memory.

Here's some info about the actual amount of data:

SELECT pg_database.datname,
pg_size_pretty(pg_database_size(pg_database.datname)) AS size
FROM pg_database where pg_database.datname = 'vvz_live_1';

datname | size
---------------+---------
vvz_live_1 | 2565 MB

I wonder why the actual size is so much bigger than the data-only dump -
is this because of index data etc.?
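One way to check where the space goes (a sketch using the standard size functions; indexes and TOAST are reported together here):

```sql
-- Per-table breakdown: heap size vs. everything else (indexes, TOAST).
SELECT c.relname,
       pg_size_pretty(pg_relation_size(c.oid))   AS heap_size,
       pg_size_pretty(pg_total_relation_size(c.oid)
                      - pg_relation_size(c.oid)) AS indexes_and_toast
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.nspname NOT IN ('pg_catalog', 'information_schema')
ORDER BY pg_total_relation_size(c.oid) DESC
LIMIT 20;
```

A large gap between the dump size and these numbers can also indicate dead-tuple bloat, which VACUUM FULL or CLUSTER would reclaim.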


> What in particular is slow?

There's no particular bottleneck (at least none that we're aware of).
During the first couple of days after the beginning of the semester,
request processing tends to slow down due to the high load (many
students assembling their schedules). The customer upgraded the
hardware (which already helped a lot), but they asked us to find further
approaches to performance optimization.

-- Andreas


--
Andreas Hartmann, CTO
BeCompany GmbH
http://www.becompany.ch
Tel.: +41 (0) 43 818 57 01


[PERFORM] Perl/DBI vs Native

Hi,

I have run quite a few tests comparing how long a query takes to execute from Perl/DBI as compared to psql/libpq. No matter how many times I run the test, the results are always the same.

I run a SELECT of all rows on a fairly big table with the log_min_duration_statement option enabled. With psql, the server consistently logs half a second, while the exact same query executed through Perl/DBI consistently takes 2 seconds.

If I were timing the applications themselves, I wouldn't have been too surprised by these results; obviously, processing with Perl is slower than with a native application. But it's the postmaster that reports these durations. Could it be that the DBI module is slower at consuming the data?

Any light on the subject would be greatly appreciated.


Regards,

Val


__________________________________________________________
Not happy with your email address?.
Get the one you really want - millions of new email addresses available now at Yahoo! http://uk.docs.yahoo.com/ymail/new.html


[BUGS] BUG #4319: lower()/upper() does not know about UNICODE case mapping

The following bug has been logged online:

Bug reference: 4319
Logged by: Valentine Gogichashvili
Email address: valgog@gmail.com
PostgreSQL version: 8.3.1
Operating system: SuSE Linux (kernel 2.6.13-15.11-default)
Description: lower()/upper() does not know about UNICODE case mapping
Details:

Hi,

I understand that this is more of a feature than a bug, but that does not help me either way...

On Unicode databases, the lower() and upper() functions use the system
locale settings (which cannot be changed after initializing the database?)
and know nothing about the Unicode case mapping.

The problem really becomes 'a problem' on multilingual systems. I have to
store data for German, Russian and Romanian languages together.

On an 8.2.3 database with LC_CTYPE set to en_EN, the lower() function
actually corrupts UTF8 data by lowercasing UTF8 control bytes... I did not
have a chance to check how it works on 8.3, as I do not have any db
instance with LC_CTYPE set to en_EN.

I can understand that LC_COLLATE is something that cannot be determined
from the context of the character alone. But the case pair is always
defined in the Unicode standard and should not depend on LC_CTYPE. Or am
I wrong?
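A quick way to demonstrate the behavior (hypothetical session; the actual result depends on lc_ctype):

```sql
-- In a UTF8 database whose lc_ctype is a single-byte locale, case mapping
-- of non-Latin text misbehaves:
SHOW lc_ctype;
SELECT lower('ПРИВЕТ');  -- ideally 'привет'; with a mismatched locale the
                         -- Cyrillic letters may be left alone or mangled
```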

Regards,

-- Valentine Gogichashvili

--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs

Re: [PERFORM] Less rows -> better performance?

Andreas Hartmann wrote:
> Dear PostgreSQL community,
>
> first some info about our application:
>
> - Online course directory for a University
> - Amount of data: complete dump is 27 MB
> - Semester is part of primary key in each table
> - Data for approx. 10 semesters stored in the DB
> - Read-only access from web application (JDBC)
>
> Our client has asked us if the performance of the application could be
> improved by moving the data from previous years to a separate "archive"
> application.

If you had 27GB of data maybe, but you've only got 27MB - that's
presumably all sitting in memory.

What in particular is slow?

--
Richard Huxton
Archonet Ltd

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

Re: [BUGS] winsock error 10004

It's a long shot, but I found another interesting piece of info which might be the cause of this error.

http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=2471447&SiteID=1


Regards

Val


--- On Mon, 21/7/08, Rainer Bauer <usenet@munnin.com> wrote:

> From: Rainer Bauer <usenet@munnin.com>
> Subject: Re: [BUGS] winsock error 10004
> To: pgsql-bugs@postgresql.org
> Date: Monday, 21 July, 2008, 11:41 AM
> Cédric Villemain wrote:
>
> >I get the following error with postgresql 8.2.7 on a
> "Microsoft windows server
> >2003 R2 Standard Edition Service pack 1" :
> >
> >« LOG: could not rename temporary statistics file
> "global/pgstat.tmp"
> >to "global/pgstat.stat": Unknown winsock
> error 10004 »
>
> 10004 is WSAEINTR: Interrupted function call (a blocking
> operation was
> interrupted by a call to WSACancelBlockingCall).
>
> Not sure what happens in your case, but I was able to dig
> up this thread:
> <http://archives.postgresql.org/pgsql-patches/2006-04/msg00034.php>
>
> Maybe Magnus or some other Win32 hacker can shed some light
> on it.
>
> Rainer
>




[PERFORM] Less rows -> better performance?

Dear PostgreSQL community,

first some info about our application:

- Online course directory for a University
- Amount of data: complete dump is 27 MB
- Semester is part of primary key in each table
- Data for approx. 10 semesters stored in the DB
- Read-only access from web application (JDBC)

Our client has asked us if the performance of the application could be
improved by moving the data from previous years to a separate "archive"
application. This would reduce the overall amount of data in the main
application by about 80% at the moment.

Actually I doubt that this will have the desired effect, since the
semester is part of the primary key in virtually all tables (apart from
some small tables containing string constants etc.), and therefore
indexed. Some tests with EXPLAIN ANALYZE and some web tests (JMeter)
seem to confirm this, the queries showed the same performance with 2 and
10 semesters.

But since I'm not sure yet, I would very much appreciate any answers to
the following questions:

- Do you think the approach (reducing the data) is effective?
- Are there any particular tests which I should do?

Thanks a lot in advance!

-- Andreas

--
Andreas Hartmann, CTO
BeCompany GmbH
http://www.becompany.ch
Tel.: +41 (0) 43 818 57 01


Re: [pgsql-es-ayuda][OT] Urgent!!

Hello Carlos...

On Sun, 20 Jul 2008 23:55:47 -0500, Carlos Alberto Cardenas Valdivia wrote
> I'd like to ask you all a big, big favor... please stop sending me mail...
> on any topic... my mailbox is about to be blocked because of the huge
> amount of mail I receive every day. I hope you understand... Thanks for
> your understanding

Gmail is about to block you because you receive too many emails?!? Wow!!...

This list (like the vast majority) does not send you emails to annoy you. If
you receive emails from this list, it's because you (or someone acting in
your name who knows your password) subscribed of your own free will. To
unsubscribe, see the top menu, the 4th option on this page:

http://mail.postgresql.org/mj/mj_wwwusr/domain=postgresql.org?func=lists-long-full&extra=pgsql-es-ayuda

Regards, and have a good week!

-
-------------------------------------------
Sebastián Villalba
A. Gestión Informática
Facultad de Cs. Médicas
U.N.C. - Argentina
-------------------------------------------
--
TIP 3: If you found the answer to your problem, post it; others will thank you

Re: [BUGS] winsock error 10004

Cédric Villemain wrote:

>I get the following error with postgresql 8.2.7 on a "Microsoft windows server
>2003 R2 Standard Edition Service pack 1" :
>
>« LOG: could not rename temporary statistics file "global/pgstat.tmp"
>to "global/pgstat.stat": Unknown winsock error 10004 »

10004 is WSAEINTR: Interrupted function call (a blocking operation was
interrupted by a call to WSACancelBlockingCall).

Not sure what happens in your case, but I was able to dig up this thread:
<http://archives.postgresql.org/pgsql-patches/2006-04/msg00034.php>

Maybe Magnus or some other Win32 hacker can shed some light on it.

Rainer


Re: [PERFORM] Performance on Sun Fire X4150 x64 (dd, bonnie++, pgbench)

Greg Smith wrote:
>
> Note that I've had some issues with the desktop Ubuntu giving slower
> results in tests like this than the same kernel release using the
> stock kernel parameters. Haven't had a chance yet to see how the
> server Ubuntu kernel fits into that or exactly what the desktop one is
> doing wrong yet. Could be worse--if you were running any 8.04 I expect
> your pgbench results would be downright awful.

Ah, interesting. Isn't that an I/O scheduler problem? I thought CFQ was
the default for desktop kernels.
I double-checked the 7.10 server kernel on this box, and it really is the
deadline scheduler that is in use:

cat /sys/block/sdb/queue/scheduler
noop anticipatory [deadline] cfq

Do you have some more pointers on the 8.04 issues you mentioned?
(That's slated to be the next upgrade from ops.)

>> postgresql 8.2.9 with data and xlog as mentioned above
> There are so many known performance issues in 8.2 that are improved in
> 8.3 that I'd suggest you really should be considering it for a new
> install at this point.

Yes I'd definitely prefer to go 8.3 as well but there are a couple
reasons for now I have to suck it up:
- 8.2 is the one in the 7.10 repository.
- I need plr as well and 8.3-plr debian package does not exist yet.

(I know in both cases we could recompile and install it from there, but ...)

> In general, you'll want to use a couple of clients per CPU core for
> pgbench tests to get a true look at the scalability. Unfortunately,
> the way the pgbench client runs means that it tends to top out at 20
> or 30 thousand TPS on read-only tests no matter how many cores you
> have around. But you may find operations where peak throughput comes
> at closer to 32 clients here rather than just 8.
OK, makes sense.

> As far as the rest of your results go, Luke's comment that you may
> need more than one process to truly see the upper limit of your disk
> performance is right on target. More useful commentary on that issue
> I'd recomend is near the end of
> http://www.commandprompt.com/blogs/joshua_drake/2008/04/is_that_performance_i_smell_ext2_vs_ext3_on_50_spindles_testing_for_postgresql/
>
Yeah I was looking at that url as well. Very useful.

Thanks for all the info Greg.

-- stephane



[BUGS] Problem loading ispell affix file with apostrophes

I'm having problem with french dictionaries. Loading an ispell affix
file with apostrophes does not work. The file comes from the ifrench
(french dict for ispell) debian source package at
http://packages.debian.org/sid/ifrench

Here's the session excerpt:

------------------------------------------------------------------------
Welcome to psql 8.3.3, the PostgreSQL interactive terminal.

Type: \copyright for distribution terms
\h for help with SQL commands
\? for help with psql commands
\g or terminate with semicolon to execute query
\q to quit

dockee=# select plainto_tsquery('custom_french', 'bug');
ERROR: syntax error at line 158 of affix file
"/usr/share/postgresql/8.3/tsearch_data/ispell_french.affix"

dockee=# show lc_ctype ;
lc_ctype
-------------
en_US.UTF-8
(1 row)

dockee=# show client_encoding;
client_encoding
-----------------
UTF8
(1 row)

dockee=# show server_version;
server_version
----------------
8.3.3
(1 row)
------------------------------------------------------------------------


The 'custom_french' text configuration is defined as below:

------------------------------------------------------------------------
CREATE TEXT SEARCH CONFIGURATION public.custom_french ( COPY =
pg_catalog.french );

CREATE TEXT SEARCH DICTIONARY french_ispell (
TEMPLATE = ispell,
DictFile = ispell_french,
AffFile = ispell_french
);

ALTER TEXT SEARCH CONFIGURATION custom_french
ALTER MAPPING FOR
asciiword,
asciihword,
hword_asciipart,
word,
hword,
hword_part
WITH french_ispell ;

ALTER TEXT SEARCH CONFIGURATION custom_french
DROP MAPPING FOR
url,url_path,sfloat,float,file,int,version;
------------------------------------------------------------------------


Line 158 of ispell_french.affix corresponds to the first flag
definition that triggers a prefix with an apostrophe; it's the line
below "flag *N":

------------------------------------------------------------------------
flag *D: # dé: défaire, dégrossir
. > dé

flag *N: # élision d'une négation
[aàâeèéêiîoôuh] > n' # je n'aime pas, il n'y a pas
------------------------------------------------------------------------

Maybe apostrophes in ispell affix files are simply not supported? I
can't find a mention of this limitation in the documentation at
http://www.postgresql.org/docs/8.3/static/textsearch-dictionaries.html

When commenting out the offending flag definitions, the affix file
loads successfully. Thanks in advance for helping me resolve this
problem.
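As a stopgap, the commenting-out workaround can be automated. This is a quick Python sketch (my own, with a simplified notion of a "flag section"; it does not implement the real ispell affix grammar):

```python
def comment_out_apostrophe_flags(lines):
    """Comment out every 'flag' section whose rules contain an apostrophe.

    A 'section' here is a 'flag ...' line plus everything up to the next
    'flag' line -- a simplification of the actual affix file format.
    """
    out = []
    section = []

    def flush():
        if any("'" in line for line in section):
            out.extend('#' + line for line in section)  # disable the section
        else:
            out.extend(section)
        section.clear()

    for line in lines:
        if line.startswith('flag'):
            flush()  # close the previous section before starting a new one
        section.append(line)
    flush()
    return out
```

Running it over the excerpt above would leave `flag *D` untouched and comment out the `flag *N` block that trips the parser.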
--
Jean-Baptiste Quenot
http://jbq.caraldi.com/


Fw: Re: [GENERAL] C-procedure crashed in Postgres 8.3.3 when using 'text'variable (WinXP) - additional

Yes, I use a Microsoft Visual Studio compiler, but the full version, not the Express Edition. Just in case, I quote the code again (it's short):

----------------------------------------------------------------------
#include "postgres.h"
#include "fmgr.h"
#include "executor/executor.h"
#include "utils/timestamp.h"
#include "utils/builtins.h"
#include "utils/formatting.h"

#ifdef PG_MODULE_MAGIC
PG_MODULE_MAGIC;

Re: [pgadmin-hackers] Dialogs review

On Sat, Jul 19, 2008 at 3:50 PM, Guillaume Lelarge
<guillaume@lelarge.info> wrote:

> Anyways, I attach the new dlgCast.xrc patch file.

That looks pretty sweet, and works well :-). The only thing I note is
that even with a bigger gap between controls, the bottom highlight on
the active control (the blue border on OS X) still isn't shown. If
that's the case for you as well, then it seems pointless to leave such
a large gap. Reducing the borders from 8 to 4 looks better here.

Thoughts? Aside from that, I think that one is ready to go - and
should set the standard for the following dialogs :-)

--
Dave Page
EnterpriseDB UK: http://www.enterprisedb.com


Re: [BUGS] winsock error 10004

Le Monday 21 July 2008, Craig Ringer a écrit :
> Cédric Villemain wrote:
> > Hi,
> >
> > I get the following error with postgresql 8.2.7 on a "Microsoft windows
> > server 2003 R2 Standard Edition Service pack 1" :
> >
> > « LOG: could not rename temporary statistics file "global/pgstat.tmp"
> > to "global/pgstat.stat": Unknown winsock error 10004 »
> >
> > It happens frequently, without using SSL (I have seen some post relative
> > to SSL with windows).
> >
> > Any idea ?
>
> The usual question with weird Windows errors: Do you have a virus
> scanner? If so, have you tried completely disabling it or uninstalling
> it (NOT just excluding PostgreSQL) and re-tested?

Unfortunately, there is no virus scanner or anything like that on the server.

>
> --
> Craig Ringer

--
Cédric Villemain
Administrateur de Base de Données
Cel: +33 (0)6 74 15 56 53
http://dalibo.com - http://dalibo.org

Re: [pgadmin-hackers] Dialogs review

On Sat, Jul 19, 2008 at 3:50 PM, Guillaume Lelarge
<guillaume@lelarge.info> wrote:
> Okay... the wxListCtrl resize problem seems to be a wxMac confirmed bug. See
> http://trac.wxwidgets.org/ticket/4814 bug report for more details. Not sure
> about what we should do with this... debug the stuff on wxMac source files?
> or simply put it in the BUGS file and continue the work ?

I was trying to come up with a test case for this using the xrc sample
app in wxWidgets - however, it seems to work fine. I used the changes
below - can you see what's different?

*** controls.xrc.orig 2008-07-21 10:04:12.000000000 +0100
--- controls.xrc 2008-07-21 10:16:49.000000000 +0100
***************
*** 591,606 ****
<object class="wxPanel" name="listctrl">
<object class="wxFlexGridSizer">
<cols>1</cols>
! <rows>0</rows>
<vgap>0</vgap>
<hgap>0</hgap>
<growablecols>0</growablecols>
<growablerows>0</growablerows>
<object class="sizeritem">
! <flag>wxALIGN_CENTER|wxALL</flag>
<border>5</border>
<object class="wxListCtrl"
name="controls_listctrl">
- <size>220,160</size>
<style>wxLC_REPORT|wxSUNKEN_BORDER</style>
</object>
</object>
--- 591,605 ----
<object class="wxPanel" name="listctrl">
<object class="wxFlexGridSizer">
<cols>1</cols>
! <rows>1</rows>
<vgap>0</vgap>
<hgap>0</hgap>
<growablecols>0</growablecols>
<growablerows>0</growablerows>
<object class="sizeritem">
! <flag>wxGROW|wxEXPAND|wxTOP|wxLEFT|wxRIGHT</flag>
<border>5</border>
<object class="wxListCtrl"
name="controls_listctrl">
<style>wxLC_REPORT|wxSUNKEN_BORDER</style>
</object>
</object>

--
Dave Page
EnterpriseDB UK: http://www.enterprisedb.com


Re: [GENERAL] C-procedure crashed in Postgres 8.3.3 when using 'text' variable (WinXP) - additional

el dorado wrote:
>
> Hello.
> Thank you very much for your answer.

> 1>d:\pgsql83\getstring\c_getstring.c(10) : warning C4273: 'Pg_magic_func' : inconsistent dll linkage
> 1> d:\pgsql83\getstring\c_getstring.c(10) : see previous definition of 'Pg_magic_func'

This should be a big red flag to you. It suggests that you have issues
with __declspec(dllimport) and __declspec(dllexport) annotations.

It looks like you are using a Microsoft Visual Studio compiler. If you
are using Visual Studio 2005 Express Edition you must modify the
project's preprocessor definitions to set the WIN32 macro, as 2005
Express Edition does not do so by default. See earlier discussion in the
archives from a few months ago.

Alternately, modify the definition of the default win32 project to set
the right macros, then create a new project.

This issue does NOT affect the full version, only the Express edition.
It also does not affect 2008 Express Edition. I've just reproduced it on
a clean install of Windows and the Express Edition.

--
Craig Ringer


Re: [PERFORM] Performance on Sun Fire X4150 x64 (dd, bonnie++, pgbench)

Hi Stephane,

On 7/21/08 1:53 AM, "Stephane Bailliez" <sbailliez@gmail.com> wrote:

>> I'd suggest RAID5, or even better, configure all eight disks as a JBOD
>> in the RAID adapter and run ZFS RAIDZ. You would then expect to get
>> about 7 x 80 = 560 MB/s on your single query.
>>
> Do you have a particular controller and disk hardware configuration in
> mind when you're suggesting RAID5 ?
> My understanding was it was more difficult to find the right hardware to
> get performance on RAID5 compared to RAID10.

If you're running RAIDZ on ZFS, the controller you have should be fine.
Just configure the HW RAID controller to treat the disks as JBOD (eight
individual disks), then make a single RAIDZ zpool of the eight disks. This
will run them in a robust SW RAID within Solaris. The fault management is
superior to what you would otherwise have in your HW RAID and the
performance should be much better.

- Luke



Re: [PERFORM] Performance on Sun Fire X4150 x64 (dd, bonnie++, pgbench)

Luke Lonergan wrote:
>
> pgbench is unrelated to the workload you are concerned with if ETL/ELT
> and decision support / data warehousing queries are your target.
>
> Also - placing the xlog on dedicated disks is mostly irrelevant to
> data warehouse / decision support work or ELT. If you need to
> maximize loading speed while concurrent queries are running, it may be
> necessary, but I think you'll be limited in load speed by CPU related
> to data formatting anyway.
>
Indeed. pgbench was done mostly as 'informative' and isn't really
relevant to the future workload of this db (given the queries it runs,
I'm not sure it's relevant for anything but connection speed, though it
is interesting for me as a reference for tx-like workloads). I was more
interested in the raw disk performance.

>
> The primary performance driver for ELT / DW is sequential transfer
> rate, thus the dd test at 2X memory. With six data disks of this
> type, you should expect a maximum of around 6 x 80 = 480 MB/s. With
> RAID10, depending on the raid adapter, you may need to have two or
> more IO streams to use all platters, otherwise your max speed for one
> query would be 1/2 that, or 240 MB/s.
>
ok, which seems to be on par with what I'm getting (the 240, that is).

>
> I'd suggest RAID5, or even better, configure all eight disks as a JBOD
> in the RAID adapter and run ZFS RAIDZ. You would then expect to get
> about 7 x 80 = 560 MB/s on your single query.
>
Do you have a particular controller and disk hardware configuration in
mind when you're suggesting RAID5 ?
My understanding was it was more difficult to find the right hardware to
get performance on RAID5 compared to RAID10.

>
> That said, your single cpu on one query will only be able to scan that
> data at about 300 MB/s (try running a SELECT COUNT(*) against a table
> that is 2X memory size).
>
Not quite 2X memory size, but ~26GB (accounts table with scaling factor 2000):

$ time psql -c "select count(*) from accounts" pgbench
count
-----------
200000000
(1 row)

real 1m52.050s
user 0m0.020s
sys 0m0.020s
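Checking the arithmetic (mine, not from the mail): the timing above implies a scan rate very close to the ~240 MB/s single-stream RAID10 estimate quoted earlier.

```python
# Back-of-the-envelope scan rate from the count(*) timing above.
table_mb = 26 * 1024        # ~26 GB accounts table
elapsed_s = 1 * 60 + 52.05  # "real 1m52.050s"
scan_mb_s = table_mb / elapsed_s
print(round(scan_mb_s, 1))  # ~237.6 MB/s, in line with the 240 MB/s estimate
```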


NB: For the sake of completeness, I reran pgbench taking the average of
10 runs for each scaling factor (same configuration as in the initial
mail; columns are scaling factor, db size in MB, average tps):

1 20 23451
100 1565 21898
200 3127 20474
300 4688 20003
400 6249 20637
500 7810 16434
600 9372 15114
700 11000 14595
800 12000 16090
900 14000 14894
1000 15000 3071
1200 18000 3382
1400 21000 1888
1600 24000 1515
1800 27000 1435
2000 30000 1354
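One way to read the table (my own back-of-the-envelope, not from the mail): throughput collapses once the database outgrows RAM, between scale 900 (~14 GB) and scale 1000 (~15 GB), which would be consistent with a machine having roughly 16 GB.

```python
# Numbers transcribed from the table above: (scaling factor, average tps).
runs = [(1, 23451), (100, 21898), (200, 20474), (300, 20003), (400, 20637),
        (500, 16434), (600, 15114), (700, 14595), (800, 16090), (900, 14894),
        (1000, 3071), (1200, 3382), (1400, 1888), (1600, 1515), (1800, 1435),
        (2000, 1354)]

peak = max(tps for _, tps in runs)
# First scaling factor where throughput falls below half the peak: the knee
# where the working set presumably no longer fits in RAM.
knee = next(scale for scale, tps in runs if tps < peak / 2)
print(knee)  # 1000
```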

-- stephane


[HACKERS] Default of max_stack_depth and getrlimit

In 8.2, we started to use getrlimit(RLIMIT_STACK, ...) to set
max_stack_depth automatically, to a max of 2MB:

http://archives.postgresql.org/pgsql-hackers/2006-10/msg00389.php

However, it's not behaving as I expected when the stack limit is set to
"unlimited". I would expect max_stack_depth to be set to the max of 2MB,
but instead it gets set to 100kB.

I don't normally run without a limit, but it looks like the regression
tests run like that with "make check", at least on my platform. I bumped
into this while running a custom regression test with very deep nesting.

I think we should differentiate between "infinite" and "unknown" in the
return value of get_stack_depth_limit(), use a max_stack_depth of 2MB in
the infinite case, and fall back to 100kB only in the unknown case.
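The proposed behaviour can be sketched like this (an illustration using POSIX getrlimit semantics; the 2MB and 100kB constants come from the mail, but the function itself is my own, not PostgreSQL source):

```python
import resource  # POSIX rlimit access from Python

def choose_max_stack_depth(rlimit_cur):
    """Pick a max_stack_depth (in kB) from the RLIMIT_STACK soft limit."""
    MAX_KB, FALLBACK_KB = 2048, 100
    if rlimit_cur == resource.RLIM_INFINITY:
        return MAX_KB        # "unlimited": safe to use the full 2MB
    if rlimit_cur < 0:
        return FALLBACK_KB   # unknown/unreportable: stay conservative
    # Known finite limit: use it, capped at 2MB.
    return min(rlimit_cur // 1024, MAX_KB)

soft, _hard = resource.getrlimit(resource.RLIMIT_STACK)
print(choose_max_stack_depth(soft))
```

With this logic, "ulimit -s unlimited" (as under "make check" on some platforms) would yield 2MB instead of the 100kB fallback.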

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com


Re: [BUGS] winsock error 10004

Cédric Villemain wrote:
> Hi,
>
> I get the following error with postgresql 8.2.7 on a "Microsoft windows server
> 2003 R2 Standard Edition Service pack 1" :
>
> « LOG: could not rename temporary statistics file "global/pgstat.tmp"
> to "global/pgstat.stat": Unknown winsock error 10004 »
>
> It happens frequently, without using SSL (I have seen some post relative to
> SSL with windows).
>
> Any idea ?

The usual question with weird Windows errors: Do you have a virus
scanner? If so, have you tried completely disabling it or uninstalling
it (NOT just excluding PostgreSQL) and re-tested?

--
Craig Ringer


Re: [HACKERS] Postgres-R: primary key patches

Hi,

Alvaro Herrera wrote:
> Markus Wanner wrote:
>> (Although, I'm still less than thrilled about the internal storage
>> format of these tuple collections. That can certainly be improved and
>> simplified.)
>
> Care to expand more on what it is?

Well, what I really dislike is the overhead in code of first transforming
tuples into a string-based internal change-set representation, which then
gets serialized again. That makes two conversion steps, both prone to
error.

I'm about to merge those into a simpler tuple serializer, which shares
code with the initializer (or recovery provider/subscriber) part. This
is where I'd like to know what requirements Jan or others have. I will
try to outline the current implementation and requirements of Postgres-R
in a new thread soon.
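Purely as an illustration of the one-step versus two-step distinction (this is not Postgres-R code): routing values through a string intermediate adds a conversion that can silently corrupt data, which direct serialization avoids.

```python
import struct

def serialize_direct(tup):
    """One conversion: length-prefixed binary encoding of each field."""
    parts = []
    for field in tup:
        data = str(field).encode('utf-8')
        parts.append(struct.pack('!I', len(data)) + data)
    return b''.join(parts)

def serialize_via_string(tup):
    """Two conversions: tuple -> comma-joined string -> bytes. Lossy if a
    field itself contains a comma -- the kind of bug the extra step invites."""
    return ','.join(str(f) for f in tup).encode('utf-8')

row = (42, 'n,ame', 'x')
# The string route mangles the 3-field row into 4 fields on the way back:
print(len(serialize_via_string(row).decode('utf-8').split(',')))  # 4
```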

> On Replicator we're using the binary
> send/recv routines to transmit tuples. (Obviously this fails when the
> master and slave have differing binary output, but currently we just
> punt on this point).

Yeah, that's another point. I'm currently using the textual input/output
functions, but would like to switch to the binary ones as well. However,
that's an optimization, whereas the above is a simplification of the
code, which is more important to me at the moment.

Regards

Markus
