Tuesday, July 22, 2008

Re: [INTERFACES] PQstatus is getting blocked when database system powered off

Thanks for your reply.
You are right: PQstatus is not blocking; it is the query-issuing call that is
getting blocked.
Is there any way to handle this blocking situation?
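
One hedged possibility (a minimal sketch with an illustrative function name;
it uses libpq's asynchronous API so the wait can be bounded, instead of
calling the blocking PQexec()):

#include <sys/select.h>
#include <libpq-fe.h>

/* Sketch: run a query, but give up if no data arrives within 'seconds'. */
PGresult *
query_with_timeout(PGconn *conn, const char *sql, int seconds)
{
    if (!PQsendQuery(conn, sql))        /* dispatch; returns immediately */
        return NULL;

    while (PQisBusy(conn))              /* result not yet complete */
    {
        fd_set          readfds;
        struct timeval  tv = {seconds, 0};
        int             sock = PQsocket(conn);

        FD_ZERO(&readfds);
        FD_SET(sock, &readfds);
        if (select(sock + 1, &readfds, NULL, NULL, &tv) <= 0)
            return NULL;                /* timed out: server may be gone */
        if (!PQconsumeInput(conn))      /* read whatever has arrived */
            return NULL;
    }
    return PQgetResult(conn);           /* will not block at this point */
}

The caller should keep calling PQgetResult() until it returns NULL, and will
probably want to reset the connection after a timeout.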

Regards
Sk
----- Original Message -----
From: "Tom Lane" <tgl@sss.pgh.pa.us>
To: "Sunil Khatri" <sunil@6dtech.co.in>
Cc: <pgsql-interfaces@postgresql.org>
Sent: Tuesday, July 22, 2008 10:58 PM
Subject: Re: [INTERFACES] PQstatus is getting blocked when database system
powered off


> "Sunil Khatri" <sunil@6dtech.co.in> writes:
>> The problem is: PQstatus is getting blocked if the system on which the
>> database server is running is powered off.
>
> Hardly likely, considering that PQstatus is just
>
> ConnStatusType
> PQstatus(const PGconn *conn)
> {
>     if (!conn)
>         return CONNECTION_BAD;
>     return conn->status;
> }
>
>
> I have no doubt that a query-issuing call could get blocked if the
> postmaster stops responding, but not PQstatus.
>
> regards, tom lane


--
Sent via pgsql-interfaces mailing list (pgsql-interfaces@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-interfaces

Re: [HACKERS] Transaction-controlled robustness for replication

Hi,

Jens-Wolfhard Schicke wrote:
>> * Does WAL get forced to disk on primary at commit time?
>> * Does WAL get forced across link to standby at commit time?
>> * Does WAL get forced to disk on standby at commit time?
> * Does WAL get applied [and synced] to disk on standby at commit time?

I think that's what Simon means by his question no. 3. It wouldn't make
much sense to me otherwise.

I'm assuming the standby node has its own physical format, so the
changes from the remote WAL need to be transformed to a local WAL, which
then needs to be written to disk. For Postgres, this pretty much means
applying the changes and committing them. You never need to store the
remote WAL on physical storage; what would that be good for?

> I think that questions 2 and 3 are trivially bundled together. Once the
> user can specify 2, implementing 3 should be trivial and vice versa.

That might well be, yes. The code to collect changes from a transaction
and then apply them remotely is pretty much the same, no matter when it
is being executed. But it certainly makes a difference in the balance
between performance and availability, which is a decision the user
should be able to make for his specific application (or even better, per
transaction, as proposed here and in Postgres-R).

> I am not even convinced that these need to be two different parameters.

Consider a standby heavily loaded with the I/O of some OLAP queries. Why
should the master wait until that standby has written anything to disk
on its behalf?

> Also please note that an answer of "yes" to 3 means that 2 must also
> be answered "yes".

Agreed. There's no 'AS' mode possible, only 'SS', 'SA' and 'AA'.

> How about creating named modes? This would give the user the ability to
> define more fine-grained control especially in larger clusters of fail-over/read-only
> servers without totally clogging the parameter space and application code.
> Whether this should be done SQL-style or in some config file is not so clear to me,
> although I'd prefer SQL-style like
>
> CREATE SYNCHRONIZING MODE immediate_readonly AS
> LOCAL SYNCHRONOUS APPLY
> 192.168.0.10 SYNCHRONOUS APPLY -- read-only slave
> 192.168.0.11 SYNCHRONOUS APPLY -- read-only slave
> 192.168.0.20 SYNCHRONOUS SHIP -- backup-server
> 192.168.0.21 SYNCHRONOUS SHIP -- backup-server
> 192.168.0.30 SYNCHRONOUS FSYNC -- backup-server with fast disks
> ;

Hmm, that's an interesting idea, especially considering the number of
options that arise with more than two or three nodes, where you may
also want to specify how many nodes must have written the changes to
disk before the commit is confirmed.

In Postgres-R, I've added a TRANSACTION REPLICATION LEVEL, which can be
either SYNC, EAGER or LAZY. Maybe that's not quite sufficient. On the
other hand, I don't think any other option here makes any sense. (Above,
you yourself doubt that sync is different enough from eager).
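
As a sketch of how that per-transaction control might read (the SET
TRANSACTION framing is my assumption from the description above;
Postgres-R's actual grammar may differ):

SET TRANSACTION REPLICATION LEVEL LAZY;  -- or EAGER, or SYNC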

Regards

Markus


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [GENERAL] Substitute a variable in PL/PGSQL.

You can do it in straight SQL, like so:

SELECT (array[col1, col2, col3, col4, col5, col6, col7, col8, col9, col10])[i]
FROM test t, generate_series(1,10) i;

Art

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [HACKERS] Postgres-R: tuple serialization

Hi,

Decibel! wrote:
> Currently, Slony triggers are per-row, not deferred. IIRC, londiste
> is the same. ISTM it'd be much better if we had per-statement triggers
> that could see what data had changed; that'd likely be a lot more
> efficient than doing stuff per-row.

Well, now that I think about it... there might be *lots* of changes.
Certainly something you don't want to collect in memory. At the moment,
Postgres-R cannot handle this, but I plan to add an upper limit on the
change set size, and just send it out as soon as it exceeds that limit,
then continue collecting. (Note for the GCS adept: this partial change
set may be sent via reliable multicast, only the very last change set
before the commit needs to be totally ordered.)

That would mean introducing another 'change set full' hook...

> In any case, both replication systems should work with either sync or
> eager. I can't see them working with lazy.

Huh? AFAIK, londiste as well as Slony-I are both async. So what would
hooks for sync replication be good for? Why not rather only lazy hooks?

(Well, lazy hooks will pose yet another problem: those theoretically
need to run sometime *after* the commit, but at that time we don't have
an open transaction, so where exactly shall we do this?)

> What about just making all three available?

Doh. Ehm. That's a lot of work for something we are not even sure is
good for anything. I'm certainly willing to help, and if other projects
show enough interest, I might even add the appropriate triggers myself.
But as long as this is all unclear, I certainly have more important
things on my todo list for Postgres-R (see that TODO list ;-) ).

> Yeah. I suspect that Postgres-R could end up taking the place of the
> replica-hooks mailing list (and more, of course).

Let's hope so, yeah!

Regards

Markus Wanner

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] Do we really want to migrate plproxy and citext into PG core distribution?

On Wed, 2008-07-23 at 00:01 -0400, Tom Lane wrote:
> "Joshua D. Drake" <jd@commandprompt.com> writes:
> > On Tue, 2008-07-22 at 23:29 -0400, Tom Lane wrote:
> >> There's a limit to how far you can go there, because just about any
> >> distro (other than maybe Gentoo) is going to be resistant to dropping in
> >> bleeding-edge versions.

I actually think we are talking past each other. I know how distros
work, all too well frankly. Our repos would be unofficial in the Red Hat
eye. My point is, the Red Hat eye is irrelevant for a project like this.
Those who confine themselves to that ideal are a lost cause (for this
project). They will run ancient versions of PostgreSQL, and that's cool,
because they feel they can trust it.

On the other hand, those who need 8.3 (on RHEL4 for example) can get it,
now -- without breaking compatibility and with RPM.

Sincerely,


Joshua D. Drake


>
> regards, tom lane
>
--
The PostgreSQL Company since 1997: http://www.commandprompt.com/
PostgreSQL Community Conference: http://www.postgresqlconference.org/
United States PostgreSQL Association: http://www.postgresql.us/
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [pgsql-es-ayuda] how to get a consecutive field without failures?

> Sure, by putting the logic in question inside siguiente_factura(), and
> creating a nice index on it for performance. Yes, I know the invoice
> number must _not_ change, but... well, any attribute that a human can
> touch is a bad choice for a primary key. And I don't know how it is in
> your countries, but in Mexico you cannot print the invoice number
> yourself - it has to come pre-printed by an authorized printer. That
> just means the invoice number has to be captured once you have the
> printed form in hand (or ready in the printer).
>
> Regards,

Gunnar, in my country there is a very good system for the tax authority: the print shop has to register the forms it delivers, with a consecutive number running from 0 to 999999 and a series letter at the front; if the numeric limit is reached, the series changes.

The pre-printed forms can include delivery notes, invoices, cash sales, credit notes, discounts, and so on. Each of these can have its own series and numbering, controlled by the computer. You have to keep track of the printing sequence, which I manage by ordering by the ID of the invoice header, just as you propose; but I keep a table and a query for the numbering of each document type. There I follow Alvaro's proposal, which is locking: it causes no delay at all, even though I have many transactions per day with up to 5 terminals invoicing simultaneously.

At the close of the month, I export everything to OpenOffice ordered by that sequence, and in a column on the first row I enter the print-shop number and keep adding one; that way, from the ID, I can tell which document type and number corresponds to each print-shop number, which the tax authority requires.

I am telling you this so you can see a practical use of both approaches. In the document header I have id_cab, id_tipo_doc, serie, numero, fecha, id_empresa; so the primary key is id_cab and there is a unique key on id_tipo_doc, serie, numero, id_empresa.

Regards,
Gabriel Colina

--
TIP 2: you can unsubscribe from all the lists at once
(send "unregister YourEmailAddress" to majordomo@postgresql.org)

Re: [HACKERS] Do we really want to migrate plproxy and citext into PG core distribution?

"Joshua D. Drake" <jd@commandprompt.com> writes:
> On Tue, 2008-07-22 at 23:29 -0400, Tom Lane wrote:
>> There's a limit to how far you can go there, because just about any
>> distro (other than maybe Gentoo) is going to be resistant to dropping in
>> bleeding-edge versions.

> We could have a quality committee? Something that says, "These 5
> packages are considered stable by PGDG". Those go into the various
> repositories whether published directly to STABLE (or equiv) or are put
> into something like Universe.

I don't think you got the point: such pronouncements would have exactly
zero influence on Red Hat, or any other distro I'm familiar with.
The *assumption* is that upstream thinks their new release is stable,
else they wouldn't have made it. The distros are in the business of
not believing that, until more proof emerges --- preferably from their
own testing.

I know that this is the mind-set at Red Hat, and I'm pretty sure
SUSE and Debian work the same way.

regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] Do we really want to migrate plproxy and citext into PG core distribution?

On Tue, 2008-07-22 at 23:29 -0400, Tom Lane wrote:
> "Joshua D. Drake" <jd@commandprompt.com> writes:
> > On Tue, 2008-07-22 at 17:54 -0400, Tom Lane wrote:
> >> I believe Devrim already has a yum repository up and running for
> >> RPM-based distros, though I'm not sure he's got anything but the core
> >> packages in it (yet).
>
> > Well that was certainly part of my point. We have
> > http://www.pgsqlrpms.org/
> > ...
> > E.g; in short let's work with respective projects to get these as part
> > of the repositories.
>
> There's a limit to how far you can go there, because just about any
> distro (other than maybe Gentoo) is going to be resistant to dropping in
> bleeding-edge versions.

Certainly.

> *Especially* code that's not 99.44%+ compatible
> with what's in their current releases. To take the example I'm most
> closely familiar with: sure I can put the latest and greatest into
> Fedora rawhide, but that has approximately zip to do with what people
> are running in the field.

We could have a quality committee? Something that says, "These 5
packages are considered stable by PGDG". Those go into the various
repositories whether published directly to STABLE (or equiv) or are put
into something like Universe.

> So I think the real-world situation is that we have to make stuff
> available to people who want to run something newer/different from what
> their chosen distro ships. That means providing our own repo.
>

Yes that is what pgsqlrpms is.


> Certainly I've got no problem with pushing stuff to the official distros
> as fast as we can, but you've got to realize that that's gonna be a slow
> process, and necessarily always out of date for any distro version that
> is actually popular in the field.

I should note that my point is about using proper package formats first,
and working with distros second. I am under no illusion here: we will
likely have to have our own repos (which is one of the reasons we have
pgsqlrpms). The good news is, we have the beginnings of this already for
at least three major distros.

It should be relatively trivial to work with MacPorts, Fink and the
FreeBSD ports as well. I am sure the OpenSolaris group would be more
than happy to help too.

Sincerely,

Joshua D. Drake


--
The PostgreSQL Company since 1997: http://www.commandprompt.com/
PostgreSQL Community Conference: http://www.postgresqlconference.org/
United States PostgreSQL Association: http://www.postgresql.us/
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] [PATCH]-hash index improving

> -----Original Message-----
> From: pgsql-hackers-owner@postgresql.org [mailto:pgsql-hackers-
> owner@postgresql.org] On Behalf Of Xiao Meng
> Sent: Tuesday, July 22, 2008 7:57 PM
> To: Simon Riggs
> Cc: pgsql-hackers@postgresql.org
> Subject: Re: [HACKERS] [PATCH]-hash index improving
>
> Well, I'll do it after I finish my second patch.
> A hash index should be more efficient than a btree when N is big enough.
> It seems worthwhile to find out how big N is experimentally.

The savings will depend on many factors. Another important factor (besides volume) is the sort of key data being indexed.

Consider a unique key on six varchar(40) fields:

1. Country
2. State/Province
3. City
4. Division
5. Branch
6. Office

Typically, a key such as this will have lots of clumps of similar data, differentiated only in the final segments. This sort of index is often used for reporting purposes. To determine a unique entry, it is not unlikely that more than 200 characters will be traversed. A hash index gets a special boost here, because a much smaller data signature is stored: even a 64-bit hash occupies only 8 bytes. On the other hand, consider an index on a field consisting of a single character. Here the pages of the b-tree will hold a huge number of entries per page, requiring fewer pages to search, while the hash index is many times larger and hence more pages will have to be loaded.

These things make a hash index desirable:
1. Unique index
2. Long keys
3. Huge data cardinality
4. Equality search

These things make a hash index undesirable:
1. Non-unique index
2. Short keys
3. Small data sets

These things render a hash index worthless (except in COMBINATION with a b-tree type index):
1. Need for range searches like BETWEEN
2. Need for ORDER BY on the column(s)
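
To make the trade-off concrete, a sketch in SQL (table and index names are
illustrative; note that PostgreSQL hash indexes cover a single column and
support only the = operator):

-- The six-segment reporting key from above.
CREATE TABLE office_stats (
    country  varchar(40), state  varchar(40), city   varchar(40),
    division varchar(40), branch varchar(40), office varchar(40)
);

-- B-tree: compares and stores the long composite key, but also
-- supports range scans and ORDER BY.
CREATE INDEX office_stats_btree ON office_stats
    USING btree (country, state, city, division, branch, office);

-- Hash: stores only a small hash code per entry, usable solely for
-- equality lookups on that one column.
CREATE INDEX office_stats_hash ON office_stats USING hash (office);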

As an aside:
it would also be nice if you could CLUSTER both index and data values on
the hash. It may need a different algorithm than the b-tree clustering
concept. I know that Rdb uses different kinds of storage areas for hashed
indexes versus b-tree indexes.

This effort to create hashed indexes is very valuable, because the benefit grows as the data scales up: right at the point when things get tougher is when it becomes the most helpful. If you have a tiny table, it does not even matter whether you index it, because (for instance) 10 rows will probably always stay in memory and iteration will find what is needed instantly. But if you have hundreds of millions or billions of rows, performance really matters. So when the data scales to preposterous size (which it has an uncanny ability to do), the performance boost becomes even more valuable.

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] Do we really want to migrate plproxy and citext into PG core distribution?

"Joshua D. Drake" <jd@commandprompt.com> writes:
> On Tue, 2008-07-22 at 17:54 -0400, Tom Lane wrote:
>> I believe Devrim already has a yum repository up and running for
>> RPM-based distros, though I'm not sure he's got anything but the core
>> packages in it (yet).

> Well that was certainly part of my point. We have
> http://www.pgsqlrpms.org/
> ...
> E.g; in short let's work with respective projects to get these as part
> of the repositories.

There's a limit to how far you can go there, because just about any
distro (other than maybe Gentoo) is going to be resistant to dropping in
bleeding-edge versions. *Especially* code that's not 99.44%+ compatible
with what's in their current releases. To take the example I'm most
closely familiar with: sure I can put the latest and greatest into
Fedora rawhide, but that has approximately zip to do with what people
are running in the field. As soon as a Fedora release happens, I'm
constrained by compatibility issues as to what I can put into that
branch. RHEL releases ten times more so. I gather that Debian, for
instance, is even more paranoid than Red Hat about upstream version
bumps.

So I think the real-world situation is that we have to make stuff
available to people who want to run something newer/different from what
their chosen distro ships. That means providing our own repo.

Certainly I've got no problem with pushing stuff to the official distros
as fast as we can, but you've got to realize that that's gonna be a slow
process, and necessarily always out of date for any distro version that
is actually popular in the field.

regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] Do we really want to migrate plproxy and citext into PG core distribution?

On Tue, 2008-07-22 at 17:54 -0400, Tom Lane wrote:
>
> > apt-get install postgresql-plproxy
> > portinstall (I think that is the command) postgresql-plproxy
>
> I believe Devrim already has a yum repository up and running for
> RPM-based distros, though I'm not sure he's got anything but the core
> packages in it (yet).

I have about 50 packages there, and I do package many pgfoundry
projects, like plproxy, pgsphere, pgpool, orafce, plpgpsm, table_log,
etc.
--
Devrim GÜNDÜZ
devrim~gunduz.org, devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr
http://www.gunduz.org

Re: [GENERAL] pg_query transaction: auto rollback? begin or start?? commit or end???

Bill Wordsworth wrote:
> ...resending, email didn't go through.
>
> On Tue, Jul 22, 2008 at 1:02 PM, Bill Wordsworth
> <bill.wordsworth@gmail.com> wrote:
>> Is this recommended?
>>
>> pg_query("begin transaction read write;", $connection);
>> if(pg_transaction_status($connection) == 2) {
>> pg_query("insert...;", $connection);
>> pg_query("insert...;", $connection);
>> pg_query("insert...;", $connection);
>> }
>> pg_query("commit transaction;", $connection);
>> pg_close($connection);
>>
>> Now *any* error inside transaction will trigger auto rollback for
>> *all* inserts so I don't need to explicitly issue conditional
>> rollback? Also is "begin/commit transaction" == "start/end
>> transaction"??

What if something gets into an invalid state (e.g. you expect a record to
have 'active = 156' but it's something else)?

So in some cases yes you'll need to do a rollback. On the other hand, if
you don't explicitly do a commit, everything is rolled back.

Yes "begin" == "start transaction" and "commit" == "end transaction".

--
Postgresql & php tutorials
http://www.designmagick.com/

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [GENERAL] Problems Restarting PostgreSQL Daemon

On Tue, 2008-07-22 at 18:05 -0700, Rich Shepard wrote:
> On Wed, 23 Jul 2008, Andrej Ricnik-Bay wrote:
>
> > Now there's an interesting piece of information :) How long
> > ago did you upgrade it?
>

> ... something broke during the reboot. From /var/log/postgresql:
>
> FATAL: database files are incompatible with server
> DETAIL: The database cluster was initialized with PG_CONTROL_VERSION 812,
> but the server was compiled with PG_CONTROL_VERSION 833.
> HINT: It looks like you need to initdb.
>
> I still have the old pgsql (8.1.13) in a non-standard directory. I
> had run initdb after cleaning up the upgrade. Should I do so again?

It looks to me like your init script just isn't pointing to the 8.3.3
data directory. If you are unsure you can do this:

find / -name PG_VERSION

You likely have 2 or 3 of them. Find the one that says 8.3 and make sure
your startup script points there.

Joshua D. Drake

>
> Thanks,
>
> Rich
>
> --
> Richard B. Shepard, Ph.D. | Integrity Credibility
> Applied Ecosystem Services, Inc. | Innovation
> <http://www.appl-ecosys.com> Voice: 503-667-4517 Fax: 503-667-8863
>
--
The PostgreSQL Company since 1997: http://www.commandprompt.com/
PostgreSQL Community Conference: http://www.postgresqlconference.org/
United States PostgreSQL Association: http://www.postgresql.us/
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

[pgsql-jobs] Senior Software Engineer/Architect, San Diego, CA, USA

Company: Cymer, Inc
URL: http://www.cymer.com
Location: San Diego, CA, USA
Position: Regular Full Time

Cymer, Inc. is the world's leading supplier of deep ultraviolet (DUV)
laser illumination sources, the essential light source for DUV
photolithography systems. DUV lithography is a key enabling technology,
which has allowed the semiconductor industry to meet the exacting
specifications and manufacturing requirements for volume production of
today's advanced semiconductor chips.

Cymer has been selected to engineer the world's first extreme
ultraviolet (EUV) source designed to enable next-generation high-volume
manufacturing scanners well into the next decade.

Here's your chance to be part of an elite team with lasting global
impact. Cymer is seeking a very experienced, senior level, Software
Engineer/Architect. If you have a Computer Science or Engineering degree
and experience in software design and development using one or more of
the following technologies, we would like to talk to you:

Required (expert):
---------------------
C++
Linux/Unix

Required (competent):
---------------------
PostgreSQL
Java
Python

Desired:
---------------------
Embedded real-time operating systems
Firmware
Control algorithms
SECS/GEM

If you qualify and are interested in discussing this opportunity, please
contact me directly with a resume at the email address below.

Joe Conway
mail@joeconway.com


--
Sent via pgsql-jobs mailing list (pgsql-jobs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-jobs

Re: [pgsql-es-ayuda] how to get a consecutive field without failures?

In our case we have managed to keep the sequences under control: we check whether the insert completed correctly, and only then update the document number to the next value in the sequence; otherwise nothing is done. For the pre-printed invoices, each till window has its own invoice series, so the invoice in the printer always matches what the system is issuing. The procedure that decides whether or not to update the sequence lives in PHP; from there we call the PostgreSQL sequence procedure inside a transaction. There are other problems, such as a paper jam in the printer or a power cut at the instant the invoice is printed, but that is another topic.

2008/7/22 Javier Aquino H. <JAquino@lexuseditores.com>:
-----Original Message-----
From: pgsql-es-ayuda-owner@postgresql.org
[mailto:pgsql-es-ayuda-owner@postgresql.org] On behalf of Roberto Rodríguez
Pino
Sent: Thursday, July 17, 2008 11:47
To: pgsql-es-ayuda@postgresql.org
Subject: Re: [pgsql-es-ayuda] how to get a consecutive field without failures?

2008/7/17 Robert J. Briones C. <robert.briones@gmail.com>:
> On 16 July 2008 at 23:24, Carlos Mendez <lucas1850@gmail.com> wrote:
>> Hi there,
>>
>> how can I get a column of a table to hold consecutive numbers without
>> failures: 1, 2, 3, 4, 5, etc.? With a serial it sometimes fails,
>> because when there are errors it jumps from 2 to 4 and the 3 is lost.
>> I think one way would be to compute the max of this column and add 1
>> to it, and so on. I ask because I have to write an invoicing program
>> and it must not lose a single consecutive number; there must be no
>> gaps or lost numbers. Thanking you in advance for any help or
>> suggestions, regards.
>>
>
> Many times the numbers are lost because rows get deleted.
> The approach you describe is the one most commonly used, but once
> you have created 1, 2, 3, 4, 5 and you delete the three, you are left
> with 1, 2, 4, 5 and the next one added will be 6. In an invoicing
> system invoices are not deleted, only voided; so in this case the 3
> will still be there, but marked as voided.
>
> I have never had problems with a number being skipped.
> Check the script you are using.
>

> I have a similar problem: the sequence skips a value whenever an
> insert has failed.
> I am using JDBC. I needed to do 2 inserts, where the second depended
> on the first... but if there was any problem with the second, the
> first must not be carried out, for which I applied a rollback.
> But even so, the sequence identifiers kept getting "burned".
> Afterwards I checked the quantity of available numbers, and it was a
> considerable sum, so I did not give it much importance (in my problem
> it makes no difference to me if it jumps from a 3 to a 6)... but I am
> still left wondering how this could be solved.
> Can it be done in some way?
> Regards!
> Roberto

I have a counter (correlativo) table like this:

Table: t_correlativo
Tipo_doc char(2) PK
Serie_doc number(3) PK
Numero_doc number(8)


The steps are simple:

1. First you increment the counter value (numero_doc) corresponding to
the document and the series you are working with.
2. You do a select to fetch the counter value just generated (numero_doc).
3. You use this number for your movements table.

Since all these steps are inside a transaction, the counter row
corresponding to that document and series stays locked until a rollback
or commit is done.

Note: steps 1 and 2 can be done in a single update with value return,
and finally this whole process can also be put inside a trigger on the
movements table.
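
A minimal sketch of that single-statement variant (the values for tipo_doc
and serie_doc are illustrative; UPDATE ... RETURNING exists since
PostgreSQL 8.2):

BEGIN;
-- Steps 1 and 2 at once; the counter row stays locked until COMMIT/ROLLBACK.
UPDATE t_correlativo
   SET numero_doc = numero_doc + 1
 WHERE tipo_doc = 'FA' AND serie_doc = 1
RETURNING numero_doc;
-- ... insert into the movements table using the returned number ...
COMMIT;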

Regards,

Javier.




--
TIP 5: have you read our extensive FAQ?
        http://www.postgresql.org/docs/faqs.FAQ.html



--
Felix Gonzales
(c) 074-97-9720762

Re: [PERFORM] Performance of jobs

samantha mahindrakar wrote:
> Windows 2003 Server SP2
> Intel Xeon 3.2 GHz
> 3.75 Gb RAM
> System Drive 33.8 Gb
> Data drive 956 Gb
> PostgreSQL 8.2.6
> PERC RAID 10 I believe SCSI disks

... all of which look fairly reasonable, though you didn't say how many
disks (which is *very* important for performance) or how fast they are.

The concurrent jobs will compete for disk bandwidth, CPU time, and
memory for cache. If a single job already loads your server quite
heavily, then a second concurrent job will slow the original job down
and run slower itself. There's not much you can do about that except
schedule the jobs not to run concurrently.

You should probably start by using the system's performance monitoring
tools (whatever they are - I don't know much about that for Windows) to
determine how much RAM, CPU time, and disk I/O each job uses when run
alone. Time how long each job takes when run alone. Then run the jobs
concurrently, time them, and measure how much CPU time and (if possible)
disk I/O each job gets that way. See how much greater the total time
taken is when they're run concurrently and try to see if the jobs appear
to be CPU-limited, disk limited, etc.

If your controller has a battery-backed cache, consider turning on write
caching and see if that helps. DO NOT turn write caching on without a
battery-backed cache. If performance is important to you, you should
probably get a battery-backed cache module anyway.

Also make sure that PostgreSQL has enough shared memory configured. It
might be struggling to fit the data it needs for the jobs in its shared
memory cache when you run them concurrently. See the manual for a more
detailed explanation.
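
As a purely illustrative starting point (these numbers are assumptions, not
recommendations; the right values depend on the workload and the manual's
guidance):

# postgresql.conf -- hedged example for a machine with ~4 GB of RAM
shared_buffers = 256MB          # shared cache for table and index pages
work_mem = 16MB                 # memory per sort/hash operation
effective_cache_size = 2GB      # planner hint: memory available for caching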

You have not provided any useful information about exactly what you
think is wrong, what the problem jobs are, etc so it is not possible to
give more than very generic explanations. To be specific it'd probably
be necessary to know your data and queries. At minimum some measurements
of query times when run concurrently vs alone would be necessary.

--
Craig Ringer

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

Re: [GENERAL] Problems Restarting PostgreSQL Daemon

On Wed, 23 Jul 2008, Andrej Ricnik-Bay wrote:

> Now there's an interesting piece of information :) How long
> ago did you upgrade it?

Andrej,

A month ago; June 17th to be exact.

> From which version of pg to which version did you upgrade,

From 8.1.13 to 8.3.3.

> and how did you go about it? Chances are indeed that the postmasters
> logfile (/var/log/postgres) may hold crucial information as Tom suggested.

Well, after digging myself into a hole, I received help here and climbed
out. It was working last week (when I made some entries into my accounting
system and viewed the local version of our web site). However, ...

... something broke during the reboot. From /var/log/postgresql:

FATAL: database files are incompatible with server
DETAIL: The database cluster was initialized with PG_CONTROL_VERSION 812,
but the server was compiled with PG_CONTROL_VERSION 833.
HINT: It looks like you need to initdb.

I still have the old pgsql (8.1.13) in a non-standard directory. I
had run initdb after cleaning up the upgrade. Should I do so again?

Thanks,

Rich

--
Richard B. Shepard, Ph.D. | Integrity Credibility
Applied Ecosystem Services, Inc. | Innovation
<http://www.appl-ecosys.com> Voice: 503-667-4517 Fax: 503-667-8863

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [HACKERS] Transaction-controlled robustness for replication


Simon Riggs wrote:
> Asynchronous commit controls whether we go to disk at time of commit, or
> whether we defer this slightly. We have the same options with
> replication: do we replicate at time of commit, or do we defer this
> slightly for performance reasons. DRBD and other replication systems
> show us that there is actually another difference when talking about
> synchronous replication: do we go to disk on the standby before
> acknowledging the primary?
>
> We can generalise this as three closed questions, answered either Yes
> (Synchronous) or No (Asynchronous)
>
> * Does WAL get forced to disk on primary at commit time?
> * Does WAL get forced across link to standby at commit time?
> * Does WAL get forced to disk on standby at commit time?
* Does WAL get applied [and synced] to disk on standby at commit time?
This is important if you want to use the standby as a read-only node.
I am slightly confused about what the fsync setting does to all this,
hence the brackets.

I think that questions 2 and 3 are trivially bundled together. Once the
user can specify 2, implementing 3 should be trivial and vice versa.
I am not even convinced that these need to be two different parameters.
Also please note that an answer of "yes" to 3 means that 2 must also
be answered "yes".

> We could represent this with 3 parameters:
> synchronous_commit = on | off
> synchronous_standby_transfer = on | off
> synchronous_standby_wal_fsync = on | off
synchronous_standby_apply = on | off # just to propose a name

> Changing the parameter setting at transaction-level would be expensive
> if we had to set three parameters.
What exactly does "expensive" mean? All three parameters can probably be set
in one TCP packet from client to server.

> Or we could use just a single parameter
> synchronous_commit = 'AAA', 'SAA', 'SSA', 'SSS' or on | off when no
> log-based replication is defined
>
> Having the ability to set these at the transaction-level would be very
> cool. Having it set via a *single* parameter would make it much more
> viable to switch between AAA for bulk, low importance data and SSS for
> very low volume, critical data, or somewhere in between on the same
> server, at the same time.
The problem with a single parameter is that everything becomes
position-dependent, and if a new parameter is ever introduced, it's not
easy to upgrade old application code.

> So proposal in summary is
> * allow various modes of synchronous replication for perf/robustness
> * allow modes to be specified per-transaction
> * allow modes to be specified as a single parameter
How about creating named modes? This would give the user the ability to
define more fine-grained control especially in larger clusters of fail-over/read-only
servers without totally clogging the parameter space and application code.
Whether this should be done SQL-style or in some config file is not so clear to me,
although I'd prefer SQL-style like

CREATE SYNCHRONIZING MODE immediate_readonly AS
LOCAL SYNCHRONOUS APPLY
192.168.0.10 SYNCHRONOUS APPLY -- read-only slave
192.168.0.11 SYNCHRONOUS APPLY -- read-only slave
192.168.0.20 SYNCHRONOUS SHIP -- backup-server
192.168.0.21 SYNCHRONOUS SHIP -- backup-server
192.168.0.30 SYNCHRONOUS FSYNC -- backup-server with fast disks
;

and then something like

synchronize_mode = immediate_readonly;

Yeah, I know, give patches not pipe-dreams :)

Regards,
Jens-Wolfhard Schicke

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [GENERAL] Problems Restarting PostgreSQL Daemon

On 23/07/2008, Rich Shepard <rshepard@appl-ecosys.com> wrote:

> Andrej,

Hi Rich,

> Unless others consider this topic to be not appropriate for the list, I
> don't mind a public conversation. I thought that I attached the script to
> my original message; regardless, here's the attribution:

You did - my bad. I usually ignore attachments on mailing-lists,
and did so with yours.


> I upgraded postgres manually, not creating and using a Slackware package.
> It worked just fine until yesterday's reboot.

Now there's an interesting piece of information :) How long
ago did you upgrade it?
From which version of pg to which version did you upgrade,
and how did you go about it? Chances are indeed that the
postmasters logfile (/var/log/postgres) may hold crucial
information as Tom suggested.


> Thanks,
>
> Rich

Cheers,
Andrej


--
Please don't top post, and don't use HTML e-Mail :} Make your quotes concise.

http://www.american.edu/econ/notes/htmlmail.htm

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [PERFORM] Samsung 32GB SATA SSD tested

On Tue, Jul 22, 2008 at 6:04 PM, Jeffrey W. Baker <jwbaker@gmail.com> wrote:

> Strangely the RAID controller behaves badly on the TPC-B workload. It
> is faster than disk, but not by a lot, and it's much slower than the
> other flash configurations. The read/write benchmark did not vary when
> changing the number of clients between 1 and 8. I suspect this is some
> kind of problem with Areca's kernel driver or firmware.

Are you still using the 2.6.18 kernel for testing, or have you
upgraded to something like 2.6.22? I've heard many good things about
the areca driver in that kernel version.

This sounds like an interesting development I'll have to keep track
of. In a year or two I might be replacing 16 disk arrays with SSD
drives...

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

Re: [HACKERS][PATCHES] odd output in restore mode

Below my comments on the CommitFest patch:
pg_standby minor changes for Windows

Simon, I'm sorry you got me, a Postgres newbie, signed up for
reviewing your patch ;)

To start with, I'm not quite sure of the status of this patch
since Bruce's last comment on the -patches alias:

Bruce Momjian wrote:
> OK, based on these observations I think we need to learn more about the
> issues before making any changes to our code.

From easy to difficult:

1. Issues with applying the patch to CVS HEAD:

The second file in the patch
Index: doc/src/sgml/standby.sgml
appears to be misnamed -- the existing file in HEAD is
Index: doc/src/sgml/pgstandby.sgml

However, I still had issues after fixing the file name:

md@Garu:~/pg/pgsql$ patch -c -p0 < ../pg_standby.patch
patching file contrib/pg_standby/pg_standby.c
patching file doc/src/sgml/pgstandby.sgml
Hunk #1 FAILED at 136.
Hunk #2 FAILED at 168.
Hunk #3 FAILED at 245.
Hunk #4 FAILED at 255.
4 out of 4 hunks FAILED -- saving rejects to file doc/src/sgml/pgstandby.sgml.rej


2. Missing description for new command-line options in pgstandby.sgml

Simon Riggs wrote:
> Patch implements
> * recommendation to use GnuWin32 cp on Windows

I saw that in the changes to pgstandby.sgml, and it looks OK to me, but
there is no description of the proposed new command-line options -h and -p.


3. No coding style issues seen

Just one comment: the logic that selects the actual restore command to
be used has moved from CustomizableInitialize() to main() -- a matter
of personal taste, perhaps. But in my view the #ifdef
WIN32/HAVE_WORKING_LINK logic has become easier to read.


4. Issue: missing break in switch, silent override of '-l' argument?

This behaviour has been in there before and is not addresses by the
patch: The user-selected Win32 "mklink" command mode is never applied
due to a missing 'break' in CustomizableInitialize():

switch (restoreCommandType)
{
    case RESTORE_COMMAND_WIN32_MKLINK:
        SET_RESTORE_COMMAND("mklink", WALFilePath, xlogFilePath);
        /* no break: falls through, so "mklink" is overwritten by "copy" */
    case RESTORE_COMMAND_WIN32_COPY:
        SET_RESTORE_COMMAND("copy", WALFilePath, xlogFilePath);
        break;

A similar behaviour on Non-Win32 platforms where the user-selected
"ln" may be silently changed to "cp" in main():

#if HAVE_WORKING_LINK
restoreCommandType = RESTORE_COMMAND_LN;
#else
restoreCommandType = RESTORE_COMMAND_CP;

Re: [pgus-general] Small update

On Tue, 2008-07-22 at 16:54 -0700, Josh Berkus wrote:
> Joshua D. Drake wrote:
> > Hello,
> >
> No, you said that wrong. What you meant to say was:
>
> "I've revised the bylaws and will be posting them for the folks on the
> list to check.

No, that isn't what I meant to say. The board, not I, reviewed all
discussion from the previous review round. We then sent changes based on
that discussion to the attorney for a rewrite. We have received those
changes and are currently digesting them. If the board consents, we will
file them. If we don't agree, the board will then decide the next course
of action.

>
> Also, I've put up a proposed membership structure for comment. Please
> take a look at it and give feedback."

Do you have a URL? I don't see anything under the resolution itself.

Sincerely,

Joshua D. Drake

--
The PostgreSQL Company since 1997: http://www.commandprompt.com/
PostgreSQL Community Conference: http://www.postgresqlconference.org/
United States PostgreSQL Association: http://www.postgresql.us/
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate


--
Sent via pgus-general mailing list (pgus-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgus-general

[PERFORM] Samsung 32GB SATA SSD tested

For background, please read the thread "Fusion-io ioDrive", archived at

http://archives.postgresql.org/pgsql-performance/2008-07/msg00010.php

To recap, I tested an ioDrive versus a 6-disk RAID with pgbench on an
ordinary PC. I now also have a 32GB Samsung SATA SSD, and I have tested
it in the same machine with the same software and configuration. I
tested it connected to the NVIDIA CK804 SATA controller on the
motherboard, and as a pass-through disk on the Areca RAID controller,
with write-back caching enabled.

                                Service time percentile (ms)
            R/W TPS   R-O TPS    50th    80th    90th    95th
RAID            182       673      18      32      42      64
Fusion          971      4792       8       9      10      11
SSD+NV          442      4399      12      18      36      43
SSD+Areca       252      5937      12      15      17      21

As you can see, there are tradeoffs. The motherboard's ports are
substantially faster on the TPC-B type of workload. This little, cheap
SSD achieves almost half the performance of the ioDrive (i.e. similar
performance to a 50-disk SAS array.) The RAID controller does a better
job on the read-only workload, surpassing the ioDrive by 20%.

Strangely the RAID controller behaves badly on the TPC-B workload. It
is faster than disk, but not by a lot, and it's much slower than the
other flash configurations. The read/write benchmark did not vary when
changing the number of clients between 1 and 8. I suspect this is some
kind of problem with Areca's kernel driver or firmware.

On the bright side, the Samsung+Areca configuration offers excellent
service time distribution, comparable to that achieved by the ioDrive.
Using the motherboard's SATA ports gave service times comparable to the
disk RAID.

The performance is respectable for a $400 device. You get about half
the tps and half the capacity of the ioDrive, but for one fifth the
price and in the much more convenient SATA form factor.

Your faithful investigator,
jwb


--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

Re: [pgus-board] pictures

Hmm. Happy people in Postgres T-shirts, maybe? (It occurred to me
after I said that that it'd be difficult to show that a person was
happily using PostgreSQL in a simple picture. ;) )

---Michael Brewer
mbrewer@gmail.com

--
Sent via pgus-board mailing list (pgus-board@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgus-board

Re: [pgus-general] Small update

Joshua D. Drake wrote:
> Hello,
>
> Just FYI folks:
>
> The bylaws are in final review, we are hoping for determination on them
> directly after OSCON.
>
> The 1023 (IRS stuff) is due at the end of this week for review.
>
> All of our resolutions are public, you can find them here:

No, you said that wrong. What you meant to say was:

"I've revised the bylaws and will be posting them for the folks on the
list to check.

Also, I've put up a proposed membership structure for comment. Please
take a look at it and give feedback."

Right?

--Josh

--
Sent via pgus-general mailing list (pgus-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgus-general

Re: [pgus-board] pictures

On Tue, Jul 22, 2008 at 4:50 PM, Michael Alan Brewer <mbrewer@gmail.com> wrote:
> Hey, Selena; hope your talk is going/has gone well. ;) What kind of
> pictures do you mean? (PostgreSQL graphics? Happy users? JD, with
> the blue thingamabob on his head? ;) )

I think happy users :)

--
Selena Deckelmann
United States PostgreSQL Association - http://www.postgresql.us
PDXPUG - http://pugs.postgresql.org/pdx
Me - http://www.chesnok.com/daily

--
Sent via pgus-board mailing list (pgus-board@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgus-board

Re: [pgus-board] pictures

Hey, Selena; hope your talk is going/has gone well. ;) What kind of
pictures do you mean? (PostgreSQL graphics? Happy users? JD, with
the blue thingamabob on his head? ;) )

---Michael Brewer
mbrewer@gmail.com

--
Sent via pgus-board mailing list (pgus-board@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgus-board

Re: [pgadmin-hackers] dlgCheck review

Dave Page wrote:
> On Tue, Jul 22, 2008 at 2:48 PM, Guillaume Lelarge
> <guillaume@lelarge.info> wrote:
>> Dave Page wrote:
>>> Yup, works well. Should we have the labels top-aligned with the
>>> multi-line textboxes though? I'm not sure.
>>>
>> My opinion: we should have them top-aligned for consistency's sake. In the
>> dlgAggregate dialog, second tab, we have the wxListCtrl on the right, and a
>> label and a button on the left side. The label is top-aligned; the button
>> should be bottom-aligned to sit next to the add button. If this dialog shows
>> a label top-aligned, the others should as well.
>>
>> And I find top-alignment prettier on these dialogs :)
>
> On reflection, I think you're right.
>

Fixed.


--
Guillaume.
http://www.postgresqlfr.org
http://dalibo.com

--
Sent via pgadmin-hackers mailing list (pgadmin-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgadmin-hackers

Re: [SQL] PERSISTENT PREPARE (another point of view)

On 20/07/08 22:16, Milan Oparnica wrote:
> Try to write following simple scenario:
>
> a. Data is retrieved from two tables in INNER JOIN
> b. I don't need all fields, but just some of them from both tables
>
> Lets call tables Customers and Orders.
>
> Definition of tables are:
> Customers (CustomID INTEGER, Name TEXT(50), Adress TEXT(100))
> Orders (OrderID INTEGER, CustomID INTEGER, OrderNum TEXT(10))
>
> Now I need a list of order numbers for some customer:
>
> SELECT C.CustomID, C.Name, O.OrderNum
> FROM Customers C INNER JOIN Orders O ON C.CustomID=O.CustomID
> WHERE C.Name LIKE <some input parameter>
>
You can do this with cursors, though I'm not sure whether you still get
the query caching:

CREATE FUNCTION test(refcursor, input varchar) RETURNS refcursor AS $$
BEGIN
OPEN $1 FOR SELECT C.CustomID, C.Name, O.OrderNum
FROM Customers C INNER JOIN Orders O ON C.CustomID=O.CustomID
WHERE C.Name LIKE '%' || input || '%';
RETURN $1;
END
$$ LANGUAGE plpgsql;

Then to use:

BEGIN;
SELECT test('curs', <some input parameter>);
FETCH ALL FROM curs;
END;

Jeff

--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql

Re: [HACKERS] Plans for 8.4

Simon Riggs <simon@2ndquadrant.com> writes:
> On Tue, 2008-07-22 at 18:37 +0200, Markus Wanner wrote:
>> What is it you
>> are talking about?

> Reconstructing SQL from WAL, allowing logical apply.
> ...
> I'm proposing to put the mechanisms in place to allow WAL to be used by
> other replication projects. Which turn out to be relatively small
> changes to the main server.

[ retrieves eyebrows from ceiling... ] I doubt that's even possible,
let alone a "relatively small change".

regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] PATCH: CITEXT 2.0 v4

On Jul 18, 2008, at 01:39, Michael Paesold wrote:

> Calling regex functions with the case-insensitivity option would be
> great. It should also be possible to rewrite replace() into
> regexp_replace() by first escaping the regex meta characters.
>
> Actually re-implementing those functions in a case insensitive way
> would still be an option, but of course some amount of work. The
> question is, how much use case there is.

I've figured out how to make all the functions work using SQL function
workarounds, converting things and re-dispatching to the text versions
as appropriate. They work quite well, and can be converted to C later
if that becomes a requirement.
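
As an illustration of the kind of SQL-level re-dispatch described (a
sketch only; the names and details of the actual citext patch may differ):

-- Hedged sketch: forward citext regexp_replace() to the text version,
-- forcing the case-insensitive flag.
CREATE OR REPLACE FUNCTION regexp_replace(citext, citext, text)
RETURNS text AS $$
    SELECT regexp_replace($1::text, $2::text, $3, 'i');
$$ LANGUAGE sql IMMUTABLE;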

Meanwhile, on the question of whether or not regular expression and
LIKE comparisons *should* match case-insensitively, I have a couple
more observations:

* Thinking about how a true case-insensitive collation would work, I'm
quite certain that it would match case-insensitively. Anything else
would just be unexpected, because in a case-insensitive collation,
lowercase characters are, in practice, identical to uppercase
characters. As far as matching is concerned, there is no difference
between them. So the matching operators and functions against CITEXT
should follow that assumption.

* I tried a few matches on MySQL, where the collation is case-
insensitive by default, and it confirms my impression:

mysql> select 'Foo' regexp 'o$';
+-------------------+
| 'Foo' regexp 'o$' |
+-------------------+
|                 1 |
+-------------------+
1 row in set (0.00 sec)

mysql> select 'Foo' regexp 'O$';
+-------------------+
| 'Foo' regexp 'O$' |
+-------------------+
|                 1 |
+-------------------+
1 row in set (0.00 sec)

mysql> select 'Foo' like '%o';
+-----------------+
| 'Foo' like '%o' |
+-----------------+
|               1 |
+-----------------+
1 row in set (0.00 sec)

mysql> select 'Foo' like '%O';
+-----------------+
| 'Foo' like '%O' |
+-----------------+
|               1 |
+-----------------+
1 row in set (0.00 sec)

I'll grant that MySQL may not be the best model for how things should
work, but it's something, at least. Anyone else got access to another
database with case-insensitive collations to see how LIKE and regular
expressions work?

Thanks,

David

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [GENERAL] Problems Restarting PostgreSQL Daemon

On Wed, 23 Jul 2008, Andrej Ricnik-Bay wrote:

> Since there are no official Slackware postgres packages I'd like to ask
> where that script came from :) and how you installed postges in the first
> place. Happy to communicate of the list if you prefer that.

Andrej,

Unless others consider this topic to be not appropriate for the list, I
don't mind a public conversation. I thought that I attached the script to my
original message; regardless, here's the attribution:

# PostgreSQL startup script for Slackware Linux
# Copyright 2007 Adis Nezirovic <adis _at_ linux.org.ba>
# Licensed under GNU GPL v2

I upgraded postgres manually, not creating and using a Slackware package.
It worked just fine until yesterday's reboot.

Thanks,

Rich

--
Richard B. Shepard, Ph.D. | Integrity Credibility
Applied Ecosystem Services, Inc. | Innovation
<http://www.appl-ecosys.com> Voice: 503-667-4517 Fax: 503-667-8863

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [SQL] Strange query duration

"Fernando Hevia" <fhevia@ip-tel.com.ar> writes:
> The 'f_tasador' procedure is quite fast. As per log output I can see the
> procedure completes its execution within one second. Nevertheless in the LOG
> duration entry it shows a statement duration of over 36 secs.

It says milliseconds, not seconds.

regards, tom lane

--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql

Re: [PERFORM] Perl/DBI vs Native

On Tue, Jul 22, 2008 at 9:48 AM, Greg Sabino Mullane <greg@turnstep.com> wrote:
>> In case someone is wondering, the way to force DBI to use unix
>> sockets is by not specifying a host and port in the connect call.
>
> Actually, the host defaults to the local socket. Specifying the port
> may still be needed: if you leave it out, it simply uses the default
> value (5432). Thus, for most purposes, just leaving the host out is
> enough to cause a socket connection on the default port.

For the further illumination of the historical record, the best
practice here is probably to use the pg_service.conf file, which may
or may not live in /etc depending on your operating system. Then you
can connect in DBI using dbi:Pg:service=whatever, and change the
definition of "whatever" in pg_service.conf. This has the same
semantics as PGSERVICE=whatever when using psql. It's a good idea to
keep these connection details out of your program code.
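
A sketch of such an entry (the service name and values are illustrative):

# pg_service.conf -- hypothetical "whatever" service
[whatever]
host=db.example.com
port=5432
dbname=mydb
user=myapp

The Perl side then connects with dbi:Pg:service=whatever, and the
connection details can change without touching program code.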

-jwb

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

Re: [GENERAL] Optimizing a like-clause

On Tuesday 22 July 2008, Stefan Sturm <stefan.s.sturm@googlemail.com> wrote:
> Hello,
>
> I'm developing a autocomplete Feature using php and PostgreSQL 8.3.
> To fill the autocomplete box I use the following SQL Statement:
> select * from _table_ where upper( _field_ ) like '%STRING%';
>
> This SQL Statement takes 900 ms on a Table with 300.000 entries.
>
> What can I do to speed up the Statement? What Index can I set?

No index can be used for that query (where the pattern starts with %).

--
Alan

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [SQL] Strange query duration

On Tue, Jul 22, 2008 at 12:10 PM, Fernando Hevia <fhevia@ip-tel.com.ar> wrote:

> Nevertheless in the LOG
> duration entry it shows a statement duration of over 36 secs.
> What is going on? Where do those 36 seconds come from?

> 2008-07-22 15:52:37 ART|postgres| LOG: duration: 38.154 ms

:o) You might be encountering a bit of parallax. This shows both 38
and 36 *milliseconds*.


--
Regards,
Richard Broersma Jr.

Visit the Los Angles PostgreSQL Users Group (LAPUG)
http://pugs.postgresql.org/lapug

--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql

Re: [SQL] Strange query duration

On Tue, 22 Jul 2008, Fernando Hevia wrote:

> I just enabled log duration in a 8.3.1 database and got puzzling
> information.
> I have a daemon shell-script run every 10 seconds the following:
>
> psql -c "select f_tasador();"
>
> The 'f_tasador' procedure is quite fast. As per log output I can see the
> procedure completes its execution within one second. Nevertheless in the LOG
> duration entry it shows a statement duration of over 36 secs.

I think you're misreading the log. It's showing a number of milliseconds
(ms), not seconds.

> 2008-07-22 15:52:37 ART|postgres| LOG: statement: select f_tasador();
> 2008-07-22 15:52:37 ART|postgres| LOG: duration: 38.154 ms
> 2008-07-22 15:52:47 ART|postgres| LOG: statement: select f_tasador();
> 2008-07-22 15:52:47 ART|postgres| LOG: duration: 36.781 ms

--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql

Re: [HACKERS] Schema-qualified statements in pg_dump output

On Tue, 2008-07-22 at 13:35 -0400, Tom Lane wrote:
> Simon Riggs <simon@2ndquadrant.com> writes:
> > No need to specify the name at pg_dump time.
> > For text files, just use an option to specify whether we change the
> > actual schema name and replace it with the text :PGDUMPSCHEMA.
>
> pg_restore is in even worse position than pg_dump to make this happen;
> it would not be able to do anything that's smarter than a sed-like
> substitution.

Somebody just needs to check carefully to see what will work. I accept
there is no easy option that is materially better than sed.

I've screwed up a dump with sed before and was lucky to notice. I'm not
playing Russian Roulette again. On reflection, the chance of the schema
name also appearing somewhere in the data seems high.
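To illustrate the risk (a contrived sketch; the table and data are
hypothetical): a schema name can easily occur inside COPY data, where a
blind substitution corrupts user data along with the identifiers:

-- a fragment of a pg_dump text dump:
COPY myschema.notes (body) FROM stdin;
see myschema.settings for details
\.
-- a blind "sed 's/myschema/newschema/g'" rewrites the data row
-- as well as the qualified table name.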

--
Simon Riggs www.2ndQuadrant.com
PostgreSQL Training, Services and Support


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[pgsql-es-ayuda] I can't install pgsql, it asks me to change the port

Hello,

I'm trying to install pgsql 8.3.3 on Windows XP, but it asks me to change the port. It must be because I previously made a failed installation and that port is now in use. How do I clear or free it so I can use it and continue the installation?
The message is:
The port number entered is already in use. please specify a different port.
Thanks in advance for the help.
Regards.

Re: [pgsql-es-ayuda] problem with a dump of a db

Hello, thanks for the help.
The problem is now solved; I wasn't typing the paths correctly.
Regards

 
2008/7/21, Alvaro Herrera <alvherre@alvh.no-ip.org>:
Carlos Mendez wrote:
> Hello, thanks for the replies. This is getting complicated because I
> need to make a dump of the db.
> pg_dump is in /usr/local/pgsql/bin/
> I cd into that directory, type pg_dump, and it responds:
> bash: pg_dump command not found

The directory ("folder") you are currently in is not automatically
added to the list of places where programs are looked up (the PATH).
You can invoke it as ./pg_dump while you are in that directory, or you
can give the full path from anywhere else: /usr/local/pgsql/bin/pg_dump

--
Alvaro Herrera       Valdivia, Chile   ICBM: S 39º 48' 55.3", W 73º 15' 24.7"
"Executive Executive Summary: The [Windows] Vista Content Protection
specification could very well constitute the longest suicide note in history."
     Peter Guttman, http://www.cs.auckland.ac.nz/~pgut001/pubs/vista_cost.txt

Re: [HACKERS] [WIP] collation support revisited (phase 1)

Martijn van Oosterhout wrote:
> On Mon, Jul 21, 2008 at 03:15:56AM +0200, Radek Strnad wrote:
>> I was trying to sort out the problem with not creating new catalog for
>> character sets and I came up following ideas. Correct me if my ideas are
>> wrong.
>>
>> Since collation has to have a defined character set.
>
> Not really. AIUI at least glibc and ICU define a collation over all
> possible characters (ie unicode). When you create a locale you take a
> subset and use that. Think about it: if you want to sort strings and
> one of them happens to contain a Chinese character, it can't *fail*.
> Note strcoll() has no error return for unknown characters.

It has.
See http://www.opengroup.org/onlinepubs/009695399/functions/strcoll.html

The strcoll() function may fail if:

[EINVAL]
[CX] The s1 or s2 arguments contain characters outside the domain of
the collating sequence.


>> I'm suggesting to use
>> already written infrastructure of encodings and to use list of encodings in
>> chklocale.c. Currently databases are not created with specified character
>> set but with specified encoding. I think instead of pointing a record in
>> collation catalog to another record in character set catalog we might use
>> only name (string) of the encoding.
>
> That's reasonable. From an abstract point of view collations and
> encodings are orthogonal; it's only when you're using POSIX locales
> that there are limitations on how you combine them. I think you can
> assume a collation can handle any characters that can be produced by
> the encoding.

I think that is not correct. You cannot use a single collation over all
of UNICODE. See
http://www.unicode.org/reports/tr10/#Common_Misperceptions. The same
characters can be ordered differently in different languages.

Zdenek

--
Zdenek Kotala Sun Microsystems
Prague, Czech Republic http://sun.com/postgresql


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[GENERAL] pg_query transaction: auto rollback? begin or start?? commit or end???

Is this recommended?

pg_query("begin transaction read write;", $connection);
if (pg_transaction_status($connection) == 2) {
    pg_query("insert...;", $connection);
    pg_query("insert...;", $connection);
    pg_query("insert...;", $connection);
}
pg_query("commit transaction;", $connection);
pg_close($connection);

Will *any* error inside the transaction now trigger an automatic
rollback of *all* the inserts, so that I don't need to explicitly issue
a conditional rollback? Also, is "begin/commit transaction" the same as
"start/end transaction"?
Cheers, Bill

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [lapug] food for this friday's meeting

I will be at the meeting. Probably bringing one or two people with me as
well.

Charles

Geoff Kloess wrote:
> I can provide food for the meeting. The meeting is in Garden Grove
> again this time, right? I'll also try to get a better room for the
> meeting than we had last time. Will we need a projector or anything
> else?
>
> Geoff Kloess
> Senior Information Technology Analyst
> City of Garden Grove
> 714-741-5572
> geoffk@ci.garden-grove.ca.us
>
>
>


--
Charles Wyble (818) 280 - 7059
http://charlesnw.blogspot.com
CTO Known Element Enterprises / SoCal WiFI project


--
Sent via lapug mailing list (lapug@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/lapug

Re: [BUGS] Re: BUG #4319: lower()/upper() does not know about UNICODE case mapping

On Tuesday, 22 July 2008, valgog wrote:
> Why does Postgres allow creating a UNICODE database with a locale that
> can possibly corrupt my data?

It doesn't allow it, as of 8.3. In 8.2 it does, but we have fixed that, for
the reasons that are becoming obvious to you now.

Perhaps part of the problem is that en_EN isn't actually a valid locale, as
far as I can tell, unless SUSE has invented a new country. :) Try locale -a
and pick one from that list.

--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs

Re: [HACKERS] Do we really want to migrate plproxy and citext into PG core distribution?

On Monday, 21 July 2008, Tom Lane wrote:
> So my feeling is that we should not accept either of these patches.

My feeling as well.

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[GENERAL] Using ISpell dictionary - headaches...

Hi everybody.

Well... I have a problem when trying to install and use an ISpell dictionary (the Thai one to be more precise) with the tsearch feature.

What I am trying to do

I have a table containing a "title" field, and I want to fill a "vector" field with the following command:
UPDATE thai_table SET vectors = to_tsvector('thai_utf8', coalesce(title,''));

How I installed the Thai dictionary

I installed the "th_TH.dic" and the "th_TH.aff" files (downloaded from http://wiki.services.openoffice.org/wiki/Dictionaries) in a "/usr/local/share/dicts/ispell/" folder, and I executed the following commands:

SET search_path = public;
BEGIN;

INSERT INTO pg_ts_dict (dict_name, dict_init, dict_initoption, dict_lexize, dict_comment)
VALUES (
        'th_spell_utf8',
        'spell_init(internal)',
        'DictFile="/usr/local/share/dicts/ispell/th_TH.dic",AffFile="/usr/local/share/dicts/ispell/th_TH.aff"',
        'spell_lexize(internal,internal,integer)',
        'Thai ISpell dict utf8 encoding'
    );

INSERT INTO pg_ts_cfg (ts_name, prs_name, locale) VALUES ('thai_utf8', 'default', 'th_TH.utf8');

INSERT INTO pg_ts_cfgmap (ts_name, tok_alias, dict_name) VALUES ('thai_utf8', 'email', '{simple}');
INSERT INTO pg_ts_cfgmap (ts_name, tok_alias, dict_name) VALUES ('thai_utf8', 'url', '{simple}');
INSERT INTO pg_ts_cfgmap (ts_name, tok_alias, dict_name) VALUES ('thai_utf8', 'host', '{simple}');
INSERT INTO pg_ts_cfgmap (ts_name, tok_alias, dict_name) VALUES ('thai_utf8', 'sfloat', '{simple}');
INSERT INTO pg_ts_cfgmap (ts_name, tok_alias, dict_name) VALUES ('thai_utf8', 'version', '{simple}');
INSERT INTO pg_ts_cfgmap (ts_name, tok_alias, dict_name) VALUES ('thai_utf8', 'uri', '{simple}');
INSERT INTO pg_ts_cfgmap (ts_name, tok_alias, dict_name) VALUES ('thai_utf8', 'file', '{simple}');
INSERT INTO pg_ts_cfgmap (ts_name, tok_alias, dict_name) VALUES ('thai_utf8', 'float', '{simple}');
INSERT INTO pg_ts_cfgmap (ts_name, tok_alias, dict_name) VALUES ('thai_utf8', 'int', '{simple}');
INSERT INTO pg_ts_cfgmap (ts_name, tok_alias, dict_name) VALUES ('thai_utf8', 'uint', '{simple}');
INSERT INTO pg_ts_cfgmap (ts_name, tok_alias, dict_name) VALUES ('thai_utf8', 'lword', '{th_spell_utf8,simple}');
INSERT INTO pg_ts_cfgmap (ts_name, tok_alias, dict_name) VALUES ('thai_utf8', 'nlword', '{th_spell_utf8,simple}');
INSERT INTO pg_ts_cfgmap (ts_name, tok_alias, dict_name) VALUES ('thai_utf8', 'word', '{th_spell_utf8,simple}');
INSERT INTO pg_ts_cfgmap (ts_name, tok_alias, dict_name) VALUES ('thai_utf8', 'part_hword', '{th_spell_utf8,simple}');
INSERT INTO pg_ts_cfgmap (ts_name, tok_alias, dict_name) VALUES ('thai_utf8', 'nlpart_hword', '{th_spell_utf8,simple}');
INSERT INTO pg_ts_cfgmap (ts_name, tok_alias, dict_name) VALUES ('thai_utf8', 'lpart_hword', '{th_spell_utf8,simple}');

COMMIT;


What my problem is

The problem is that, when I execute the query to fill my "vectors" field, psql crashes...

server closed the connection unexpectedly
        This probably means the server terminated abnormally
        before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
!>

I have no idea what may cause that, nor what I could look for to find
ideas on how to solve it.

It *may* be because I'm using psql 8.0.3 and not the latest version
(but I'm stuck with that version). I'm just hoping that one of you has
met a similar problem and successfully solved it, or maybe you know a
site where an ISpell dictionary installation is detailed step by step,
so that I can check if I did something wrong somewhere...
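Is there perhaps a way to test the dictionary in isolation first? I was
thinking of something like the following (assuming tsearch2's lexize()
function is the right tool here; the sample word is arbitrary):

-- should return the lexemes for the word, or NULL if the lookup fails
SELECT lexize('th_spell_utf8', 'ทดสอบ');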

Many thanks for your attention,
Daniel Chiaramello

[BUGS] Re: BUG #4319: lower()/upper() does not know about UNICODE case mapping

On Jul 21, 4:32 pm, alvhe...@commandprompt.com (Alvaro Herrera) wrote:
> Valentine Gogichashvili wrote:
>
> > On an 8.2.3 database with LC_CTYPE set to en_EN, the lower() function
> > is actually corrupting UTF8 data, lowering UTF8 control bytes... I
> > didn't have a chance to check how it works on 8.3, as I do not have
> > any db instance with LC_CTYPE set to en_EN.
>
> I don't know about Unicode case mapping, but en_EN is not a UTF8 locale
> which is why it's corrupting the data.  Try en_EN.utf8 instead.
>
> --
> Alvaro Herrera                                http://www.CommandPrompt.com/
> The PostgreSQL Company - Command Prompt, Inc.
>
> --
> Sent via pgsql-bugs mailing list (pgsql-b...@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-bugs

Why does Postgres allow creating a UNICODE database with a locale that
can possibly corrupt my data?

--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs

[BUGS] LDAP Authentication with spaces in DN

I'm trying to configure LDAP authentication. I added a line to pg_hba.conf with the following config:

host    all         all         127.0.0.1/32          ldap  "ldap://ldapserver:389/o=My Organization,C=PT;cn =;,o=My Organization,C=PT"

When I try to authenticate, I get a fatal error line in pg_log like this:

LOG:  LDAP login failed for user "cn =user1,o=My" on server "ldapserver": error code 49

FATAL:  LDAP authentication failed for user "user1"

[BUGS] Re: BUG #4319: lower()/upper() does not know about UNICODE case mapping

On Jul 21, 4:32 pm, alvhe...@commandprompt.com (Alvaro Herrera) wrote:
> Valentine Gogichashvili wrote:
>
> > On an 8.2.3 database with LC_CTYPE set to en_EN, the lower() function
> > is actually corrupting UTF8 data, lowering UTF8 control bytes... I
> > didn't have a chance to check how it works on 8.3, as I do not have
> > any db instance with LC_CTYPE set to en_EN.
>
> I don't know about Unicode case mapping, but en_EN is not a UTF8 locale
> which is why it's corrupting the data.  Try en_EN.utf8 instead.
>
> --
> Alvaro Herrera                                http://www.CommandPrompt.com/
> The PostgreSQL Company - Command Prompt, Inc.
>
> --
> Sent via pgsql-bugs mailing list (pgsql-b...@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-bugs

Will en_EN.utf8 know about German, Russian, Romanian and Dutch?

--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs

Re: [HACKERS] overlaps performance

Grzegorz Jaśkiewicz <gj@pointblue.com.pl> writes:

> Tom Lane wrote:
>> The reason we don't automatically translate OVERLAPS is that the spec's
>> definition of OVERLAPS is too weird for that to work; in particular
>> it demands a true result for some cases in which one of the four
>> endpoints is NULL, which'd be pretty hard to do with an interval-style
>> index.
>
> a shame; I'm just working on a thing that would benefit from an index
> that could be used in OVERLAPS. I don't know the internals, except for
> how GiST works, hence my question.

Ah, but the transformation given is actually a bit of a red herring. If
you look at the plan, it's doing two bitmap index scans which together
are effectively doing a full index scan. The benefit comes from applying
the full overlap condition to the index tuples and only scanning the
heap for matching tuples. Presumably the index is much smaller than the
table and/or cached in memory, so the random accesses are outweighed by
the lower i/o.

This does raise the possibility that we should check for index scan paths if
we have selective enough columns even if the pathkeys aren't a prefix of the
index pathkeys. We would have to do a full index scan but the cost might still
be lower.

I think the reason we don't (aside from it not being at all useful in
the past) is that it would lead to a lot of possible index scans being
considered.
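For reference, the kind of manual rewrite being discussed looks roughly
like this (a sketch with hypothetical table and column names, assuming
non-NULL endpoints, which is precisely what the spec's OVERLAPS does
not let the planner assume):

-- spec form: not directly indexable
SELECT * FROM bookings
WHERE (start_ts, end_ts) OVERLAPS
      (timestamp '2008-07-01', timestamp '2008-07-31');

-- manual rewrite: plain range conditions the planner can match
-- against btree indexes on start_ts and end_ts
SELECT * FROM bookings
WHERE start_ts < timestamp '2008-07-31'
  AND end_ts   > timestamp '2008-07-01';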

--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's Slony Replication support!

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [SQL] How to Select a Tuple by Nearest Date

On Tue, 22.07.2008 at 10:42:56 +0200, Christian Kindler wrote the following:
> Hello
>
> Assume I have a table like
> create table foo (
> id serial,
> foodate date,
> primary key(id)
> );
>
> with 2 tuples
> insert into foo(foodate) values('2008-07-07'); -- id = 1
> insert into foo(foodate) values('2008-07-04'); -- id = 2
>
> What I need is to select the nearest tuple for a given date, and I do
> not know how to do this.
>
> Something like:
> select id from foo where foodate = nearest('2008-07-06');
> -> should return 1
>
> select id from foo where foodate = nearest('2008-07-05');
> -> should return 2
>
> How can I do this? Note: I have a large table (> 5,000,000 rows), so a
> well-performing way would be a welcome asset :)


Quick try:

test=*# select * from ((select id, foodate-'2008-07-06'::date as
difference from foo where foodate > '2008-07-06'::date order by foodate
asc limit 1) union (select id, '2008-07-06'::date-foodate from foo
where foodate < '2008-07-06' order by foodate desc limit 1)) bar order
by 2 asc limit 1;
id | difference
----+------------
1 | 1
(1 row)

test=*# select * from ((select id, foodate-'2008-07-05'::date as
difference from foo where foodate > '2008-07-05'::date order by foodate
asc limit 1) union (select id, '2008-07-05'::date-foodate from foo
where foodate < '2008-07-05' order by foodate desc limit 1)) bar order
by 2 asc limit 1;
id | difference
----+------------
2 | 1
(1 row)
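Note the order by inside each subquery: without it, limit 1 would
return an arbitrary qualifying row rather than the nearest one. With an
index on foodate, both probes stay fast even on millions of rows.

A more compact alternative (a sketch; it forces a sequential scan, so
it suits smaller tables better):

select id from foo order by abs(foodate - date '2008-07-06') asc limit 1;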


Regards, Andreas
--
Andreas Kretschmer
Contact: Heynitz: 035242/47150, D1: 0160/7141639 (more: -> header)
GnuPG-ID: 0x3FFF606C, private 0x7F4584DA http://wwwkeys.de.pgp.net

--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql

Re: [pgsql-de-allgemein] Articles in Linux Technical Review: Databases

On Tue, 22.07.2008 at 10:13:19 +0200, Markus Wanner wrote the following:
> Hello,
>
> the upcoming issue of Linux Technical Review [1], on the topic of
> databases, will carry several articles by German Postgres developers,
> among them an overview of database replication by me. The issue is
> available from Thursday the 24th, but as far as I know only by order
> or subscription.
>
> Greetings
>
> Markus Wanner
>
> [1]: Preview of the upcoming LTR issues:
> http://www.linux-magazin.de/heft_abo/ausgaben/technical_review/vorschau_ltr_09_12

Psssst: there is also an article on performance analysis coming...

The two authors should likewise be known to readers here.


Andreas
--
Andreas Kretschmer
Contact: Heynitz: 035242/47150, D1: 0160/7141639 (more: -> header)
GnuPG-ID: 0x3FFF606C, private 0x7F4584DA http://wwwkeys.de.pgp.net

--
Sent via pgsql-de-allgemein mailing list (pgsql-de-allgemein@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-de-allgemein

[SQL] How to Select a Tuple by Nearest Date

Hello

Assume I have a table like
create table foo (
id serial,
foodate date,
primary key(id)
);

with 2 tuples
insert into foo(foodate) values('2008-07-07'); -- id = 1
insert into foo(foodate) values('2008-07-04'); -- id = 2

What I need is to select the nearest tuple for a given date, and I do not know how to do this.

Something like:
select id from foo where foodate = nearest('2008-07-06');
-> should return 1

select id from foo where foodate = nearest('2008-07-05');
-> should return 2

How can I do this? Note: I have a large table (> 5,000,000 rows), so a well-performing way would be a welcome asset :)

Thanks
Christian

--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql

[pgsql-de-allgemein] Articles in Linux Technical Review: Databases

Hello,

the upcoming issue of Linux Technical Review [1], on the topic of
databases, will carry several articles by German Postgres developers,
among them an overview of database replication by me. The issue is
available from Thursday the 24th, but as far as I know only by order
or subscription.

Greetings

Markus Wanner

[1]: Preview of the upcoming LTR issues:
http://www.linux-magazin.de/heft_abo/ausgaben/technical_review/vorschau_ltr_09_12

--
Sent via pgsql-de-allgemein mailing list (pgsql-de-allgemein@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-de-allgemein

[HACKERS] Postgres-R: tuple serialization

Hi,

yesterday, I promised to outline the requirements of Postgres-R for
tuple serialization, which we have been talking about before. There are
basically three ways to serialize tuple changes, depending on whether
they originate from an INSERT, UPDATE or DELETE. For updates and
deletes, it saves the old pkey as well as the origin (a global
transaction id) of the tuple (required for consistent serialization on
remote nodes). For inserts and updates, all added or changed attributes
need to be serialized as well.

            pkey+origin   changes
INSERT      -             x
UPDATE      x             x
DELETE      x             -

Note that the pkey attributes may never be null, so an isnull bit field
can be skipped for those attributes. For the insert case, all attributes
(including primary key attributes) are serialized. Updates require an
additional bit field (well, I'm using chars ATM) to store which
attributes have changed; only those should be transferred.

I'm tempted to unify that, so that inserts are serialized as the
difference against the default values or NULL. That would make things
easier for Postgres-R. However, how about other uses of such a fast
tuple applicator? Does such a use case exist at all? I mean, for
parallelizing COPY FROM STDIN, one certainly doesn't want to serialize
all input tuples into that format before feeding multiple helper
backends. Instead, I'd recommend letting the helper backends do the
parsing and therefore parallelize that as well.

For other features, like parallel pg_dump or even parallel query
execution, this tuple serialization code doesn't help much, IMO. So I'm
thinking that optimizing it for Postgres-R's internal use is the best
way to go.

Comments? Opinions?

Regards

Markus

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [GENERAL] Drop Index and Create Index

On Tue, Jul 22, 2008 at 1:39 AM, Giovanni Nervi
<giovanni.nervi@gmail.com> wrote:
> Hi,
>
> I would like to recreate all indexes in a database. The reindex
> command takes an exclusive lock on the table, so I prefer the drop
> index / create index method. Is there a script to extract the create
> index commands? I looked at pg_dump, but it extracts all object
> definitions. I saw the source of pgAdmin for extracting an index
> definition, so before starting to develop a shell script to extract
> the definitions of all indexes, I'm asking whether such a script
> already exists.

start with "select * from pg_indexes" and go from there.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [PERFORM] Less rows -> better performance?

[...]
> You can check this too:
> select relname, relpages, reltuples, relkind
> from pg_class
> where relkind in ('r', 'i')
> order by relpages desc limit 20;
>
> This will give you the top-20 tables and their sizes. One page is
> typically 8KB, so you can cross-check whether relpages/reltuples is
> completely off; that is a good indicator of table/index bloat.
Use this query instead:
select pg_size_pretty(pg_relation_size(oid)) as relation_size, relname,
relpages, reltuples, relkind
from pg_class
where relkind in ('r', 'i')
order by relpages desc limit 20;

The output will be much more readable.
>
> Regards,
> Mario
Lukasz

--
Lukasz Filut
DBA - IT Group, WSB-TEB Corporation
Poznan - Poland


--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance