Monday, September 29, 2008

Re: [NOVICE] plpgsql functions vs. embedded queries

"Wright, George" <George.Wright@infimatic.com> writes:
>> Are functions in plpgsql always slower than embedding SQL queries in
>> string form in code like PHP?
> The functions took on average more than 10 times as long.

That suggests that you're getting a radically different, less efficient
plan for the "same" query inside a function. The exact reasons why are
hard to diagnose without a concrete example, but usually the story has
to do with comparing parameterized queries inside a function to
not-parameterized queries elsewhere. There are various workarounds
but the best choice depends on details you've not shown us.
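One standard workaround is to build the query text dynamically inside the function with EXECUTE, so the planner sees the actual parameter value rather than a generic placeholder. A minimal sketch, assuming a hypothetical orders(customer_id) table:

```sql
-- Hypothetical example: EXECUTE forces the query to be planned with the
-- actual value of p_id, instead of the generic parameterized plan that a
-- directly embedded query inside plpgsql would get.
CREATE OR REPLACE FUNCTION get_order_count(p_id integer) RETURNS bigint AS $$
DECLARE
    result bigint;
BEGIN
    EXECUTE 'SELECT count(*) FROM orders WHERE customer_id = ' || p_id::text
        INTO result;
    RETURN result;
END;
$$ LANGUAGE plpgsql;
```

Whether this helps depends, as noted above, on the details of the actual query; EXECUTE trades plan caching for value-specific plans.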

regards, tom lane

--
Sent via pgsql-novice mailing list (pgsql-novice@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-novice

Re: [GENERAL] pg_start_backup() takes too long

Simon Riggs wrote:
> > If it is a bug then I'd vote for just making it do an immediate
> > checkpoint --- that might cause big I/O load but it's hardly likely to
> > be worse than what will happen when you start taking the subsequent
> > filesystem backup.
>
> It was a clear intention for it to *not* cause a spike if we could avoid
> it. The idea was if you wanted it to happen quickly then you could do a
> checkpoint command first... oh well.
>
> People might want to I/O limit the backup also, which they can do
> without needing to let us know.
>
> I'm happy to put an option in for this, so we have another function:
> pg_start_backup(label text, immediate_chkpt boolean). I'll not be
> rushing to do this though given my current TODO.

I agree with Tom; either we make the pg_start_backup() checkpoint
immediate or leave the behavior unchanged.

Personally I think immediate makes more sense because issuing
pg_start_backup() seems like it should behave like a manual CHECKPOINT
command.
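For reference, the existing workaround Simon alludes to can be sketched as follows -- issue a manual CHECKPOINT just before starting the backup, so the checkpoint inside pg_start_backup() finds little left to write ('nightly' is an arbitrary label):

```sql
-- Force an immediate checkpoint so pg_start_backup()'s own checkpoint
-- has almost nothing to do and returns quickly.
CHECKPOINT;
SELECT pg_start_backup('nightly');
-- ... take the filesystem-level copy of the data directory here ...
SELECT pg_stop_backup();
```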

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ If your life is a hard drive, Christ can be your backup. +


Re: [ADMIN] Do we need vacuuming when tables are regularly dropped?

"Scott Marlowe" <scott.marlowe@gmail.com> writes:
> Also, there was a time when you couldn't do vacuum full on system
> tables due to locking issues, and had to take the db down to single
> user mode to do so.

There was a short period when *concurrent* vacuum fulls on just the
wrong combinations of system catalogs could deadlock (because they both
needed to look up stuff in the other one). AFAIK we fixed that. It's
never been the case that it didn't work at all.

regards, tom lane


[pgsql-jobs] Recruiting Consulting - Web 2.0 & Online Casual Games Specialist

Hi,

I am looking for top LAMP Developers & Architects (front-end/UI and back-end) for exciting online casual game companies and Web 2.0 companies located in SF & San Mateo. All companies offer top pay/benefits and opportunities.

If you would like to know more, please email me or call.

P.S. I bring lots of recruiting expertise, having staffed Stormfront Studios' award-winning dev team for 10+ years and, most recently, having worked with Technorati as an in-house contract recruiter hiring for their dev team.

Best,

Marta Daglow
mdaglow@daglowconsulting.com

Cell 415-461-5845


[pgsql-es-ayuda] Survey for the analysis for the project to create a PostgreSQL certification

Original announcement (in English):
http://archives.postgresql.org/pgsql-advocacy/2008-09/msg00059.php

--------

Hello,

The PostgreSQL community is working to create a community-driven and
community-endorsed PostgreSQL certification. This effort is spearheaded
by the PostgreSQL Certification Project
(http://www.postgresqlcertification.org). The primary goal of the
project is the development and support of a series of tiered
certifications that provide a standardized method of identifying the
database knowledge and skills of professionals working with PostgreSQL.

At this time, the PostgreSQL Certification Project is pleased to
announce the availability of a survey that will help shape the content
and the initial form the certifications will take. This survey is
publicly available and everyone is encouraged to participate.

To take the survey, please register at:
http://www.postgresqlcertification.org/jta

The registration process takes just a couple of minutes. Once you are
registered and logged in, click the JTA menu or go to the link marked
"participate in the survey"
(http://www.postgresqlcertification.org/job_task_analysis). The
survey takes about 30 minutes.


We ask that you please take the time to complete the survey, and that
you refer any employer, manager, colleague, or anyone else who depends
in some way on a PostgreSQL database to take it as well. With the
community's help we can reach as wide an audience as possible.

If you are interested in joining the PostgreSQL Certification Project,
please visit
http://lists.postgresqlcertification.org/mailman/listinfo/cert/ and
subscribe to the mailing list after taking the survey.

--
Sincerely,
Jaime Casanova
PostgreSQL support and training
Systems consulting and development
Guayaquil - Ecuador
Cell +59387171157
--
TIP 5: Have you read our extensive FAQ?
http://www.postgresql.org/docs/faqs.FAQ.html

[COMMITTERS] npgsql - Npgsql2: [npgsql-help][1004020] Backend sent unrecognized

Log Message:
-----------


[npgsql-help][1004020] Backend sent unrecognized response type: q

References for previous patch: http://pgfoundry.org/forum/message.php?msg_id=1004022.

Modified Files:
--------------
Npgsql2/src/Npgsql:
NpgsqlError.cs (r1.5 -> r1.6)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/npgsql/Npgsql2/src/Npgsql/NpgsqlError.cs.diff?r1=1.5&r2=1.6)


[COMMITTERS] npgsql - Npgsql2: Fixed error message: Backend sent unrecognized

Log Message:
-----------


Fixed error message: Backend sent unrecognized response type: q. There was a bug in error message handling where we were leaving some fields unprocessed. Thanks to Eric Montague (eric @nospam@ nuws.com) for the heads-up and tests.

Modified Files:
--------------
Npgsql2/src/Npgsql:
NpgsqlError.cs (r1.4 -> r1.5)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/npgsql/Npgsql2/src/Npgsql/NpgsqlError.cs.diff?r1=1.4&r2=1.5)


Re: [HACKERS] Updates of SE-PostgreSQL 8.4devel patches

KaiGai Kohei wrote:
> As I have repeated several times, SE-PostgreSQL applies the security policy
> of SELinux to achieve consistency in access controls. This feature makes it
> possible to restrict a client's privileges when it accesses database objects,
> just as when it accesses filesystem objects. The background for this is our
> nightmare with web application flaws.
>
> The major purpose of this feature is to provide the most important
> component for running enterprise-class web applications with a
> least-privilege set that is consistent across the whole system.

How important is this consistency goal in reality? We typically
recommend that database applications run entirely in the database, for
transaction integrity reasons and so on. Unless you are doing wild and
fun things with server-side copy or untrusted procedural languages,
there really shouldn't be that much use for consistency of access
control between PostgreSQL and something else. In fact, on top of the
transactional integrity criterion, having consistent access control is
one of the reasons to have all your production data in the database
system and nowhere else.

Of course, this is an ideal state, and we all have to break it once in a
while. But hence the honest question: how often does that really happen,
to what extent, and does that justify the significant investment that is
being proposed here?

> In recent years, web application flaws have been a nightmare for us.
> A recent report said web application flaws accounted for 95% of
> significant incidents in the first half of 2008 in Japan, and 75% of
> those were SQL injection.
> My ultimate goal is to help the situation with mandatory access
> control and a least-privilege set for the whole LAPP stack.

As I had previously mentioned, we have to distinguish these two goals:
consistent access controls and mandatory access controls.

Then, how does MAC help with SQL injections? Using the existing
role-based system you can already define least-privilege users that are
essentially powerless even if SQL injections were to happen. I am not
aware that important flaws or gaps in our role-based access control
system have been pointed out that would make it impossible to create
applications with security levels similar to those achievable with a MAC
system.

Now, if you come back to your original goal of consistency in access
control, then it may in fact turn out that an
FLASK/SELinux/SE-PostgreSQL-based system is the best common ground for
such a system, but I am unconvinced that MAC by itself is necessary.



Re: [GENERAL] database question

On Mon, 29 Sep 2008, john.crawford@sirsidynix.com wrote:

> What are these files and why have they suddenly started to be created
> and why so large?

They're the contents of the database, and a new one gets created every
time there is another 1GB worth of data in there. Note that the database
will use more space if data is being UPDATEd and you don't vacuum it
regularly; without vacuuming, each UPDATE effectively adds a new row
instead of reusing the space from the old one.

While it's possible to decode what those files are by using oid2name or
pg_class, what you probably want to know instead is what the big tables
and indexes in your database are to figure out what is gobbling space.
The script at http://wiki.postgresql.org/wiki/Disk_Usage will give you
that.
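A simplified sketch of the kind of query that script runs, listing the ten largest relations in the current database:

```sql
-- Largest tables and indexes first; sizes shown human-readable.
SELECT relname,
       pg_size_pretty(pg_relation_size(oid)) AS size
FROM pg_class
ORDER BY pg_relation_size(oid) DESC
LIMIT 10;
```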

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD


Re: [GENERAL] Sample databases

Tommy Gibbons wrote:

>I would like some pointers as to how to install the dbsamples so that I can use them in Postgres. These .tar.gz files seem to contain *.sql files. These seem to be text files, but I do not understand how to import them into Postgres or whether there is some command-line command to run.

> These samples are on http://www.postgresql.org/ftp/projects/pgFoundry/dbsamples/

 

You can use psql to import .sql files. Once you're logged in to psql, you can use \i <filename> to import the files. Have a look at the other options too with \?

You may wish to create a database before importing the files, see http://www.postgresql.org/docs/8.3/interactive/sql-createdatabase.html

 

David.

Re: [GENERAL] Sample databases

On 29/09/2008 23:03, Tommy Gibbons wrote:

> I would like some pointers as to how to install the dbsamples so that I
> can use them in Postgres. These .tar.gz files seem to contain *.sql
> files. These seem to be text files but I do not understand how to
> import to postgres or if there is some command line command to run.

You use psql to load them:

psql -U <user> -f <file> <database>


Ray.


------------------------------------------------------------------
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
rod@iol.ie
Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals
------------------------------------------------------------------


[GENERAL] Sample databases

Hi,
I would like some pointers as to how to install the dbsamples so that I can use them in Postgres. These .tar.gz files seem to contain *.sql files. These seem to be text files, but I do not understand how to import them into Postgres or whether there is some command-line command to run.

These samples are on http://www.postgresql.org/ftp/projects/pgFoundry/dbsamples/

Tommy.

Re: [GENERAL] subquery in FROM must have an alias

Thanks to Stephan and Hubert for their replies. Using your answers I
was able to solve the problem. It turned out that it's a natural join
that I wanted.

Thanks for quick help,
Ashutosh

On Sun, Sep 28, 2008 at 10:18, Stephan Szabo
<sszabo@megazone.bigpanda.com> wrote:
> On Sun, 28 Sep 2008, Ashutosh Chauhan wrote:
>
>> Hi all,
>>
>> This has been asked before and answered as well.
>> http://archives.postgresql.org/pgsql-sql/2007-12/msg00002.php but I
>> still cant figure out why postgres throws this error message even when
>> I have provided the aliases. My query:
>>
>> select a,b
>> from (billing.item JOIN (
>> select *
>> from ( billing.invoice JOIN billing.customer
>> on (id_customer_shipped = customer_uid and
>> address = 'pgh' ))
>> as temp2 ))
>> as temp;
>>
>> I have two from clauses so I have provided two corresponding alias
>> names for those two from clauses.
>
> If you break the above down a bit, you have:
>
> select a,b
> from
> (
> billing.item join
> (select * from
> (
> billing.invoice join
> billing.customer
> on (id_customer_shipped = customer_uid and address='pgh')
> )
> as temp2
> )
> )
> as temp;
>
> What the system is complaining about is the subselect (select * from ... )
> not having an alias. You've aliased the billing.invoice join
> billing.customer one and (billing.item join (...)) one, but not the
> subselect. In fact, I believe the two aliases you're using aren't strictly
> necessary. Also, the above appears to be missing the condition for the
> outermost join.
>
> Maybe something like the following will work with a filled in on
> condition:
>
> select a,b
> from
> (
> billing.item join
> (select * from
> (
> billing.invoice join
> billing.customer
> on (id_customer_shipped = customer_uid and address='pgh')
> )
> )
> as temp
> on (...)
> )
>
>
>
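To make the rule concrete, here is a minimal sketch with hypothetical tables t1 and t2: a parenthesized JOIN needs no alias, but a subselect in FROM always does:

```sql
-- Fails with "subquery in FROM must have an alias":
--   SELECT a FROM (SELECT * FROM t1 JOIN t2 ON t1.id = t2.id);

-- Works: the subselect is aliased; the bare JOIN inside it needs no alias.
SELECT a
FROM (SELECT * FROM t1 JOIN t2 ON t1.id = t2.id) AS temp;
```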


Re: [GENERAL] PostgreSQL Cache

On Mon, 29 Sep 2008, Sam Mason wrote:

> On Mon, Sep 29, 2008 at 02:55:52AM -0400, Greg Smith wrote:
>> 3) sudo echo 3 > /proc/sys/vm/drop_caches
>
> I'm not sure about the rest, but shouldn't this be:
> echo 3 | sudo tee /proc/sys/vm/drop_caches

I couldn't think of any reason to actually include the tee in there, and
just optimized the display of the "3" away as script noise.

> As an aside, it would be nicer if there was a more appropriate program
> than tee, but I've yet to find one.

What are you trying to accomplish here that tee isn't quite right for?

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD


Re: [GENERAL] pg_start_backup() takes too long

On Mon, 29 Sep 2008, Simon Riggs wrote:

> I'm surprised that checkpoint smoothing moves slowly even when it has so
> little to do. ISTM checkpoint completion target should set its write
> rate according to the thought that if shared_buffers were all dirty it
> would write them out in checkpoint_timeout *
> checkpoint_completion_target seconds. However, what it does is write
> them *all* out in that time, no matter how many dirty blocks there are.
> If there is just a few blocks to write, we take the *same* time to write
> them as if it were all dirty.

The checkpoint smoothing code that made it into 8.3 missed a couple of
nice-to-have features that just didn't make the schedule cut-off.
Enforcing a minimum rate was one; another was smoothing fsync calls.

Back when we were talking about the patch to sort writes at checkpoint
time, someone (I think you, actually) commented that it might be worthwhile
to create some sort of hook for making the behavior of checkpoint-time dirty
buffer processing easy to change with a custom strategy. The sorted
behavior would then be the first such strategy available. Another one I
was thinking of was something that specified min+max write rates, which
would make this problem go away--it might even auto-tune checkpoint_segments
or replace it altogether with an implementation based on those inputs.

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD


Re: [BUGS] ERROR: unexpected data beyond EOF in block XXXXX of relation "file"

David Fetter wrote:
> On Sun, Sep 28, 2008 at 11:51:49AM -0700, austijc wrote:
>> That's going to be a problem for the continued viability of
>> Postgres.
>
> Funny, I thought running a DBMS over a known-unreliable storage system
> was a problem for the continued viability of Oracle. When, not if,
> people lose enough data to this silliness, they'll be thinking hard
> about how to get Oracle out and something reliable in.

NFS is not "unreliable"; it is just different in some respects from
other file systems. That, paired with some poor NFS implementations in
certain operating systems and this evident general misunderstanding,
makes it a poor fit for PostgreSQL.



Re: [GENERAL] [HACKERS] PostgreSQL future ideas

2008/9/27 Douglas McNaught <doug@mcnaught.org>:
> On Sat, Sep 27, 2008 at 12:13 PM, Mark Mielke <mark@mark.mielke.cc> wrote:
>> If
>> some parts of PostgreSQL are not performance bottlenecks, and they are
>> extremely complicated to write in C, and very easy to write in something
>> else common and simple (I've never used Lua myself), I imagine it would be
>> acceptable to the community.
>
> As long as they can expose their interfaces using the standard PG
> function call interface, and use the documented SPI mechanism to talk
> to the rest of the back end. Stuff that hooks into undocumented or
> unstable parts of the code would be much less viable.
>
> -Doug
>

I think that C is the best language for PostgreSQL. C++ has a few
features that are not good for performance.
Why do people want to make the code more understandable by changing the
language? Simplifying the documentation for programmers -- that was the
idea in the beginning.


Re: [pgsql-es-ayuda] PostgreSQL Spanish Documentation Project

Here in Buenos Aires it was impossible for me, for the simple reason that at that hour I was asleep....
Take the plunge, folks! Hold it at 13:00 UTC at least! It was 7 in the
morning here... heh

[pgeu-general] Fw: [Cert] Announcing PostgreSQL Certification Project Job Task Analysis Survey

An FYI for our friends across the pond!

Begin forwarded message:

Date: Mon, 29 Sep 2008 11:19:05 -0700
From: "Gregory S. Youngblood" <greg@tcscs.com>
To: <cert@lists.postgresqlcertification.org>
Subject: [Cert] Announcing PostgreSQL Certification Project Job Task
Analysis Survey


Hello!

The PostgreSQL Community is working to create a community driven and
endorsed PostgreSQL Certification. This effort is spearheaded by the
PostgreSQL Certification Project
(http://www.postgresqlcertification.org). The primary focus of the
project is the development and support of a series of tiered
certifications that provide a standardized method of identifying the
knowledge and skills of database professionals working with PostgreSQL.


At this time, the PostgreSQL Certification Project is pleased to
announce the availability of a Job Task Analysis survey that will help
shape the content and form the initial certifications will take. This
survey is publicly available and all are encouraged to participate.

To take the survey, please register at:

http://www.postgresqlcertification.org/jta


The registration process takes just a couple of minutes. Once
registration is complete and you are logged in, click JTA in the menu
followed by "participate in the survey"
(http://www.postgresqlcertification.org/job_task_analysis). The survey
takes approximately 30 minutes to complete.


We ask everyone to please take the time to complete the survey and to
refer an employer, manager, colleague, or anyone else that may rely on a
PostgreSQL database in some manner to take it as well. With the help of
the community we will be able to reach as wide an audience as possible.


If you are interested in joining the PostgreSQL Certification Project,
please visit
http://lists.postgresqlcertification.org/mailman/listinfo/cert/ and
subscribe to the mailing list after taking the survey.

--
The PostgreSQL Company since 1997: http://www.commandprompt.com/
PostgreSQL Community Conference: http://www.postgresqlconference.org/
United States PostgreSQL Association: http://www.postgresql.us/

Fwd: [pgsql-es-ayuda] Data type for storing currency

---------- Forwarded message ----------
From: el-PRiNCiPiTo <el-PRiNCiPiTo@terra.es>
Date: 2008/9/29
Subject: Re: [pgsql-es-ayuda] Data type for storing currency
To:
Cc: pgsql-es-ayuda@postgresql.org


I am using Npgsql.

postgres Emanuel CALVO FRANCO wrote:
>
> Are you using ODBC or OLEDB?
>

Postgresql Type   NpgsqlDbType   System.DbType Enum   .Net System Type
(..)
money             Money          Decimal              Decimal

pgsql supports money as decimal; check whether the conversion in your
language is casting it to integer or to numeric(n,0).
This is from the documentation for 2.0; I don't know which version of
the connectors you are using.
http://npgsql.projects.postgresql.org/docs/manual/UserManual.html

Re: [pgsql-es-ayuda] Syntax??: DROP TABLE IF EXISTS tablename

Fernando Flores Barraza wrote:

> Which gives me the following error:
> ERROR: syntax error at or near "EXISTS" at character 15
> LINE 1: DROP TABLE IF EXISTS tablename;
>
>
> The IF EXISTS expression is just there to avoid an error being
> returned, and the manual shows it as working. Does anyone know what is
> going on?

You are reading a manual that does not correspond to your version of
Postgres. IF EXISTS only works from 8.2 onwards.

--
Alvaro Herrera http://www.amazon.com/gp/registry/3BP7BYG9PUGI8
"Women are like slingshots: the more resistance they have,
the farther you can get with them" (Jonas Nightingale, Leap of Faith)

RE: [pgsql-es-ayuda] Syntax??: DROP TABLE IF EXISTS tablename

-----Original Message-----
From: Fernando Flores Barraza [mailto:fernafb@gmail.com]
Sent: Monday, September 29, 2008 03:44 p.m.
To: pgsql-es-ayuda@postgresql.org.
Subject: [pgsql-es-ayuda] Syntax??: DROP TABLE IF EXISTS tablename


Hello, I have been subscribed to this list for a week and it is quite
useful because of how quickly questions get answered.

Let's get to the point. I have a question about this syntax:
DROP TABLE IF EXISTS tablename;

Which gives me the following error:
ERROR: syntax error at or near "EXISTS" at character 15
LINE 1: DROP TABLE IF EXISTS tablename;


The IF EXISTS expression is just there to avoid an error being
returned, and the manual shows it as working. Does anyone know what is
going on?

( If this question has come up before, excuse the redundancy.. )

Thanks

Fernando Flores B.
Coquimbo/Santiago - Chile

THIS IS THE SYNTAX

DROP TABLE IF EXISTS table1


Miguel Canchas

[pgsql-es-ayuda] Syntax??: DROP TABLE IF EXISTS tablename

Hello, I have been subscribed to this list for a week and it is quite
useful because of how quickly questions get answered.

Let's get to the point. I have a question about this syntax:
DROP TABLE IF EXISTS tablename;

Which gives me the following error:
ERROR: syntax error at or near "EXISTS" at character 15
LINE 1: DROP TABLE IF EXISTS tablename;


The IF EXISTS expression is just there to avoid an error being
returned, and the manual shows it as working. Does anyone know what is
going on?

( If this question has come up before, excuse the redundancy.. )

Thanks

Fernando Flores B.
Coquimbo/Santiago - Chile

Re: [GENERAL] Counting unique rows as an aggregate.

On Mon, Sep 29, 2008 at 12:12 PM, r_musta <zepolen@gmail.com> wrote:
> However, this is starting to become too slow (as there are about 10 of
> these queries), and therefore I need to write an aggregate function
> which lets me do:
>
>>SELECT count_unique(make), count_unique(color) from table WHERE >criteria<;
>
> After reading about aggregate functions, this should be possible, as
> long as I can use a dictionary/hashmap type for the state<STYPE>
> argument.

This might be a nice fit for materialized views. While they're not
built in, PostgreSQL's extensibility allows you to build them pretty
easily.

http://jonathangardner.net/tech/w/PostgreSQL/Materialized_Views

Jonathan Gardner's web page on it is fantastic.
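As an aside, unless a reusable aggregate is really required, the built-in count(DISTINCT ...) already collapses the ten per-column queries into one pass. A sketch, with cars and the WHERE clause standing in for the original table and criteria:

```sql
-- One scan of the table, counting distinct values for each column.
SELECT count(DISTINCT make)  AS unique_makes,
       count(DISTINCT color) AS unique_colors
FROM   cars
WHERE  price < 10000;
```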


Re: [GENERAL] database question

On Mon, Sep 29, 2008 at 8:21 AM, <john.crawford@sirsidynix.com> wrote:
> Hi all I have been experiencing some strange behaviour on my postgres
> DB. I am VERY new to PG so bear with me as what I am going to ask is
> all probably very basic to you guys.
> First off over the last couple of weeks we have been seeing in the
> dir /var/lib/pgsql/data/base/16450 some large file creations, so for
> example
>
> -rw------- 1 postgres postgres 1073741824 Sep 29 15:15 2683
> -rw------- 1 postgres root 1073741824 Sep 29 15:15 2613.77
> -rw------- 1 postgres root 1073741824 Sep 29 15:15 2613.83
>
> What are these files and why have they suddenly started to be created
> and why so large?

PostgreSQL automatically splits table files into 1G chunks so it can
run on OSes with file size limits. These are part of the table
identified by the oid 2613. You can find it by looking in pg_class.
Run psql -E and do \d and you'll see the queries that psql uses to
create its output, and you can muck about with them to see which are
which.

Also, the contrib module oid2name will tell you these things from the
shell / CLI.


Re: [JDBC] [GENERAL] need help of getting PK after insertRow in JDBC

Good Afternoon

add the CLOSE_CURSORS_AT_COMMIT option to the statement's capabilities, e.g.
    Statement st = db.createStatement(ResultSet.TYPE_SCROLL_SENSITIVE, ResultSet.CONCUR_UPDATABLE, ResultSet.CLOSE_CURSORS_AT_COMMIT);

//then tell the connection handle to commit the DML to the DB
db.commit();

Martin

Control your own destiny or someone else will -- Jack Welch



Subject: [GENERAL] need help of getting PK after insertRow in JDBC
Date: Fri, 26 Sep 2008 10:14:01 +0800
From: Dongdong.Chen@ge.com
To: pgsql-general@postgresql.org; pgsql-jdbc@postgresql.org

Hi:
    I am a software engineer from GE. I am using JDBC to operate PostgreSQL 8.3 on Ubuntu 8.04. The development environment is Eclipse 3.2. My problem is:
    There is a PostgreSQL table XX containing 5 fields: AA, BB, CC, DD, EE. AA is the primary key, of an auto-generated type; BB, CC, DD and EE are string types.
    I want to get the value of AA immediately after inserting a row into the table. The code is like this:
 
    Statement st = db.createStatement(ResultSet.TYPE_SCROLL_SENSITIVE, ResultSet.CONCUR_UPDATABLE);
    ResultSet rs=st.executeQuery("SELECT * FROM XX");
    rs.moveToInsertRow();
    rs.updateString(BB, "b");
    rs.updateString(CC, "c");
    rs.updateString(DD, "d");
    rs.updateString(EE, "e");
    rs.insertRow();
    rs.moveToCurrentRow();
    int index = rs.getInt("AA");
    System.out.println(index);
 
    In the last statement I always get 0 no matter how many records I insert. I tried other ways of moving the cursor, including next(), last() and so on, and still cannot get the correct value. I tried both the postgresql-8.3-603.jdbc3.jar and postgresql-8.3-603.jdbc4.jar drivers.
 
    But when I use pgAdmin III to check the table XX, the AA field is already auto-generated with the correct value.
 
    I found a way to solve this: close the ResultSet and Statement after moveToCurrentRow(), re-open them, call rs.last(), and then run int index = rs.getInt("AA"); this returns the correct value. I think this method is rather awkward; does anyone know a better way and the correct operations?
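As an aside, on PostgreSQL 8.2 and later a simpler route is to skip the updatable ResultSet entirely and let the INSERT hand back the generated key via RETURNING; from JDBC this can be run with executeQuery() and the key read from the returned ResultSet. A sketch against the table described above:

```sql
-- Returns the auto-generated primary key directly; no second query needed.
INSERT INTO xx (bb, cc, dd, ee)
VALUES ('b', 'c', 'd', 'e')
RETURNING aa;
```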
 
I am not sure it is proper to send this mail to this mailing list. Sorry for any inconvenience.
Thanks a lot!
 
Best Regards
 
Kevin Chen/ChenDongdong
+8613810644051
 
 



Re: [GENERAL] Re: Is there a parameter to tell postgresql to not attempt to open an IPv6 socket?

On Mon, Sep 29, 2008 at 7:41 AM, <yann.dubost@gmail.com> wrote:
> On 16 sep, 23:04, a...@commandprompt.com (Andrew Sullivan) wrote:
>> On Tue, Sep 16, 2008 at 04:46:37PM -0400, Reid.Thompson wrote:
>> > PostgreSQL appears to all intents to startup fine on the UNIX and IPv4
>> > sockets. Is there a parameter to tell postgresql to not attempt to open
>> > an IPv6 socket?
>>
>> Specify the specific TCP/IP interfaces in the postmaster.conf file.
>> Otherwise, Postgres will try to bind to all the sockets. There's
>> something hinkey about the IPv6 support in AIX, IIRC, so that you end
>> up with this symptom.
>>
>
> Hello,
>
> I have the same problem. I have looked for a postmaster.conf file, but
> there is none on the server.
> Apart from the doc and src files, the only files on my server
> containing the word "postmaster" are :
> /usr/local/pgsql/bin/postmaster
> $PGDATA/postmaster.opts
> $PGDATA/postmaster.pid
>
> Do you have an idea why, and how to solve this problem?

Look for postgresql.conf
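The relevant setting is listen_addresses in postgresql.conf. A sketch that binds only the IPv4 loopback interface, so no IPv6 socket is opened (adjust the address list for your environment):

```
# postgresql.conf -- bind only the IPv4 loopback address instead of
# every available interface (which could include IPv6 where supported).
listen_addresses = '127.0.0.1'
```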


Re: [HACKERS] parallel pg_restore - WIP patch

Tom Lane wrote:
> Andrew Dunstan <andrew@dunslane.net> writes:
>> Tom Lane wrote:
>>> Um, FKs could conflict with each other too, so that by itself isn't
>>> gonna fix anything.
>
>> Good point. Looks like we'll need to make a list of "can't run in
>> parallel with" items as well as strict dependencies.
>
> Yeah, I was just thinking about that. The current archive format
> doesn't really carry enough information for this. I think there
> are two basic solutions we could adopt:
>
> * Extend the archive format to provide some indication that "restoring
> this object requires exclusive access to these dependencies".
>
> * Hardwire knowledge into pg_restore that certain types of objects
> require exclusive access to their dependencies.
>
> The former seems more flexible, as well as more in tune with the basic
> design assumption that pg_restore shouldn't have a lot of knowledge
> about individual archive object types. But it would mean that you
> couldn't use parallel restore with any pre-8.4 dumps. In the long run
> that's no big deal, but in the short run it's annoying.

Hmm, I'm not sure how much of a problem that really is; we usually recommend
using the pg_dump version of the target database anyway.


Stefan

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[HACKERS] pg_upgrade performance test

I ran a performance test on the in-place upgrade patch prototype that I sent
for review, and I got a nice result:

Original:
---------
MQThL (Maximum Qualified Throughput LIGHT): 2202.12 tpm
MQThM (Maximum Qualified Throughput MEDIUM): 4706.60 tpm
MQThH (Maximum Qualified Throughput HEAVY): 3956.64 tpm


TRANSACTION MIX

Total number of transactions = 330457
TYPE TX. COUNT MIX
---- --------- ---
Light: 55053 16.66%
Medium: 117665 35.61%
DSS: 36825 11.14%
Heavy: 98916 29.93%
Connection: 21998 6.66%


RESPONSE TIMES AVG. MAX. 90TH

Light 0.093 1.080 0.400
Medium 0.096 1.663 0.400
DSS 0.092 0.990 0.020
Heavy 0.095 1.262 2.000
Connections 0.092 1.030 0.400
Number of users = 60
Sum of Avg. RT * TPS for all Tx. Types = 17.154543

pg_upgrade:
-----------
MQThL (Maximum Qualified Throughput LIGHT): 2185.16 tpm
MQThM (Maximum Qualified Throughput MEDIUM): 4674.04 tpm
MQThH (Maximum Qualified Throughput HEAVY): 3915.28 tpm


TRANSACTION MIX

Total number of transactions = 327811
TYPE TX. COUNT MIX
---- --------- ---
Light: 54629 16.66%
Medium: 116851 35.65%
DSS: 36367 11.09%
Heavy: 97882 29.86%
Connection: 22082 6.74%


RESPONSE TIMES AVG. MAX. 90TH

Light 0.095 1.073 0.400
Medium 0.098 1.167 0.400
DSS 0.093 1.049 0.010
Heavy 0.098 1.173 2.000
Connections 0.097 1.115 0.400
Number of users = 60
Sum of Avg. RT * TPS for all Tx. Types = 17.385362

--------------------------

It looks like there is only a 1% performance gap, and it is worth mentioning that
this is not an optimized version. It would be good if somebody could run a
different performance test on it and verify my results.

I used the iGen OLTP test with 60 concurrent users and ran it for 30 minutes.


Zdenek


--
Zdenek Kotala Sun Microsystems
Prague, Czech Republic http://sun.com/postgresql


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[GENERAL] Can't cast from char to integer...

Hi all.

I'm trying to cut data from one table and put it in another table. The
problem comes from the fact that the first table has a field that is a
char(x) and the destination table needs that data in an integer.

For example, I'm trying to do this:

insert into data
select cast('666' as integer) as block_number, phone as phone_number, name
from demo_q;

The data table has a field called block_number that is an integer. I'm trying
to populate that field with the INTEGER 666. (I use 666 for testing since I
would never assign that number/id to an actual customer.)

When I run this query, I get:

ERROR: column "block_number" is of type integer but expression is of type
character varying

What am I doing wrong?
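For what it's worth, the literal cast in the example already yields an integer, so the varchar expression the error reports is probably a different column lining up with block_number positionally. A hedged sketch (column names assumed) that spells out the column list and casts the char(x) source column:

```sql
-- Hypothetical column names. Naming the target columns makes each
-- SELECT expression line up with the intended column; trim() strips
-- the blank padding that a char(x) column carries before the cast.
INSERT INTO data (block_number, phone_number, name)
SELECT CAST(trim(block_char) AS integer), phone, name
FROM demo_q;
```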

TIA,
--
Mike Diehl

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [lapug] Conference call re: LAPUG Meeting

On Mon, Sep 29, 2008 at 11:15 AM, Beth Kreitzer <bkreitzer@greenplum.com> wrote:

> Are you still available this morning to speak with us about the upcoming
> LAPUG meeting? I had this call scheduled for 11am this morning.
Sorry about that, Beth. I spoke with Rui and touched base:

To review the points: I would expect about 12 to 15 people to show,
since there is a lot of interest in a presentation from Greenplum.

The meeting will be held at Caltech in mid-November. (I will follow up
with more details at a later date.) Would it be possible to hold the
meeting on a Wednesday evening rather than a Friday, since many
LAPUGers are not available on Friday evenings? (LAPUGers: is
Wednesday okay? Please let me know so that we can finalize a date.)

This is the email thread listing the points desired for discussion.
http://archives.postgresql.org/lapug/2008-07/msg00010.php


Also, I've cc'd the LAPUG users group so that they can weigh in on
any further points that they would like to have brought out.

--
Regards,
Richard Broersma Jr.

Visit the Los Angeles PostgreSQL Users Group (LAPUG)
http://pugs.postgresql.org/lapug

--
Sent via lapug mailing list (lapug@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/lapug

[pdxpug] [JOB] Fwd: PostgreSQL DBA job - contract to hire

This just in. Contact me if you're interested.  Via a recruiter, and I don't know which company this actually is.

---------- Forwarded message ----------
Date: Tue, Sep 23, 2008 at 3:27 PM
Subject: PostgreSQL DBA job - contract to hire
To: selenamarie@gmail.com

 Here is a copy of the job description:

 

Job Description:

Title: PostgreSQL DBA

Duration: 6 Months - FTE

Location: Lake Oswego, OR

Minimum 5 years of progressive experience performing PostgreSQL administration in a production environment; experience should include three or more years of database administration in an enterprise-level environment with very large (> 1 terabyte) databases

Responsibilities:

Collaborate with others to determine and deploy database solutions to business prioritized requirements, including business continuance and information security along with functional requirements

Install, configure, and test new PostgreSQL databases, perform version upgrades of existing databases

Install, upgrade and maintain all PostgreSQL-related server and administration software

Perform DB profiling and optimization; analyze and propose schema changes to PostgreSQL databases as needed

Proactively monitor activity, utilization, exceptions and database health; propose and deploy appropriate improvements; automate repetitive tasks

Collaborate with team members to create, publish and maintain database deployment methods and procedures for PostgreSQL databases

Collaborate with the Infrastructure Architect to create and maintain database technology roadmaps and deployment plans

Perform database backup and recovery duties; establish standards and schedules for database backups; develop and routinely test recovery procedures for each database; ensure that backup schedules meet the recovery requirements and conform to all database and data quality policies and standards

Work closely with project teams to ensure project success; coordinate product releases, data structure evolution, and manage and synchronize data promotions between development, test and production environments

Act as the final point of escalation and resolution for data related issues

Provide advanced technical support for database developers

Provide 24x7 database support as part of a rotation of DBAs

Design, implement and maintain archival and high availability strategies

Skills and abilities

Exceptional knowledge of the PostgreSQL database platform

Mastery of physical database design

Expertise in configuring, implementing and supporting high availability strategies with database systems including clustering, and replication

Mastery of ANSI SQL and/or PL-SQL

Experience in software development using Perl, Shell, or other scripting languages

Solution focused and methodical, careful attention to detail

Solid background using UNIX operating systems – Ubuntu Linux administration a plus

Experience with SQL Server and/or Oracle database platforms is desirable

Ability to perform capacity planning which supports the forecasting and purchasing processes

Ability to implement Database security safeguards and audit them

Consistently learning about trends and features in DB solutions, able to bring best practices to bear at solving DB and overall application problems

Ability to work in a high availability production environment where unplanned downtime is not acceptable

Ability to communicate ideas and designs to all stakeholders, including non-technical audiences


--
Selena Deckelmann
PDXPUG - http://pugs.postgresql.org/pdx
Me - http://www.chesnok.com/daily

Re: [ADMIN] PID file

On Mon, Sep 29, 2008 at 12:43:54PM -0500, Ing. Jorge S Alanís Garza wrote:
> shutting down cleanly. Is there a way to recover the non-working postgres
> instance? Is this a very corruption-prone environment?

It's sure corruption-prone if you delete the pidfile.

If your iSCSI system keeps dropping out on you, then you need to fix
that. Otherwise, things are going to break in a way you'll be unhappy
with later.

A

--
Andrew Sullivan
ajs@commandprompt.com
+1 503 667 4564 x104
http://www.commandprompt.com/

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin

Re: [GENERAL] inserting to a multi-table view

On Sun, Sep 28, 2008 at 9:57 PM, Seb <spluque@gmail.com> wrote:

>> Well, I was able to get PostgreSQL Update-able views to work nearly as
>> well as the update-able queries did in Access.
> Would you mind sharing a sample schema?

I'll see what I can do. I did post some sample schemas a while back,
but I can't seem to find them in the archive. I do have some
production tables and views, but there are too many unrelated
attributes to make a useful example.

>> As a side note, you'll notice that MS-Access will not allow
>> update-able queries based on ODBC linked table like it does on its
>> native tables for this reason.
>
> That's right, I did find that out once but didn't know whether it was
> due to ODBC limitations or something else.

MS-Access implements optimistic locking with all ODBC data sources.
The basic difference with an ODBC data source is that MS-Access's Jet
engine can't put a file lock on it like it can with other file-based
formats such as Access, Excel, and flat files.

Optimistic locking means that every time Access issues an update to
the ODBC server, it includes all the old values of the record (that
MS-Access is aware of) in the update statement's WHERE clause. So if
your MS-Access client was only aware of a stale version of the record,
the update count will be zero, and Access will throw an exception
saying that the update could not be completed because the underlying
table was changed by another user.
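As an illustration (table and column names here are made up), the statement Jet emits under optimistic locking looks roughly like this:

```sql
-- Every old value the client last fetched is repeated in the WHERE
-- clause; if another user changed the row in the meantime, zero rows
-- match and Access reports the underlying table was changed.
UPDATE customers
   SET name  = 'New Name',
       phone = '555-0100'
 WHERE id    = 42
   AND name  = 'Old Name'      -- values as last seen by this client
   AND phone = '555-0199';
```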


>> Natural Primary key/Foreign key CASCADE UPDATEs don't work well with
>> update-able views. Choose one or the other.
>
> Not sure what you mean; can you please tell more about what doesn't work
> well with update-able views what the choice is?

This one is kind of hard to explain. Basically it deals with the
order of operations between referential-integrity updates and client-side
or updateable-view updates.

Let's say your client app or updateable view wants to update a single
row in a view. However, in addition to changing the fields from each
table, you also want to update the natural primary key. This sounds
simple, but it's not.

1) The record changes made in the client application are not instantly
committed and refreshed for each field change that the user makes.
Basically, the whole row is updated with a single UPDATE statement when
the user commits the change.

2) The view redirects the update statement to its underlying rules
(usually one rule for each joined table's update). First, the
primary table's fields (let's call it tableA) are changed by the rule's
update, including its natural primary key. Let's say the primary key
was changed from 'OLDVALUE' to 'NEWVALUE' on tableA.

3) Now consider tableB, which has a foreign key referencing tableA
with ON UPDATE CASCADE. Declarative referential
integrity (DRI) such as ON UPDATE CASCADE has a much higher priority
than the PostgreSQL rule system. So before the rule system can
perform updates on tableB, tableB has its foreign key changed from
'OLDVALUE' to 'NEWVALUE'.

4) Here is where the problem occurs. When the next updateable-view
rule is executed to update tableB, its WHERE clause still thinks that
tableB's foreign key is 'OLDVALUE'. And because 'OLDVALUE' is in the
WHERE clause of the rule's update statement instead of 'NEWVALUE', no
record is found to match, and so the remaining field updates fail.

So the end result is that all of tableA's updates are successful,
tableB's foreign key is updated by DRI, but the rest of the field
updates are not. So you are left with an inconsistent update from the
perspective of the view. By the way, this really confuses MS-Access;
it doesn't know what to do when this happens.
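A minimal sketch of the failing sequence described in steps 2-4 above (hypothetical tables; the rule is simplified to the two updates that matter):

```sql
CREATE TABLE tableA (pk text PRIMARY KEY, a_val text);
CREATE TABLE tableB (fk text REFERENCES tableA (pk) ON UPDATE CASCADE,
                     b_val text);

CREATE VIEW ab AS
  SELECT a.pk, a.a_val, b.b_val
    FROM tableA a JOIN tableB b ON b.fk = a.pk;

CREATE RULE ab_update AS ON UPDATE TO ab DO INSTEAD (
  UPDATE tableA SET pk = NEW.pk, a_val = NEW.a_val WHERE pk = OLD.pk;
  -- By the time this second statement runs, the CASCADE has already
  -- rewritten tableB.fk to NEW.pk, so "fk = OLD.pk" matches nothing
  -- and b_val is silently left unchanged.
  UPDATE tableB SET b_val = NEW.b_val WHERE fk = OLD.pk
);
```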

That's why I said that "Natural Primary key/Foreign key CASCADE
UPDATEs don't work well with update-able views."


Also, if you have concurrent users on the same updateable view, update
anomalies like this can still occur just from problems with concurrent
user updates. That's the reason I decided to abandon joined-table
updateable views.


--
Regards,
Richard Broersma Jr.

Visit the Los Angeles PostgreSQL Users Group (LAPUG)
http://pugs.postgresql.org/lapug

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

[pgsql-advocacy] Announcing PostgreSQL Certification Project Job Task Analysis Survey

Hello!

 

The PostgreSQL Community is working to create a community driven and endorsed PostgreSQL Certification. This effort is spearheaded by the PostgreSQL Certification Project (http://www.postgresqlcertification.org). The primary focus of the project is the development and support of a series of tiered certifications that provide a standardized method of identifying the knowledge and skills of database professionals working with PostgreSQL.

 

At this time, the PostgreSQL Certification Project is pleased to announce the availability of a Job Task Analysis survey that will help shape the content and form the initial certifications will take. This survey is publicly available and all are encouraged to participate.

 

To take the survey, please register at:

http://www.postgresqlcertification.org/jta

 

The registration process takes just a couple of minutes. Once registration is complete and you are logged in, click JTA in the menu followed by “participate in the survey” (http://www.postgresqlcertification.org/job_task_analysis). The survey takes approximately 30 minutes to complete.

 

We ask everyone to please take the time to complete the survey and to refer an employer, manager, colleague, or anyone else that may rely on a PostgreSQL database in some manner to take it as well. With the help of the community we will be able to reach as wide an audience as possible.

 

If you are interested in joining the PostgreSQL Certification Project, please visit http://lists.postgresqlcertification.org/mailman/listinfo/cert/ and subscribe to the mailing list after taking the survey.

 

 

[GENERAL] Multiple querys

Hi everyone,

    I have multiple queries, and I'm trying to optimize them by creating a temporary table x (which contains the field id_product).

The queries that use the temp table x must follow the order of the rows in x.
My problem: I have a query that does a LIMIT and OFFSET on table x, and I would like this query to obey the order of x.

I thought of this solution: create a field order_id_product, and use ORDER BY order_id_product in the other queries.
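That idea can be sketched like this (table and column names assumed):

```sql
-- A serial column records insertion order, since rows in a table
-- otherwise have no inherent order.
CREATE TEMP TABLE x (
    order_id_product serial,
    id_product       integer
);

INSERT INTO x (id_product)
SELECT id_product FROM products ORDER BY price;  -- whatever order you need

-- Later queries can then page through x deterministically:
SELECT id_product
  FROM x
 ORDER BY order_id_product
 LIMIT 20 OFFSET 40;
```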

Can anyone help me?

Thanks a lot.

Re: [PERFORM] Identical DB's, different execution plans

Setting enable_nestloop = off did result in a hash join, so I also set enable_hashjoin = off.

The new plan from the slower db:
                                                                  QUERY PLAN                                                                  
-----------------------------------------------------------------------------------------------------------------------------------------------
 Merge Join  (cost=20195.54..46442.99 rows=7876 width=47) (actual time=136.531..478.708 rows=8437 loops=1)
   Merge Cond: ((t1.bn)::text = "inner"."?column3?")
   ->  Index Scan using t1_uc2 on t1  (cost=0.00..25604.74 rows=204906 width=32) (actual time=0.061..327.285 rows=8438 loops=1)
         Filter: active
   ->  Sort  (cost=20195.54..20222.79 rows=10898 width=22) (actual time=136.461..138.621 rows=12204 loops=1)
         Sort Key: (t2.sn)::text
         ->  Bitmap Heap Scan on t2  (cost=145.12..19464.74 rows=10898 width=22) (actual time=7.580..120.144 rows=12204 loops=1)
               Recheck Cond: (eff_dt = ('now'::text)::date)
               ->  Bitmap Index Scan on t2_nu1  (cost=0.00..142.40 rows=10898 width=0) (actual time=4.964..4.964 rows=24483 loops=1)
                     Index Cond: (eff_dt = ('now'::text)::date)
 Total runtime: 480.344 ms
(11 rows)

And the faster one:

                                                                         QUERY PLAN                                                                        
------------------------------------------------------------------------------------------------------------------------------------------------------------
 Merge Join  (cost=21371.63..21720.78 rows=7270 width=47) (actual time=60.435..80.604 rows=8437 loops=1)
   Merge Cond: ("outer"."?column6?" = "inner"."?column3?")
   ->  Sort  (cost=8988.56..9100.55 rows=44794 width=32) (actual time=30.498..33.093 rows=8438 loops=1)
         Sort Key: (t1.bn)::text
         ->  Seq Scan on t1  (cost=0.00..5528.00 rows=44794 width=32) (actual time=0.010..17.950 rows=8439 loops=1)
               Filter: active
   ->  Sort  (cost=12383.07..12409.32 rows=10500 width=22) (actual time=29.928..33.658 rows=12204 loops=1)
         Sort Key: (t2.sn)::text
         ->  Index Scan using t2_nu1 on t2  (cost=0.00..11681.77 rows=10500 width=22) (actual time=0.062..13.356 rows=12204 loops=1)
               Index Cond: (eff_dt = ('now'::text)::date)
 Total runtime: 83.054 ms
(11 rows)

And the query again:

explain analyze
select
    t1.bn,
    t2.mu,
    t1.nm,
    t1.root,
    t1.suffix,
    t1.type
from
     t1,
     t2
where
    t2.eff_dt = current_date
    and t1.active = true
    and t1.bn = t2.sn;

Thanks.

----- Original Message ----
From: Tom Lane <tgl@sss.pgh.pa.us>
To: Doug Eck <deck1@yahoo.com>
Cc: pgsql-performance@postgresql.org
Sent: Monday, September 29, 2008 11:42:01 AM
Subject: Re: [PERFORM] Identical DB's, different execution plans

Doug Eck <deck1@yahoo.com> writes:
> Any ideas as to what could the first db to opt for the slower subquery rather than the merge?

Not from the information given.  Presumably db1 thinks that the
mergejoin plan would be slower, but why it thinks that isn't clear yet.
Try setting enable_nestloop = off (and enable_hashjoin = off if it then
wants a hashjoin) and then post the EXPLAIN ANALYZE results.

            regards, tom lane

[GENERAL] Counting unique rows as an aggregate.

My current solution is to issue a bunch of queries:

> SELECT make, count(*) FROM table WHERE >criteria< GROUP BY make ORDER BY count(*) DESC LIMIT 3;
make count
----------+---------
audi | 50
bmw | 40
vw | 30

SELECT color, count(*) FROM table WHERE >criteria< GROUP BY color
ORDER BY count(*) DESC LIMIT 3;

color count
-----------+------
red | 400
blue | 200
green | 100

Which will give me the top 3 counts for each column I'm interested in
for the >criteria< specified.

However, this is starting to become too slow (as there are about 10 of
these queries), and therefore I need to write an aggregate function
which lets me do:

>SELECT count_unique(make), count_unique(color) from table WHERE >criteria<;

After reading about aggregate functions, this should be possible, as
long as I can use a dictionary/hashmap type for the state (STYPE)
argument.

Is there such a type in postgresql that can be used in an aggregate
function, and if there isn't, how would it be possible to make one?

Also, do you think I'm going about this the wrong way, and there is a
much better solution that I've neglected?
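One hedged alternative, short of writing a custom aggregate (table and column names assumed): since all ten queries share the same criteria, apply the filter once into a temp table and run the cheap GROUP BYs against that instead of re-evaluating the criteria every time:

```sql
-- Filter once; the criteria below is a stand-in for >criteria<.
CREATE TEMP TABLE matches AS
  SELECT make, color          -- plus the other columns of interest
    FROM cars
   WHERE price < 20000;

-- Each top-3 query now scans only the (presumably small) filtered set.
SELECT make, count(*)  FROM matches
 GROUP BY make  ORDER BY count(*) DESC LIMIT 3;

SELECT color, count(*) FROM matches
 GROUP BY color ORDER BY count(*) DESC LIMIT 3;
```

This helps most when the criteria are selective; if they match most of the table, the temp table buys little.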

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [torontopug] Ontario Linux Fest

I think that's a great idea, Steve, thanks for asking them to add us.

I should be able to make the conference, are there any others out
there that might want to make it?

~Ian Bailey

On Fri, Sep 26, 2008 at 7:52 PM, Steve Singer <ssinger_pg@sympatico.ca> wrote:
>
> This year's Ontario Linux fest will be held on Saturday, 25 October 2008 at
> Days Hotel and Conference Centre - Toronto Airport East. The Ontario Linux
> Fest is a regional conference on Linux and other open source projects. You
> can find more at http://www.onlinux.ca/
>
> They have room in the schedule to host a birds of feather session for
> PostgreSQL users (I don't have the exact time yet).
>
> Unless there is an objection I'm going to ask them to list the BoF as being
> hosted by the TorontoPUG (This was sort of discussed at the august meeting).
> I am also going to hope that a few others make it to the conference.
>
> Steve
>
>
>
>
>
> --
> Sent via torontopug mailing list (torontopug@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/torontopug
>

--
Sent via torontopug mailing list (torontopug@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/torontopug

Re: [NOVICE] plpgsql functions vs. embedded queries

I did neither but will try both, thank you.

The functions took on average more than 10 times as long.


-----Original Message-----
From: John DeSoi [mailto:desoi@pgedit.com]
Sent: Monday, September 29, 2008 1:16 PM
To: Wright, George
Cc: pgsql-novice@postgresql.org
Subject: Re: [NOVICE] plpgsql functions vs. embedded queries


On Sep 29, 2008, at 9:50 AM, Wright, George wrote:

> Are funtions in plpgsql always slower than embedding sql queries in
> string form in code like PHP?
> I ran several tests and they seemed to always perform that way. I
> would have thought the pre-planning would have made them faster.
> Would re-writing the functions C make them faster than the embedded
> queries?

I'd say no, they should not always be slower. Did you try preparing
your PHP call to the function? Did you mark your function as STABLE if
you are not modifying the database?
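For reference, a sketch of the STABLE suggestion (function and table names are made up):

```sql
-- Marking a read-only function STABLE tells the planner its result
-- won't change within a single statement, which enables better plans
-- than the default VOLATILE.
CREATE OR REPLACE FUNCTION get_price(p_id integer) RETURNS numeric AS $$
    SELECT price FROM products WHERE id = $1;
$$ LANGUAGE sql STABLE;
```

On the PHP side, pg_prepare()/pg_execute() let the same function call be planned once and reused across invocations.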


John DeSoi, Ph.D.

--
Sent via pgsql-novice mailing list (pgsql-novice@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-novice

[ADMIN] PID file

Hello,

 

We are operating PostgreSQL 8.2 in an iSCSI environment. Sometimes there are issues with the iSCSI, so PostgreSQL refuses to shut down properly, or to start, because of the PID file. My question is: what is the correct thing to do with this PID file? On some test environments I have deleted the PID file, and then when starting up I see PostgreSQL complain about the database not shutting down cleanly. Is there a way to recover the non-working PostgreSQL instance? Is this a very corruption-prone environment?

 

Thanks,

 

Jorge Santiago Alanís Garza
Innovación y Desarrollo
jorge.alanis@blocknetworks.com.mx

Tel: (81) 4444.4044
Cel: (811) 243-6570


www.blocknetworks.com.mx
Av. Lázaro Cárdenas 4000, L-17
Col. Valle de las Brisas
Monterrey, Nuevo León, CP 64790
Tel: +52 (81) 4444 4044 

 

[pgsql-www] pgsql-ports retirement party was [PORTS] libpq windows 64bit support

Is there a date for when pgsql-ports will be officially retired?

---------- Forwarded Message ----------

Subject: [PORTS] libpq windows 64bit support
Date: Monday 29 September 2008
From: "Pierre Joye" <pierre.php@gmail.com>
To: pgsql-ports@postgresql.org

hi,

Is there any plan to support windows 64bit? I tested the latest stable
release but it fails to build (I created my own configuration using
the existing sln).

Cheers,
--
Pierre

http://blog.thepimp.net | http://www.libgd.org

--
Sent via pgsql-ports mailing list (pgsql-ports@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-ports

-------------------------------------------------------

--
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL

--
Sent via pgsql-www mailing list (pgsql-www@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-www

Re: [ADMIN] Do we need vacuuming when tables are regularly dropped?

On Mon, Sep 29, 2008 at 11:12 AM, Steve Crawford
<scrawford@pinpointresearch.com> wrote:
>
>>> What it sounds like to me is that you're not vacuuming the system
>>> catalogs, which are getting bloated with dead rows about all those
>>> dropped tables.
>>>
>>
>> Wow, great!
>>
>> It is not immediately clear from the documentation, but the VACUUM
>> command also deals with the system catalogs as well, correct?
>>
>>
>
> To expand on Tom's answer, rows in system tables are created not only for
> tables but for each column in the table, rules, indexes, etc. You can end
> up with a lot more row creation than you suspect. And temporary tables bloat
> the system tables just like regular tables. We discovered that cron scripts
> using temporary tables can cause very rapid system-table bloat.
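To make the quoted advice concrete, a hedged sketch (the same VACUUM syntax works on system and user tables alike):

```sql
-- Vacuum the catalogs that bloat fastest when tables are created
-- and dropped frequently.
VACUUM ANALYZE pg_catalog.pg_class;
VACUUM ANALYZE pg_catalog.pg_attribute;
VACUUM ANALYZE pg_catalog.pg_depend;

-- Rough bloat check: dead vs. live tuples in the system tables.
SELECT relname, n_live_tup, n_dead_tup
  FROM pg_stat_sys_tables
 ORDER BY n_dead_tup DESC
 LIMIT 10;
```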

Also, there was a time when you couldn't do VACUUM FULL on system
tables due to locking issues, and had to take the db down to single-user
mode to do so.

Tom, is that still the case?

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin

Re: [BUGS] ERROR: unexpected data beyond EOF in block XXXXX of relation "file"

Okay, I see the maturity level is too low here. I'll take this elsewhere.
If anyone has a similar problem and would like to know the status please
email me.

David Fetter wrote:
>
> On Sun, Sep 28, 2008 at 11:51:49AM -0700, austijc wrote:
>>
>> That's going to be a problem for the continued viability of
>> Postgres.
>
> Funny, I thought running a DBMS over a known-unreliable storage system
> was a problem for the continued viability of Oracle. When, not if,
> people lose enough data to this silliness, they'll be thinking hard
> about how to get Oracle out and something reliable in.
>
>> Clustered systems using a NAS for data is a pretty common
>> configuration these days. Oracle specifically supports it and even
>> complains if your NFS mount options are not correct. Our Oracle
>> DBs run great in this same configuration and are a good 10-20 times
>> faster than the local disk performance along with the quick
>> take-over capability if a system goes belly up.
>
> Oracle stores more state to the disk than PostgreSQL does, which has
> significant down sides. There are more effective ways of handling
> uptime requirements than jamming NFS into the picture. Maybe it's
> just my failure of imagination, but I can't think of a *less*
> effective one.
>
>> I'll try to isolate this problem with a simple C program to tell me
>> what software layer to look at. Hopefully it's just a configuration
>> issue.
>
> It's not. The issue is that NFS is broken garbage from a DBMS
> perspective, and, it's pretty easy to argue, from just about any other
> perspective.
>
> Cheers,
> David.
>
>>
>> Tom Lane-2 wrote:
>> >
>> > austijc <jaustin@jasononthe.net> writes:
>> >> The question is can anyone more familiar with this tell me what's
>> going
>> >> on
>> >> here? I don't know if this is a Postgres, Sun, or NetApp issue.
>> Could
>> >> it
>> >> be a work around for an old Linux bug causing an issue with acceptable
>> >> behavior of the NetApp device?
>> >
>> > People who try to run databases over NFS usually regret it eventually
>> ;-)
>> >
>> > All I can say is that this error message has never before been reported
>> > by anyone who wasn't exposed to that lseek-inconsistency kernel bug.
>> > I am not finding it too hard to believe that NFS might be vulnerable to
>> > similar misbehavior.
>> >
>> > regards, tom lane
>> >
>> > --
>> > Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
>> > To make changes to your subscription:
>> > http://www.postgresql.org/mailpref/pgsql-bugs
>> >
>> >
>>
>> --
>> View this message in context:
>> http://www.nabble.com/ERROR%3A--unexpected-data-beyond-EOF-in-block-XXXXX-of-relation-%22file%22-tp19680438p19713228.html
>> Sent from the PostgreSQL - bugs mailing list archive at Nabble.com.
>>
>>
>> --
>> Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
>> To make changes to your subscription:
>> http://www.postgresql.org/mailpref/pgsql-bugs
>
> --
> David Fetter <david@fetter.org> http://fetter.org/
> Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter
> Skype: davidfetter XMPP: david.fetter@gmail.com
>
> Remember to vote!
> Consider donating to Postgres: http://www.postgresql.org/about/donate
>
> --
> Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-bugs
>
>

--
View this message in context: http://www.nabble.com/ERROR%3A--unexpected-data-beyond-EOF-in-block-XXXXX-of-relation-%22file%22-tp19680438p19728120.html
Sent from the PostgreSQL - bugs mailing list archive at Nabble.com.


--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs

[ANNOUNCE] West: 2nd call for lightning talks

The PostgreSQL Conference: West is being held at Portland State
University on October 10th - 12th. The West Coast PostgreSQL
conference is currently seeking more Lightning Talks.

Lightning talks are an exciting way to get involved in the conference
with very little commitment on the speakers end. Assuming you can stand
in front of an audience for 5 minutes; you can speak about anything
PostgreSQL or Open Source related.

To submit your lightning talk please visit:

http://www.pgcon.us/west08/talk_submission/

If you have not yet registered for the event please visit:

http://www.postgresqlconference.org/west08/register

Lastly thank you to this years sponsors:

Command Prompt: http://www.commandprompt.com/
EnterpriseDB: http://www.enterprisedb.com/

Afilias : http://www.afilias.info/
HP: http://www.hp.com/

Emma : http://www.myemma.com/

Continuent : http://www.continuent.com/
Endpoint : http://www.endpoint.com/
OTG : http://www.otg-nc.com/

EFF: http://www.eff.org/
Google: http://www.google.com/

--
The PostgreSQL Company since 1997: http://www.commandprompt.com/
PostgreSQL Community Conference: http://www.postgresqlconference.org/
United States PostgreSQL Association: http://www.postgresql.us/

---------------------------(end of broadcast)---------------------------
-To unsubscribe from this list, send an email to:

pgsql-announce-unsubscribe@postgresql.org

[GENERAL] database question

Hi all, I have been experiencing some strange behaviour on my Postgres
DB. I am VERY new to PG, so bear with me, as what I am going to ask is
probably all very basic to you guys.
First off, over the last couple of weeks we have been seeing some large
file creations in the dir /var/lib/pgsql/data/base/16450, for
example:

-rw------- 1 postgres postgres 1073741824 Sep 29 15:15 2683
-rw------- 1 postgres root 1073741824 Sep 29 15:15 2613.77
-rw------- 1 postgres root 1073741824 Sep 29 15:15 2613.83
-rw------- 1 postgres root 65347584 Sep 29 15:16 2613.88
-rw------- 1 postgres root 1073741824 Sep 29 15:16 2613.86
-rw------- 1 postgres root 1073741824 Sep 29 15:17 2613.82
-rw------- 1 postgres root 1073741824 Sep 29 15:17 2613.81
-rw------- 1 postgres postgres 380346368 Sep 29 15:17 16451.1
-rw------- 1 postgres postgres 217710592 Sep 29 15:18 33820
-rw------- 1 postgres root 119046144 Sep 29 15:18 2683.1
-rw------- 1 postgres root 1073741824 Sep 29 15:18 2613.84

What are these files, why have they suddenly started to be created,
and why are they so large?
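For context, these are ordinary PostgreSQL data files: each relation in the database whose OID is 16450 is stored under that directory in segments of at most 1 GB, named after the relation's relfilenode (which can be mapped back to a table with `SELECT relname FROM pg_class WHERE relfilenode = 2613;`). A minimal sketch of the naming convention — the helper functions here are my own, for illustration only:

```python
# Sketch of PostgreSQL's data-file naming convention (helper names are
# my own, not any PostgreSQL API): a relation larger than 1 GB is split
# into segment files "NODE", "NODE.1", "NODE.2", ...

SEGMENT_SIZE = 1024 ** 3  # default segment size: 1 GiB

def parse_segment(filename):
    """Split a data-file name like '2613.77' into (relfilenode, segment)."""
    node, _, seg = filename.partition(".")
    return int(node), int(seg) if seg else 0

def min_relation_bytes(filename, file_bytes):
    """Lower bound on the whole relation's size implied by one segment:
    every earlier segment must already be full (1 GiB each)."""
    _, seg = parse_segment(filename)
    return seg * SEGMENT_SIZE + file_bytes

# The listing above shows segment 88 of relfilenode 2613, so that one
# relation alone is at least 88 GiB plus this segment's 65347584 bytes.
print(parse_segment("2613.88"))   # (2613, 88)
print(min_relation_bytes("2613.88", 65347584))
```

So the sudden growth simply means some relation (table or index) crossed another 1 GB boundary and PostgreSQL started a new segment file.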

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

[GENERAL] Re: Is there a parameter to tell postgresql to not attempt to open an IPv6 socket?

On 16 sep, 23:04, a...@commandprompt.com (Andrew Sullivan) wrote:
> On Tue, Sep 16, 2008 at 04:46:37PM -0400, Reid.Thompson wrote:
> > PostgreSQL appears, to all intents, to start up fine on the UNIX and IPv4
> > sockets.  Is there a parameter to tell postgresql to not attempt to open
> > an IPv6 socket?
>
> Specify the specific TCP/IP interfaces in the postmaster.conf file.
> Otherwise, Postgres will try to bind to all the sockets.  There's
> something hinkey about the IPv6 support in AIX, IIRC, so that you end
> up with this symptom.
>
> A
>
> --
> Andrew Sullivan
> a...@commandprompt.com
> +1 503 667 4564 x104  http://www.commandprompt.com/
>

Hello,

I have the same problem. I have looked for a postmaster.conf file, but
there is none on the server.
Apart from the doc and src files, the only files on my server
containing the word "postmaster" are:
/usr/local/pgsql/bin/postmaster
$PGDATA/postmaster.opts
$PGDATA/postmaster.pid

Do you have any idea why, and how to solve this problem?

Thanks,

Yann.
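For what it's worth, there is no postmaster.conf in modern releases; the listening interfaces are controlled by the listen_addresses setting in postgresql.conf in the data directory (presumably the file Andrew meant). A minimal sketch of the relevant lines — the 192.168.0.10 address is hypothetical, substitute your server's own IPv4 address:

```ini
# postgresql.conf -- restrict the server to Unix sockets plus IPv4 only.
# '*' would mean all interfaces (IPv4 and IPv6); listing explicit IPv4
# addresses avoids the IPv6 bind attempt entirely.
listen_addresses = '127.0.0.1, 192.168.0.10'   # 192.168.0.10 is hypothetical
port = 5432
```

Note that 'localhost' can resolve to ::1 on some systems, so spelling out '127.0.0.1' is the surer way to stay IPv4-only. A restart (not just a reload) is needed for listen_addresses to take effect.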
