Friday, June 6, 2008

[ODBC] TIMEOUT attributes

Hi,

 

Does the ODBC driver for PostgreSQL support the following timeout attributes (one way or another)?

 

  1. SQL_ATTR_CONNECTION_TIMEOUT
  2. SQL_ATTR_QUERY_TIMEOUT

 

With regards,

Vivek Gupta

[GENERAL] IN vs EXISTS

Hi all,

I had been using the IN clause almost exclusively until recently, when I
tried EXISTS and got a significant performance increase without
changing or creating any indexes:

SELECT ... FROM a WHERE a.ref IN (SELECT b.id FROM b WHERE ...)

vs

SELECT ... FROM a WHERE EXISTS (SELECT 1 FROM b WHERE a.ref=b.id ...)

Performance is at least a few times better when EXISTS is used. Is this
PostgreSQL-specific?
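
For anyone who wants to reproduce the comparison, here is a minimal
self-contained sketch (the table contents and the b.flag filter are invented
for illustration); EXPLAIN ANALYZE shows whether the planner chooses different
plans for the two forms:

CREATE TABLE a (id serial PRIMARY KEY, ref integer);
CREATE TABLE b (id integer, flag boolean);
INSERT INTO a (ref) SELECT g FROM generate_series(1, 100000) g;
INSERT INTO b (id, flag) SELECT g, (g % 2 = 0) FROM generate_series(1, 100000) g;
ANALYZE a; ANALYZE b;

EXPLAIN ANALYZE
SELECT * FROM a WHERE a.ref IN (SELECT b.id FROM b WHERE b.flag);

EXPLAIN ANALYZE
SELECT * FROM a WHERE EXISTS (SELECT 1 FROM b WHERE a.ref = b.id AND b.flag);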

Cheers

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [pgsql-es-ayuda] why SEQ SCAN and not INDEX SCAN

On Fri, Jun 6, 2008 at 4:39 PM, Alvaro Herrera
<alvherre@commandprompt.com> wrote:
> MIGUEL CANCHAS wrote:
>
>> *********** THIS IS THE EXPLAIN ANALYZE **********************************
>>
>> "Hash Left Join (cost=61680.50..68144.24 rows=3 width=122) (actual
>> time=32106.612..32344.086 rows=11 loops=1)"
>
> Hmm, it looks like we overlooked this problem ... did you solve it?


mmm... you're right (as our Argentine friends would write it)...

vejecutado_tejeduria is a view, right?
That's the part that takes the time; it returns 595637 rows... what
happens if you split the query into 2 parts?

--
Regards,
Jaime Casanova
PostgreSQL support and training
Guayaquil - Ecuador
Cel. (593) 87171157
--
TIP 3: If you found the answer to your problem, post it; others will thank you for it

Re: [pgsql-advocacy] PostgreSQL derivatives

On Fri, Jun 6, 2008 at 8:14 PM, Joshua D. Drake <jd@commandprompt.com> wrote:
> Well, GridSQL isn't even an extension to PostgreSQL, is it? It is written
> in Java and appears to be a userland application (or middleware if you
> prefer). I could be wrong.

Correct, sort of. GridSQL is not part of the core server; it is a
separate Java-based distributed database that front-ends a cluster of
Postgres servers.

Originally, GridSQL was written to be database independent. However,
over the last year, it has been specialized for Postgres. Some of
these specializations include support of the Postgres wire-level
protocol, an understanding of Postgres SQL syntax, etc. Coming
versions of GridSQL will support more native Postgres features and may
benefit from some of our internal changes to the database core.

Regardless, it is open source and should always run against standard PostgreSQL.

--
Jonah H. Harris, Sr. Software Architect | phone: 732.331.1324
EnterpriseDB Corporation | fax: 732.331.1301
499 Thornall Street, 2nd Floor | jonah.harris@enterprisedb.com
Edison, NJ 08837 | http://www.enterprisedb.com/

--
Sent via pgsql-advocacy mailing list (pgsql-advocacy@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-advocacy

Re: [pgsql-advocacy] PostgreSQL derivatives

On Fri, 6 Jun 2008, Robert Treat wrote:

> But there isn't any FLOSS software actually being shipped, right? License/code
> wise Truviso and Netezza are basically the same in that respect, even if
> technologically Truviso values Postgres compatibility higher, but there is no
> FLOSS distribution occurring, right?

It's complicated and not something I'm at liberty to discuss. Suffice it
to say that there's a large list of FLOSS credits for the software stack
that Truviso provides to customers. But a look at the web site will show
the lack of the usual "Download" tab for the world at large, and the exact
details of how the distributed FLOSS software fits together aren't public
yet.

If I told you any more I'd have to kill you, etc.

> Oh, I understand that, but even with the other ways, I don't know
> where Truviso draws that line. Granted, even if their corporate policy
> is "never work on postgres during business hours"...

We're a startup--it's always business hours. The line you hypothesize
isn't really drawn all that brightly. It may materialize further once I talk
with management about whether the days I spend at OSCON next month are
work or vacation time. If there's a BWPUG meeting next week (note the subtle
hint that it's not clear yet whether there is or not) and I'm in town this
time, I have a good story for you about that.

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD

--
Sent via pgsql-advocacy mailing list (pgsql-advocacy@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-advocacy

[SQL] Need some magic with alternative rows

Hi,
I need some magic with alternative rows.

I've got a table whose rows reference another table:
mytab (id, name, group_nr, class_nr, fk)

SELECT * FROM mytab
WHERE (fk = 994010)
OR ((class_nr = 40) AND (fk = 0))

994001, Tick, 1, 40, 994010
4001, Reporting, 1, 40, 0
4002, LEADS, 2, 40, 0
994003, Track, 3, 40, 994010
4003, Details, 3, 40, 0

The rows with fk=0 are defaults that should only be used when there is
no specific reference to the other table.

Is there a way to remove the default rows when a row with fk <> 0
exists for the same group_nr?
I'd like to get just:

994001, Tick, 1, 40, 994010
4002, LEADS, 2, 40, 0
994003, Track, 3, 40, 994010
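
One way to express that, as an untested sketch using the class_nr column from
the table definition above: keep a default row (fk = 0) only when no specific
row exists for the same group_nr.

SELECT *
FROM mytab m
WHERE (m.fk = 994010
       OR (m.class_nr = 40 AND m.fk = 0))
  AND NOT (m.fk = 0
           AND EXISTS (SELECT 1 FROM mytab s
                       WHERE s.group_nr = m.group_nr
                         AND s.class_nr = m.class_nr
                         AND s.fk = 994010));

A similar effect can be had with SELECT DISTINCT ON (group_nr) ...
ORDER BY group_nr, (fk = 0), which sorts the specific row ahead of the
default within each group and keeps only the first one.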


Regards
Andreas


--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql

Re: [GENERAL] strpos NOT doing what I'd expect

Ralph Smith <smithrn@washington.edu> writes:
> DECLARE
> achar character := '' ;

Use varchar or text. character is weird about trailing spaces.
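
A quick way to see the effect (the results shown as comments are what I would
expect on 8.x): casting a character value to text drops its trailing spaces,
so a lone space becomes the empty string, and strpos with an empty needle
reports a match at position 1. That is why the function in the strpos post
appears to "match" on the space.

SELECT strpos('3', ' '::character);  -- 1: the char(1) space is trimmed to '' in the cast
SELECT strpos('3', ' '::text);       -- 0: text keeps the space
SELECT strpos('3', '');              -- 1: an empty needle always matches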

regards, tom lane

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [HACKERS] Overhauling GUCS

"Joshua D. Drake" <jd@commandprompt.com> writes:
> Not surprising really. It is a simple adjustment to make and it also is
> easy to spot when it's a problem. However it is not trivial to test for
> (in terms of time and effort). I know 10 is wrong and so do you.

Sure. But what is right? I'm afraid to just push it to (say) 100
because of the possibility of O(N^2) behavior in eqjoinsel. Somebody
needs to do some measurements on somewhat realistic scenarios.

regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [pgsql-www] Ask for Help on Improving the FreeBSD FAQ

On Sat, 2008-06-07 at 03:46 +0200, Gabor PALI wrote:
> > Oh I apologize :). PeterE is pretty much lead doc dude around here:

> Yes, I would like to know more about:
> - how you implement the user comments on your web pages,

Ahh yes, this is not actually a doc question, so you are in the right
place. These are stored in a PostgreSQL database and sent through a
moderation queue. If the comment is appropriate, we include the
information in the next version of the docs.

The website svn is available here:

https://pgweb.postgresql.org/

> - whether you have any schedule for merging these comments,

Once per major release.

> - how you build and mark up the different versions (e.g. static and
> interactive per version) of the documentation,

That I can't answer.

> - whether you have any project-specific modifications for DocBook

We do have some specific stylesheet changes, yes; you can find them under
doc/src if you download a tarball.

> - what other tools you use for creating documentation

Just DocBook and Jade :) (and DSSSL)

Sincerely,

Joshua D. Drake

--
Sent via pgsql-www mailing list (pgsql-www@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-www

Re: [GENERAL] PL/pgSQL graph enumeration function hangs

"Charles F. Munat" <chas@munat.com> writes:
> Using pseudocode from Celko's "SQL for Smarties" book, I wrote the
> following function that builds a path enumeration table. I hope to
> trigger this function on the rare occasions that the organizations table
> is updated. But when I run this function, it hangs.

I think there might be something wrong with this query:

> INSERT INTO organizations_path_enum
> SELECT o1.parent_id, r1.child_id, (o1.depth + 1)
> FROM organizations_path_enum o1, relationships r1
> -- advance existing paths by one level
> WHERE EXISTS (SELECT * FROM organizations_path_enum AS o2
> WHERE r1.parent_id = o2.child_id)
> -- insert only new rows into the table
> AND NOT EXISTS (SELECT * FROM organizations_path_enum AS o3
> WHERE o1.parent_id = o3.parent_id AND r1.child_id = o3.child_id);

I'm not totally clear on what this is supposed to accomplish, but
it seems like there should be some join clause between o1 and r1.
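
One possible reading of the intended step, offered only as a sketch since the
full schema isn't shown: join the existing paths directly to relationships, so
each pass extends a known path by one edge instead of pairing every path with
every relationship row.

INSERT INTO organizations_path_enum
SELECT o1.parent_id, r1.child_id, o1.depth + 1
FROM organizations_path_enum o1
JOIN relationships r1 ON r1.parent_id = o1.child_id
-- insert only new rows into the table
WHERE NOT EXISTS (SELECT 1 FROM organizations_path_enum AS o3
                  WHERE o1.parent_id = o3.parent_id
                    AND r1.child_id = o3.child_id);

In the PL/pgSQL loop, checking FOUND (or GET DIAGNOSTICS ... = ROW_COUNT)
after the INSERT then gives a natural termination condition once a pass adds
no new rows.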

regards, tom lane

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [pgsql-www] Ask for Help on Improving the FreeBSD FAQ

> Oh I apologize :). PeterE is pretty much lead doc dude around here:

Thank you, I will also contact him, but I think he will get all these mails anyway.


> But I might be able to help you as well. Do you have a specific area of
> interest besides the fact that we use DocBook?

Yes, I would like to know more about:
- how you implement the user comments on your web pages,
- whether you have any schedule for merging these comments,
- how you build and mark up the different versions (e.g. static and
interactive per version) of the documentation,
- whether you have any project-specific modifications for DocBook
- what other tools you use for creating documentation


Thank you for your answers in advance,
:g

--
Sent via pgsql-www mailing list (pgsql-www@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-www

Re: [pgsql-www] Git access?

On Fri, 2008-06-06 at 13:54 +0200, Peter Eisentraut wrote:
> Am Donnerstag, 5. Juni 2008 schrieb David Fetter:
> > How do I get access to create git repositories on git.postgresql.org?

> The plan is to try out gitosis for account management. I have no experience
> with this, so we'd need to play around with it first.

Peter,

Just FYI if you find you don't like that direction, let me know and I
would gladly show you how I did it for pgweb with SVN... I believe it
would work out of the box for the git stuff.

Sincerely,

Joshua D. Drake


--
Sent via pgsql-www mailing list (pgsql-www@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-www

Re: [pgsql-www] Ask for Help on Improving the FreeBSD FAQ

On Sat, 2008-06-07 at 03:35 +0200, Gabor PALI wrote:
> Hello Joshua,

> I got this advice from Bruce Momjian, because we are mainly interested
> in the implementation, not in the documentation itself :) As far as I
> know, this list is for developers, and -doc is for users. Do you know
> who is responsible for the implementation of the documentation?

Oh I apologize :). PeterE is pretty much lead doc dude around here:

Peter Eisentraut <peter_e@gmx.net>

But I might be able to help you as well. Do you have a specific area of
interest besides the fact that we use DocBook?

Sincerely,

Joshua D. Drake

--
Sent via pgsql-www mailing list (pgsql-www@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-www

Re: [pgsql-es-ayuda] Writing data from a table to a file

frank wrote:
>
> Greetings, list,
>
> I don't know if this has happened to anyone, but I have the following dilemma:
>
> I'm developing an application that interacts with Linux, and more
> specifically with squid.
>
> It's for a company that has a policy of temporarily suspending Internet
> service. Squid has a text file containing all the IPs that are allowed
> to browse, but that file is constantly being edited: adding, modifying,
> deleting, and so on.
>
> My intention is for all the data, including the users' IPs, to be stored
> in the Postgres database, and then to let the squid IP file be modified
> through the web. I can do this when there are only one or two lines to
> add, since that only involves the query result from the database and the
> rest is done by the Mono program I use to edit the file on Linux.
>
> But my problem is when there are 100 or 200 lines to delete or to add.
> Currently I do it with a Linux script that edits the file line by line,
> but I'm not going to do that to remove 100 lines or add 50.
>
> The solution that occurs to me is to regenerate the file with only the
> valid IPs, which is why I'm turning to you. Does anyone know whether the
> data from a table or SQL query can be written directly to a text file?
> Or does anyone have another idea?
>
> Sorry for the long story, but it's in case someone can point me in the
> right direction.
>
Why not access the database with a Perl script that builds the text
file you need?
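
One more option worth knowing about, sketched here with made-up table, column,
and path names: on 8.2 or later, COPY can write a query result straight to a
server-side text file, provided the command is run by a superuser and the
server account can write to that location; psql's \copy does the same from
the client side.

COPY (SELECT ip FROM allowed_ips WHERE active) TO '/etc/squid/allowed_ips.txt';

-- From psql on a client machine, writing the file locally instead:
-- \copy (SELECT ip FROM allowed_ips WHERE active) TO 'allowed_ips.txt'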

--
Regards,

Julio Cesar Sánchez González.

--
Now I am become Death, the destroyer of worlds.
I am Death who takes everything away, the source of things to come.

www.sistemasyconectividad.com.mx

http://darkavngr.blogspot.com/

--
TIP 8: explain analyze is your friend

Re: [pgsql-www] Ask for Help on Improving the FreeBSD FAQ

Hello Joshua,

Joshua D. Drake wrote:
>
> On Fri, 2008-06-06 at 19:02 +0200, Gabor PALI wrote:
>> Dear PostgreSQL Developers,
>
>> Thank you for your help in advance.
>
> I think you may want to be at pgsql-docs
>

I got this advice from Bruce Momjian, because we are mainly interested
in the implementation, not in the documentation itself :) As far as I
know, this list is for developers, and -doc is for users. Do you know
who is responsible for the implementation of the documentation?


Thank you,
:g


--
Sent via pgsql-www mailing list (pgsql-www@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-www

Re: [HACKERS] Overhauling GUCS

On Fri, 2008-06-06 at 20:19 -0400, Tom Lane wrote:
> Robert Treat <xzilla@users.sourceforge.net> writes:

> Actually, the reason it's still 10 is that the effort expended to get it
> changed has been *ZERO*. I keep asking for someone to make some
> measurements, do some benchmarking, anything to make a plausible case
> for a specific higher value as being a reasonable place to set it.

> The silence has been deafening.

Not surprising really. It is a simple adjustment to make and it also is
easy to spot when it's a problem. However it is not trivial to test for
(in terms of time and effort). I know 10 is wrong and so do you. If you
don't, I am curious why I see so many posts from you saying, "Your
estimates are off, what is your default_statistics_target?" with yet
even more responses saying, "Uhh 10."


Sincerely,

Joshua D. Drake


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] Overhauling GUCS

On Fri, 6 Jun 2008, Tom Lane wrote:

> Well, you can't see the default or reset values in pg_settings, only the
> current value. However, I fail to see the use of either of those for
> a configure wizard.

I'm under the impression that the primary reason to put the default in
there is to make it easier for a file generator program to be decoupled a
bit from the internal representation. Regardless, these values should be
exposed for tool writers. If you build a prototype interface for an
interactive settings-changing tool, you quickly discover that showing the
default, range, and recommended setting are all valuable things people
would like to see when deciding what to change a setting to. And there's
no reason accumulating all that info should be the responsibility of a
tool writer when it's easy to expose and keep up to date inside the
database itself.
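
For anyone following along, this is roughly what the view already exposes
(column list as I recall it for 8.2/8.3), i.e. the information a tool can get
today without any new fields:

SELECT name, setting, unit, context, source, min_val, max_val
FROM pg_settings
WHERE name IN ('shared_buffers', 'default_statistics_target', 'effective_cache_size');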

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[GENERAL] strpos NOT doing what I'd expect

CODE:
===============================
CREATE OR REPLACE FUNCTION find_next_delim(invar varchar, delimlist varchar) RETURNS integer AS
$$

/*   OVERLOADED Function.  The other version takes a 3rd parameter as the
                starting position in invar.
*/

DECLARE
  
  achar  character := '' ;
  j      int       := 0  ;
  
BEGIN

  IF length(delimlist) = 0 THEN
    RAISE NOTICE 'In function ''find_next_delim'' the delimiter cannot be null.' ;
  END IF ;

  
  FOR i IN 1 .. length(invar) 
  LOOP

    j := j + 1 ;
    achar := substring(invar from i for 1 ) ;
    RAISE NOTICE 'achar is R%S',achar ;
    IF strpos(delimlist,achar) <> 0 THEN
      RETURN j ;
    END IF ;
    
  END LOOP ;
  
  RETURN 0 ;
  
END ;
$$ LANGUAGE plpgsql ;  /*   find_next_delim   */




WHAT'S HAPPENING:
===============================
airburst=# select find_next_delim('ralph smith','3') ;

NOTICE:  achar is RrS
NOTICE:  achar is RaS
NOTICE:  achar is RlS
NOTICE:  achar is RpS
NOTICE:  achar is RhS
NOTICE:  achar is R S
 find_next_delim 
-----------------
               6
(1 row)


airburst=# select find_next_delim('ralph smith','') ;  -- for the heck of it, that's a null

NOTICE:  In function 'find_next_delim' the delimiter cannot be null.
NOTICE:  achar is RrS
NOTICE:  achar is RaS
NOTICE:  achar is RlS
NOTICE:  achar is RpS
NOTICE:  achar is RhS
NOTICE:  achar is R S
 find_next_delim 
-----------------
               6
(1 row)

WHY does it find a match on the space???

Thanks!

Re: [HACKERS] Overhauling GUCS

Greg Smith <gsmith@gregsmith.com> writes:
> On Fri, 6 Jun 2008, Gregory Stark wrote:
>> "Greg Smith" <gsmith@gregsmith.com> writes:
>>> 1) Is it worthwhile to expand the information stored in the GUC structure to
>>> make it better capable of supporting machine generation and to provide more
>>> information for tool authors via pg_settings? The exact fields that should or
>>> shouldn't be included remains controversial; consider "default value",
>>> "per-session/runtime/restart", and "enum lists" as the list of things that are
>>> most needed there.
>>
>> Isn't that a list of what's *already* there?

> I should have been clearer there. Some of the items suggested are already
> in the structure, but aren't visible via pg_settings.

Well, you can't see the default or reset values in pg_settings, only the
current value. However, I fail to see the use of either of those for
a configure wizard. It'll presumably be attached to a fresh connection
so the reset value is not different from the current; and any decent
wizard is going to know perfectly well what the defaults are.

> ... In others (like the
> suggestion to add a URL to the documentation) it is actually a new field
> being added as well as its corresponding entry in the settings view.

Offhand I would argue that we should choose the URLs in such a way that
they can be derived automatically if you know the PG version number and
GUC variable name. (We're almost there already, except that you have
to know which section of the chapter it's in.) So the need to have a
pg_settings column seems questionable --- seems like it would mostly
be clutter. Perhaps instead of a view column, it would be useful to
encapsulate the algorithm as a function? pg_variable_help_url(name)
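
A rough sketch of what such a function might look like, purely illustrative:
the section name is passed in explicitly because, as noted above, it cannot
yet be derived from the variable name alone, and the URL pattern is assumed
to stay stable across releases.

CREATE OR REPLACE FUNCTION pg_variable_help_url(guc_name text, section text)
RETURNS text AS $$
  SELECT 'http://www.postgresql.org/docs/'
      || substring(current_setting('server_version') from '^[0-9]+[.][0-9]+')
      || '/static/runtime-config-' || $2 || '.html#GUC-'
      || upper(replace($1, '_', '-'));
$$ LANGUAGE sql STABLE;

-- pg_variable_help_url('default_statistics_target', 'query') would then yield
-- http://www.postgresql.org/docs/8.3/static/runtime-config-query.html#GUC-DEFAULT-STATISTICS-TARGET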

regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] Overhauling GUCS

Robert Treat <xzilla@users.sourceforge.net> writes:
> There is a saying, something like "The accumulation of anecdotes is not
> data". Well, we seem to have a high bar on what proof we need to actually
> change a default GUC setting. default_statistics_target is a prime example,
> where almost no one I know has ever recommended 10 as a default, or suggests
> setting it to 10 as a way to improve performance, but the effort to get it
> changed to something more reasonable has been monumental.

Actually, the reason it's still 10 is that the effort expended to get it
changed has been *ZERO*. I keep asking for someone to make some
measurements, do some benchmarking, anything to make a plausible case
for a specific higher value as being a reasonable place to set it.

The silence has been deafening.
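
For anyone who does want to pick this up, the per-column knob makes it easy to
experiment without touching the global default; a minimal sketch, with
hypothetical table and column names:

ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 100;
ANALYZE orders;
EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;   -- compare estimated vs. actual rows
ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS -1; -- revert to the default target
ANALYZE orders;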

regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [pgsql-advocacy] PostgreSQL derivatives

On Fri, 2008-06-06 at 19:58 -0400, Robert Treat wrote:
> On Friday 06 June 2008 18:32:23 Jonah H. Harris wrote:
>> We make closed-source changes to the Postgres core related to
>> performance and Oracle compatibility. But, while these changes are
> > fairly significant, they do not prevent us from laying over a newer
> > version of Postgres. Similarly, as you mentioned, GridSQL is fully
> > open source.
> >
>
> Note that GridSQL, while it works with Postgres, was developed completely
> separately and independently from PostgreSQL.

Well, GridSQL isn't even an extension to PostgreSQL, is it? It is written
in Java and appears to be a userland application (or middleware if you
prefer). I could be wrong.

Sincerely,

Joshua D. Drake

--
Sent via pgsql-advocacy mailing list (pgsql-advocacy@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-advocacy

Re: [SQL] crosstab functions in postgres 8.1

"Chris Preston" <chris@thetrafalgartravel.com> writes:
> I am almost a newbie to postgres and only recently found out about one of
> the greatest functions, crosstab, in postgres. The problem is that I am using
> 8.1 and when I run the function I am getting an error relating to
> $libdir/tablefunc. Someone mentioned that it might be because I am using
> version 8.1 and not 8.3

Well, since you didn't supply the exact error message, this is just a
guess; but it sounds like you haven't got the contrib/tablefunc module
installed in your 8.1 database, or at least haven't got it installed
correctly. You need a tablefunc.so file that is compiled for 8.1
(copying the 8.3 version will almost certainly NOT work).

regards, tom lane

--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql

[GENERAL] array column and b-tree index allowing only 8191 bytes

Hi all,

I'm checking out some features in pgsql and found out about an array
datatype. As I'm curious to find out how well it performs, I've created
a table that contains an integer[] column and a script to insert about
500K rows in it. The length for the integer[] column is random (can be
10, can be 5000, can be more than that), as are the values in it.

When trying to insert a row, I get the following error:

index row requires 9796 bytes, maximum size is 8191

If I understood correctly, this is a limit of the b-tree index. Usually
you'd want to use another type of index but, again if I understood
correctly, those are meant mostly for full-text indexing.

Remember, I'm doing this for recreational purposes (the array type is
there, I wanted to check out its performance, so why not? :-) ).

So my questions are: is this at all possible? If so, is it possible to
increase that maximum size?
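
One thing worth trying instead of a bigger b-tree entry, sketched below and
assuming 8.2 or later where GIN ships with operator classes for arrays: a GIN
index stores the individual array elements rather than the whole array value,
so the per-entry size limit doesn't come into play, and it supports the array
containment and overlap operators.

CREATE TABLE arr_demo (id serial PRIMARY KEY, vals integer[]);  -- names invented
CREATE INDEX arr_demo_vals_gin ON arr_demo USING gin (vals);

-- Element-oriented queries that can use the GIN index:
SELECT * FROM arr_demo WHERE vals @> ARRAY[42];       -- arrays containing 42
SELECT * FROM arr_demo WHERE vals && ARRAY[1, 2, 3];  -- arrays overlapping {1,2,3}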

Regards,
Celso


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [pgsql-advocacy] PostgreSQL derivatives

On Friday 06 June 2008 18:32:23 Jonah H. Harris wrote:
> On Fri, Jun 6, 2008 at 6:26 PM, Seth Grimes <grimes@altaplana.com> wrote:
> > My understanding is that the MPP vendors using PostgreSQL and Ingres are
> > not *significantly* altering the software other than to, essentially,
> > turn off reliance on indexes. They're then adding extensions that are
> > proprietary and not, except in the case of GridSQL, open source. Is my
> > understanding correct?
>
> We make closed-source changes to the Postgres core related to
> performance and Oracle compatibility. But, while these changes are
> fairly significant, they do not prevent us from laying over a newer
> version of Postgres. Similarly, as you mentioned, GridSQL is fully
> open source.
>

Note that GridSQL, while it works with Postgres, was developed completely
separately and independently from PostgreSQL.

--
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL

--
Sent via pgsql-advocacy mailing list (pgsql-advocacy@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-advocacy

Re: [BUGS] BUG #4116: Cannot create tablespace: could not set permissions on directory

Alvaro Herrera <alvherre@commandprompt.com> writes:
> Graham Leggett wrote:
>> This code makes the incorrect assumption that the platform will allow the
>> postgres user to set the permissions using chmod.

> Is this something we should do anything about?

IMHO, no. The reason for actually doing the chmod is to verify that we
own the directory. If it fails, we don't own the directory in any
meaningful sense. I think the complainant's real problem is that he's
misconfigured his SELinux permissions.

regards, tom lane

--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs

Re: [HACKERS] Overhauling GUCS

On Sat, 2008-06-07 at 01:30 +0200, Andreas Pflug wrote:
> Gregory Stark wrote:
> > "Andreas Pflug" <pgadmin@pse-consulting.de> writes:

> I think I made my point very clear when stating "not a file, but via
> SQL". Though I'm not a native English speaker, and I'm sure you
> understood. I must assume you're polluting this thread deliberately in
> order to sabotage the original intention of this thread. I find this
> disgusting.
>
> Ok, trying to contribute gui tools for pgsql is for masochists. We have
> vi, sed and grep, no more tools required, right?

Hold on guys, let's not get all out of whack. No one in their right mind
is going to suggest that vi, sed and grep are the only tools we need to
do this.

However, a GUI wizard isn't really the way to go either. SQL? Yes, it would
be very nice to be able to:

BEGIN;
SET shared_buffers to 65536 COMMENT IS 'Since I have 4G of ram I want
512 megs of shared buffers';
SET effective_cache_size to 262144 COMMENT IS 'Since I have 4G of ram I
want 1 gig of effective_cache';
COMMIT;

When that is done, it would write out the postgresql.conf.

Regardless of all of this, this thread is way out of hand. We went from
overhauling the configuration file to wizards, sample postgresql.conf
files and possibly even half of us migrating to MySQL :P.

Can we refine the topic a bit?

Joshua D. Drake


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] Overhauling GUCS

Gregory Stark wrote:
> "Andreas Pflug" <pgadmin@pse-consulting.de> writes:
>
>> I personally wouldn't even think about starting such a wizard, unless I have an
>> idea how to push the result into the database. No, not a file, but via SQL! So
>> your statement you won't react unless a wizard is almost ready is prohibitive,
>> apart from the fact that not only wizards (featuring AI) are interesting, but
>> simple config tools as well.
>
> Well there's a perfectly good place to start today. Dump out a config file

I think I made my point very clear when stating "not a file, but via
SQL". Though I'm not a native English speaker, and I'm sure you
understood. I must assume you're polluting this thread deliberately in
order to sabotage the original intention of this thread. I find this
disgusting.

Ok, trying to contribute gui tools for pgsql is for masochists. We have
vi, sed and grep, no more tools required, right?

Regards,
Andreas

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [COMMITTERS] pgsql: Translation updates.

Alvaro Herrera <alvherre@commandprompt.com> writes:
> Tom Lane wrote:
>> Hmm, you're right, the es/postgres.po file in the pgtranslation
>> repository has an unexpanded $PostgreSQL$ tag. Which is unsurprising
>> since pgfoundry's CVS is probably not set to recognize that tag.

> Well, my intention was that $Id$ should be expanded in pgfoundry, and
> $PostgreSQL$ should be expanded when the file is imported from pgfoundry
> into the main CVS repo. I'm not sure why it hasn't.

Right, but the copy in pgfoundry has never been imported into the main
CVS repo ...

regards, tom lane

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

[PATCHES] Patch for dependency traversal during DROP

The attached patch rewrites DROP recursion according to my sketch here:
http://archives.postgresql.org/pgsql-hackers/2008-05/msg00301.php

It gets rid of a lot of unwanted NOTICE messages during drop, specifically
by ensuring that AUTO/INTERNAL drops are silent even when they cascade
from other cascaded objects. (In an unexpected side benefit, the messages
seem to come out in a more logical order, too.) See the regression test
diffs for examples. Also, it takes locks on objects before searching for
their dependencies, which fixes all of the concurrent-deletion problem
cases that I linked to in the above message.

I have not done anything about folding cascaded-drop NOTICEs into a
single long message, as was discussed a couple days ago. That would be
easy to do from here, but it seems like it should be a separate patch.

I was afraid while writing the patch that it might be too slow due to
reliance on simple linear list searching in ObjectAddresses lists ---
that means that deleting N objects is O(N^2), albeit with a constant
factor that's pretty tiny compared to the catalog-update work involved.
While this could be fixed by replacing ObjectAddresses with a more complex
data structure, I didn't want to have to do that in a first-generation
patch either. So I was pleased to find out that it actually seems to
be faster than CVS HEAD. I tested by timing "DROP SCHEMA public"
after running the regression tests. In CVS HEAD this involves dropping
1177 distinguishable objects (ie, there are that many entries in the
targetObjects list at completion of the scan phase). I get these timings
on a pretty-slow HPPA machine:

CVS HEAD:

$ time psql -c 'drop schema public restrict' regression 2>/dev/null

real 0m2.53s
user 0m0.04s
sys 0m0.03s
$ time psql -c 'drop schema public cascade' regression 2>/dev/null
DROP SCHEMA

real 0m8.06s
user 0m0.05s
sys 0m0.03s

With patch:

$ time psql -c 'drop schema public restrict' regression 2>/dev/null

real 0m0.74s
user 0m0.03s
sys 0m0.02s
$ time psql -c 'drop schema public cascade' regression 2>/dev/null
DROP SCHEMA

real 0m6.83s
user 0m0.03s
sys 0m0.02s

The speedup in RESTRICT timing was expected, but not so much CASCADE.
(BTW, I wonder why the RESTRICT and CASCADE timings aren't about the same
in CVS HEAD? The old implementation does all the same work in both cases,
and only fails at the end...)

It might be that with many thousand objects to be dropped, we'd
start to hit the O(N^2) behavior, but I suspect that CVS HEAD is
none too pleasant in such a case either. Anyone want to run some
experiments?

Another possible objection to this patch is that it takes an awful lot
of locks on things that we never locked before; a large DROP operation
could easily run out of locktable shared memory when it did not before.
That's fixable by increasing max_locks_per_transaction, but I wonder
whether there will be any push-back about it.

A couple of stylistic issues:

* After obtaining lock on an object-to-be-deleted, we need to check
whether it's already been deleted. This could have been done with
a pile of object-type-specific SearchSysCache tests, but I chose to
do it by checking to see if the pg_depend tuple we are traversing
has been marked dead; aside from being object-type-independent,
that should be a lot cheaper. This required sticking a little bit
of a wart into genam.c, since only at that level do we know which
buffer the tuple is in. This seemed cleaner than the alternative,
but I wonder if anyone has a better idea?

* The DROP code now requires more per-target-object state than just
the object's identity. However the ObjectAddresses struct it was
using is also shared with various dependency-adding functionality
that doesn't need any more than that. I chose to deal with that
by having the ObjectAddresses support routines handle ObjectAddresses
with or without a secondary "extra data" array. This is pretty ugly
IMHO, but duplicating most of that support code seemed no better.
Anybody have a strong aversion to that, or a better idea? (The
whole area might have to be revisited anyway, if we decide we need
something smarter than a linear list.)

If there are no objections I'll apply this in a day or two.

regards, tom lane

Re: [pgus-board] It's time for final review

On Fri, 2008-06-06 at 15:29 -0700, Selena Deckelmann wrote:

> On Fri, Jun 6, 2008 at 3:03 PM, Joshua D. Drake <jd@commandprompt.com> wrote:
>
> > 1. No elections, only appointees -- would be a publicity nightmare but
> > it is certainly not unheard of and in fact has shown that it works very
> > well in certain circumstances.
>
> I think if we chose a different name - like "Pg'R'Us" (with the US not
> meaning United States), or "Cmte for the Advancement of PostgreSQL" we
> could do that.

Well I wasn't actually suggesting it but as a note, Core is not
elected :P

> > 2. Hold off elections of the new board until March and have that board
> > term be 2 years. That means that every other year we would potentially
> > replace 3 or 4 depending on the year.
>
> Apologies to you both, but I would like Portland to be the place that
> these elections take place. And I would like the fall conference to
> continue to be on the West coast.

No reason to apologize, it's what has been discussed and agreed upon. It's
just an issue of resolving the problem at this point. The fall
conference, considering it's called West, will likely always be on the
west coast (unless we somehow re-define the term west) but that isn't
really relevant to this discussion :P

>
> > 3. Have elections twice a year once the founder terms are up. Which
> > means twice a year we could lose 3 or 4.
>
> I think this is excessive.

Right. Which is why I documented my perfect world.

Sincerely,

Joshua D. Drake

--
Sent via pgus-board mailing list (pgus-board@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgus-board

[BUGS] BUG #4226: PG_CTL fails to strip admin rights before running postgres

The following bug has been logged online:

Bug reference: 4226
Logged by: Austin Lee
Email address: Austin_Lee@rapid7.com
PostgreSQL version: 8.2.7
Operating system: Win2k3R2SP2
Description: PG_CTL fails to strip admin rights before running
postgres
Details:

I am trying to run pg_ctl.exe as admin when I get this error.
"FATAL: postgres: could not locate matching postgres executable."

I know there was a fix in 8.2.7 for this admin permission rights issue.
I've observed the fix in initdb.exe: in 8.2.1 initdb fails because it can't
find postgres; in 8.2.7 initdb succeeds.

I assumed pg_ctl would have the same behavior since I see the same calls to
AddUserDacl that I saw for initdb. However, I am still getting an error for
pg_ctl, which I believe might be a Windows permission issue.

--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs

[GENERAL] accessing table in LIFO order

I have got a table that holds 2 columns: date_id and info. date_id is bigint, created as, e.g., 17 May 2008, 19:05 hrs => 200805171905. This table is populated with {date, info} pairs that I receive frequently, AND there can be multiple such entries with a single date_id but varying info, e.g. 20 entries with the same date_id but different info. I haven't created any primary key or index.

I use query that looks like this:
SELECT date_id, info FROM netproto_entries ORDER BY date_id DESC LIMIT <limit_val>;

Questions:
- For higher values of limit_val (say 70/75/...), when I change it, the order
  in which I get the entries changes. E.g. it shows one info value at the top
  one time, and a different one the next time, with a different value of limit_val.

- My intent is to fetch the entries from the database in LIFO order, most
  recently entered entry at the top. Entries with the same date_id are not
  in LIFO order. I understand that I can add another incrementing index
  column and sort the table based on that. But, without adding this, does
  PostgreSQL have any method/configuration to give me output in LIFO order?
  And are there any better methods than adding another index column (as
  mentioned above)?
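
A sketch of the extra-column approach mentioned above (untested, names
invented): a table has no inherent row order, so the only reliable way to get
strict insertion order is to record it. Existing rows will receive sequence
values in an arbitrary order, but everything inserted afterwards will sort
correctly.

CREATE SEQUENCE netproto_entries_entry_seq;
ALTER TABLE netproto_entries
    ADD COLUMN entry_seq bigint DEFAULT nextval('netproto_entries_entry_seq');
CREATE INDEX netproto_entries_entry_seq_idx ON netproto_entries (entry_seq);

-- Most recently inserted rows first:
SELECT date_id, info
FROM netproto_entries
ORDER BY entry_seq DESC
LIMIT 50;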

Thanks,
Ethan

[GENERAL] hopefully a brain teaser, can't quite figure out query

Hi guys, maybe this is just a teaser for me, but any help would be
awesome. My best crack at the solution is not returning yet after a
good ten minutes. I'll post the explain analyze if it ever comes
back. I have no indexing, which is probably embarrassing; I'm just
not quite sure what to index or really how.

So, I've got two tables, which I'm going to try to simplify and still
get across:

The small table is a listing of county fips codes, their name, and the
geometry for the county. Each fips is only listed once. The big table
is multiple emissions for each county, the parameter for the emission,
and the source code for the emission (scc). Each county in the big table
has many entries, a variable number of pollutant types, and a variable
number of scc's.

small table:
fips (int), name(string), geom(geometry)
123, "some county", "some geometry"
124, "some other county", "some other geometry"
etc.

big table:
fips (int), pollutant(string), value(double), scc(int)
123, "co", 1000, 1000
123, "co", 1500, 1000
123, "co", 500, 1001
123, "co", 550, 1001
123, "co", 1500, 1002
123, "co", 50, 1002

123, "so2", 1100, 1000
123, "so2", 510, 1001
123, "so2", 1510, 1002
etc.

So the user supplies the pollutant, a number of scc's, and a minimum
value.
I need to select all the fips from the small table, as well as the sum
of values in the big table (for that pollutant and those scc codes), WHERE
the sum of the values in the big table (for that pollutant and those
scc codes) is larger than the given value.

So, for the above, say the user picked "co" for pollutant, and 1001 and
1002 for SCC.
The rows in the big table with 1001 OR 1002 for scc AND "co" for pollutant
are:
123, "co", 500, 1001
123, "co", 550, 1001
123, "co", 1500, 1002
123, "co", 50, 1002

I would return the 123 fips ONLY if the value provided was less than
the sum of the values for all those scc's (500+550+1500+50 = 2600), and I
would also return that sum.

so, return set would be
fips, name, value
123, "some county", 2600

for each fips in small table...

Boy I wonder if that makes sense, maybe some pseudo SQL speak would be
better:

SELECT small.fips, small.name, sum(big.value)
FROM small, big
WHERE
small.fips in (
SELECT fips from big
WHERE ((pollutant='co') AND
(
(scc LIKE '21%') OR (scc LIKE '2301%') OR (scc LIKE '2801000000%')
)
HAVING SUM(value > 2000)
)
GROUP BY small.fips, small.name;

This is the query that isn't returning yet.
If anyone has any questions, comments, or any suggestions at all, I'll
do my best to respond ASAP.
Any help is extremely appreciated!
-Ed
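
For what it's worth, a sketch of one way to express this (untested; it assumes
scc can be cast to text for the prefix matches, and 2000 stands in for the
user-supplied minimum). The pseudo-SQL above has no join condition between
small and big, so it builds a cross product, and the HAVING comparison belongs
after a GROUP BY on the aggregated subquery rather than inside SUM():

SELECT s.fips, s.name, b.total
FROM small s
JOIN (SELECT fips, SUM(value) AS total
      FROM big
      WHERE pollutant = 'co'
        AND (scc::text LIKE '21%'
             OR scc::text LIKE '2301%'
             OR scc::text LIKE '2801000000%')
      GROUP BY fips
      HAVING SUM(value) > 2000) b ON b.fips = s.fips;

-- Indexes such as CREATE INDEX big_pollutant_idx ON big (pollutant) and a
-- unique index on small (fips) would give the planner something to work with.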

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general