Wednesday, August 13, 2008

Re: [GENERAL] Postgres eats all memory

In response to "Bartels, Eric" <e.bartels@customsoft.de>:

> Hi there,
>
> we are running a fresh Postgres 8.3 installation with a single
> database with about 80GB of data.
>
> After a while the whole system memory is eaten up and every
> operation becomes very slow. Shortly after a system reboot
> and even without sending queries against the database the
> whole system memory is consumed after some time.
>
> Are there any settings that need to be set to avoid this?
> Currently the default settings are used ...
>
> The system is a Suse Enterprise Linux (64bit).

Provide some snapshots of the top command.

Default settings for PostgreSQL will not use all system memory; they're
actually too memory-conservative for most uses.

You're missing a TON of details here. I recommend you tell the list
how _much_ memory your system has, in addition to providing your
postgresql.conf file and a top snapshot demonstrating the problem.

My suspicion is one or more of the following:
1) You don't have very much RAM in your system and you're overloading
it with too many connections or something similar
2) You're running things other than PG on this system that are eating
RAM.
3) You're being fooled by the fact that Linux will use all the available
RAM all the time (which isn't particularly a bad thing)
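To illustrate point 3: a box can look nearly out of memory in "top" while most of that RAM is reclaimable page cache. A quick sketch (not from the original thread; the helper names are mine) that parses /proc/meminfo-style output and shows what is effectively available:

```python
# Illustrative only: on Linux, "used" memory often sits in the page cache,
# which the kernel reclaims on demand. This parses '/proc/meminfo'-style
# text and reports memory that is effectively available to applications.

def parse_meminfo(text):
    """Parse 'Key: value kB' lines into a dict of integer kB values."""
    info = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, rest = line.partition(":")
            parts = rest.split()
            if parts and parts[0].isdigit():
                info[key.strip()] = int(parts[0])
    return info

def effectively_available_kb(info):
    # Memory the kernel can hand back without swapping.
    return info.get("MemFree", 0) + info.get("Buffers", 0) + info.get("Cached", 0)

sample = """MemTotal:      8174432 kB
MemFree:        123456 kB
Buffers:        204800 kB
Cached:        6291456 kB"""

info = parse_meminfo(sample)
print(effectively_available_kb(info))  # free + buffers + cached
```

Here almost 8 GB of the box would count as "used" by a naive reading, yet most of it is cache.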

--
Bill Moran
Collaborative Fusion Inc.
http://people.collaborativefusion.com/~wmoran/

wmoran@collaborativefusion.com
Phone: 412-422-3463x4023

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [GENERAL] Postgres eats all memory

On 2008-08-13 10:06, Bartels, Eric wrote:

> After a while the whole system memory is eaten up and every
> operation becomes very slow.

Show us:

- output of "free" command, when server gets slow.

- output of "ps v --sort=-size | head -10"

- output of "ps auxww | grep postgres"

- in a terminal, start "top", type "fp", Enter, then "Fp", Enter; copy the
upper half of your terminal screen for us.

- what options did you change in postgresql.conf?

Regards
Tometzky
--
...although Eating Honey was a very good thing to do, there was a
moment just before you began to eat it which was better than when you
were...
Winnie the Pooh

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

[BUGS] BUG #4352: Service fails to start when moved from domain to workgroup

The following bug has been logged online:

Bug reference: 4352
Logged by: Bhaskar Sirohi
Email address: bhaskar.sirohi@druvaa.com
PostgreSQL version: 8.3
Operating system: Windows 2003 Server
Description: Service fails to start when moved from domain to
workgroup
Details:

It seems there is an issue with Postgres v8.3: if you move the Postgres
server from a domain to a workgroup, the service fails to start with a
logon failure.

If you right-click on the postgres service and check "Log On" in Properties, it
says ".\postgres". Even resetting the postgres service user's password doesn't
have any effect. Finally, if you change the "Log On" to "Local System
account", it works well.

Please advise what can be done in this case, and whether it would be
okay to run the postgres service under the local Windows account.

--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs

[pgadmin-hackers] SVN Commit by guillaume: r7403 - in trunk/www/locale: . fr_FR/LC_MESSAGES

Author: guillaume

Date: 2008-08-13 13:33:23 +0100 (Wed, 13 Aug 2008)

New Revision: 7403

Revision summary: http://svn.pgadmin.org/cgi-bin/viewcvs.cgi/?rev=7403&view=rev

Log:
Update website French translation (and the .pot file).

Modified:
trunk/www/locale/fr_FR/LC_MESSAGES/pgadmin3_website.mo
trunk/www/locale/fr_FR/LC_MESSAGES/pgadmin3_website.po
trunk/www/locale/pgadmin3_website.pot

--
Sent via pgadmin-hackers mailing list (pgadmin-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgadmin-hackers

[pgadmin-hackers] SVN Commit by guillaume: r7402 - in trunk/www/locale: fr_FR/LC_MESSAGES zh_CN/LC_MESSAGES

Author: guillaume

Date: 2008-08-13 13:31:05 +0100 (Wed, 13 Aug 2008)

New Revision: 7402

Revision summary: http://svn.pgadmin.org/cgi-bin/viewcvs.cgi/?rev=7402&view=rev

Log:
Automatic stringmerge using merge script.


Modified:
trunk/www/locale/fr_FR/LC_MESSAGES/pgadmin3_website.mo
trunk/www/locale/fr_FR/LC_MESSAGES/pgadmin3_website.po
trunk/www/locale/zh_CN/LC_MESSAGES/pgadmin3_website.mo
trunk/www/locale/zh_CN/LC_MESSAGES/pgadmin3_website.po

--
Sent via pgadmin-hackers mailing list (pgadmin-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgadmin-hackers

[GENERAL] SQL Tutorial

Hello,

I am new to SQL and I am looking for an SQL tutorial, maybe with a
bias toward PostgreSQL. I have been reading the PostgreSQL documentation
tutorial and I am finding it okay, but I would like something with exercises
at the end of each chapter, so that I can at least test myself ;). Does
anybody know of a link to a good tutorial? Also, any good titles would
be appreciated.

Thanks in advance...
kinuthiA
-
They call me an atheist but that's too narrow, it only defines what I do
NOT believe in, rather than what I believe in.
-- Isaac Asimov.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [GENERAL] Alias for function return buffer in pl/pgsql?

>
>
> Bonus question - if I rewrite the first FOR loop as:
>
>
>
> RETURN QUERY SELECT connection_id, connection_type_id, connector_node_id,
> connector_node_type_id, connectee_node_id,
>
> connectee_node_type_id, current, timestamp, $2
> + 1 FROM connections
>
> WHERE connection_type_id = 1 AND connector_node_id =
> ANY($1);

you have to cast. This code works:


postgres=# create type xxtp as (a integer, b varchar);
CREATE TYPE
Time: 6,458 ms
postgres=# create table xx(a integer, b varchar);
CREATE TABLE
Time: 54,053 ms
postgres=# insert into xx select 1, 'hhh';
INSERT 0 1
Time: 5,993 ms
postgres=# insert into xx select 1, 'hhh';
INSERT 0 1
Time: 3,393 ms
postgres=# insert into xx select 1, 'hhh';
INSERT 0 1


postgres=# create or replace function x() returns setof xxtp as $$begin return query select * from xx; return; end$$language plpgsql;
CREATE FUNCTION
Time: 4,392 ms
postgres=# select * from x();
a | b
---+-----
1 | hhh
1 | hhh
1 | hhh
(3 rows)
postgres=# create or replace function x() returns setof xxtp as
$$begin return query select 1,'kkk'; return; end$$language plpgsql;
CREATE FUNCTION
Time: 4,577 ms
postgres=# select * from x();
ERROR: structure of query does not match function result type
CONTEXT: PL/pgSQL function "x" line 1 at RETURN QUERY
postgres=# create or replace function x() returns setof xxtp as
$$begin return query select 1,'kkk'::varchar; return; end$$language
plpgsql;
CREATE FUNCTION
Time: 3,395 ms
postgres=# select * from x();
a | b
---+-----
1 | kkk
(1 row)

regards
Pavel Stehule
>
>
> I get "ERROR: structure of query does not match function result type", even
> though the type signatures of the returned columns match the
> "connection_generation" rowtype. I am pretty sure this could be resolved by
> casting the resulting columns to that row type, but I am lost as to how the
> syntax to do such a thing would look.
>
>
>
> Thanks in advance for the help, and keep up the great work. PG8.3 is an
> amazing piece of software and it blows me away how much more advanced it
> gets with every release.
>
>
>
> Bart Grantham
>
> VP of R&D
>
> Logicworks Inc. – Complex and Managed Hosting

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

[COMMITTERS] stackbuilder - wizard: Store proxy settings on *nix

Log Message:
-----------
Store proxy settings on *nix

Modified Files:
--------------
wizard:
IntroductionPage.cpp (r1.13 -> r1.14)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/stackbuilder/wizard/IntroductionPage.cpp.diff?r1=1.13&r2=1.14)
ProxyDialog.cpp (r1.4 -> r1.5)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/stackbuilder/wizard/ProxyDialog.cpp.diff?r1=1.4&r2=1.5)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

[HACKERS] Patch: propose to include 3 new functions into intarray and intagg

Hello.

Here are these functions with detailed documentation:
http://en.dklab.ru/lib/dklab_postgresql_patch/

intagg.int_array_append_aggregate(int[]): fast merge arrays into one large list
intarray._int_group_count_sort(int[], bool): frequency-based sorting
intarray.bidx(int[], int): binary search in a sorted array

Tested for about a year on a real PostgreSQL cluster (10 machines, Slony replication) under heavy load (millions of requests).
No crashes or memory problems were detected during that year, so I suppose these functions are well-tested.

What do you think about that?

Re: [GENERAL] Alias for function return buffer in pl/pgsql?

Hello

array_append is relatively slow. You can use an SRF function instead (I
am not sure if it's your case, but maybe).
postgres=# create or replace function buida(m int) returns int[] as
$$declare r int[] = '{}'; begin for i in 1..m loop r := r || i; end
loop; return r; end $$ language plpgsql strict immutable;
CREATE FUNCTION

postgres=# SELECT array_upper(buida(10000),1);
array_upper
-------------
10000
(1 row)

Time: 324,388 ms
postgres=# create or replace function buida(m int) returns int[] as
$$begin return array(select * from _buida($1)); end $$ language
plpgsql strict immutable;
CREATE FUNCTION
postgres=# create or replace function _buida(m int) returns setof int
as $$begin for i in 1..m loop return next i; end loop; return; end $$
language plpgsql strict immutable;
CREATE FUNCTION
postgres=# SELECT array_upper(buida(10000),1);
array_upper
-------------
10000
(1 row)

Time: 24,191 ms


2008/8/13 Bart Grantham <bg@logicworks.net>:
> Hello all, long time no chit-chat on the PG mailing list. We're upgrading
> from 8.0.3 to 8.3 and found that some stored procedures utilizing int_agg
> that we had left over from 7.3 had terrible performance. No problem, using
> ANY() we're able to regain that performance, more or less, and at the same
> time greatly simplify our stored procedures. But things can never be fast
> enough, can they? So I have a question or two. Here's my function for
> reference:
>
>
>
> CREATE OR REPLACE FUNCTION bg_nodes2descendants(INT[], INT) RETURNS SETOF
> connection_generation AS
>
> '
>
>
>
> DECLARE
>
> _row connection_generation%ROWTYPE;
>
> _children INT[];
>
>
>
> BEGIN
>
>
>
> -- this is faster than constructing in the loop below
>
> --_children = array(SELECT connectee_node_id FROM connections WHERE
> connection_type_id = 1 AND connector_node_id = ANY($1));
>
>
>
> FOR _row IN
>
> SELECT connection_id, connection_type_id, connector_node_id,
> connector_node_type_id, connectee_node_id,
>
> connectee_node_type_id, current, timestamp, $2 + 1
>
> FROM connections WHERE connection_type_id = 1 AND connector_node_id
> = ANY($1)
>
> LOOP
>
> _children := _children || _row.connectee_node_id;
>
> RETURN NEXT _row;
>
> END LOOP;
>
>
>
> IF FOUND THEN
>
> RETURN QUERY SELECT * FROM bg_nodes2descendants(_children, $2+1);
>
> END IF;
>
>
>
> RETURN;
>
> END
>
>
>
> ' LANGUAGE 'plpgsql';
>
>
>
> So, my concern is alluded to in the comment above. When I use this
> function in places where it returns large results, building the _children
> array directly (in the commented out line) is about 25% faster. But I'd
> like to avoid building the children array altogether and would instead like
> to generate that array from the already collected output rows. For example,
> right before the recursive call, I'd like to select a column of the buffered
> output rows, cast it to an integer[], and pass it into the recursive call.
> Is there an internal value I can access for this such as:
>
>
>
> _children := array(SELECT connectee_node_id FROM $output);
>
>
>
> Bonus question - if I rewrite the first FOR loop as:
>
>
>
> RETURN QUERY SELECT connection_id, connection_type_id, connector_node_id,
> connector_node_type_id, connectee_node_id,
>
> connectee_node_type_id, current, timestamp, $2
> + 1 FROM connections
>
> WHERE connection_type_id = 1 AND connector_node_id =
> ANY($1);
>
>
>
> I get "ERROR: structure of query does not match function result type", even
> though the type signatures of the returned columns match the
> "connection_generation" rowtype. I am pretty sure this could be resolved by
> casting the resulting columns to that row type, but I am lost as to how the
> syntax to do such a thing would look.

this syntax is correct; it's probably a PostgreSQL bug

regards
pavel stehule

>
>
>
> Thanks in advance for the help, and keep up the great work. PG8.3 is an
> amazing piece of software and it blows me away how much more advanced it
> gets with every release.
>
>
>
> Bart Grantham
>
> VP of R&D
>
> Logicworks Inc. – Complex and Managed Hosting

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [GENERAL] Disk space occupied by a table in postgresql

On Sat, 2008-08-09 at 04:59 -0400, Fouad Zaryouh wrote:
> Hi Aravind,
>
> Run the following query
>
> SELECT relname, reltuples, relpages * 8 / 1024 AS "MB" FROM pg_class
> ORDER BY relpages DESC;
>
>
>
> relname = table name
> relpages * 8 / 1024 = size in MB
> reltuples = (estimated) number of rows.
>
> Hope this helps.
>
>
>
> Fouad Zaryouh
>
> http://www.flipcore.com
>
>
>
>
> On Sat, Aug 9, 2008 at 3:18 AM, aravind chandu
> <avin_friends@yahoo.com> wrote:
> Hello,
>
> I installed PostgreSQL on a Linux system. I
> created a table and inserted a large amount of data into it. What I
> would like to know is how to calculate the disk space occupied
> by the table. Is there a procedure to find it out, or simply
> a command? Please give me some suggestions.
>
>
> Thank You,
> Avin.
>
>
>
>

This may be of use in recent versions...
select pg_size_pretty(pg_total_relation_size('table_name'));
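As a sanity check of the relpages arithmetic quoted above, assuming the default 8 kB block size (the helper name is mine):

```python
# relpages counts blocks, so MB = relpages * block_size_kb / 1024.
# Assumes the default 8 kB block size; a custom-built server may differ,
# and relpages is only an estimate updated by VACUUM/ANALYZE.

BLOCK_SIZE_KB = 8  # PostgreSQL default

def relpages_to_mb(relpages, block_kb=BLOCK_SIZE_KB):
    return relpages * block_kb // 1024

print(relpages_to_mb(1280))  # 1280 blocks of 8 kB = 10 MB
```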

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [HACKERS] C Extension woes

Tim Hawes wrote:
>
> text * pl_masterkey(PG_FUNCTION_ARGS)
> {
> char *e_var = getenv("PGMASTERKEY");
> size_t length = VARSIZE(e_var) - VARHDRSZ;
>
>

The VARSIZE macro is for variable length structures, like a text or
bytea which contains a length and data member. You are using this macro
on a regular C string "e_var". Try this instead:

size_t length = e_var != NULL ? strlen(e_var) : 0;

--
Andrew Chernow
eSilo, LLC
every bit counts
http://www.esilo.com/

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] Replay attack of query cancel

Gregory Stark wrote:
> "Magnus Hagander" <magnus@hagander.net> writes:
>
>> Yeah, that's the point that will require a protocol bump, I think. Since
>> there is no response to the cancel packet, we can't even do things like
>> sending in a magic key and look at the response (which would be a rather
>> ugly hack, but doable if we had a success/fail response to the cancel
>> packet).
>
> From the server point of view we could accept either kind of cancel message
> for the first cancel message and set a variable saying which to expect from
> there forward. If the first cancel message is an old-style message then we
> always expect old-style messages. If it's a new-style message then we require
> new-style messages and keep track of the counter to require a monotonically
> increasing counter.
>
> From the client point-of-view we have no way to know if the server is going to
> accept new-style cancel messages though. We could try sending the new-style
> message and see if we get an error (do we get an error if you send an invalid
> cancel message?).

No, that is the point I made above - we don't respond to the cancel
message *at all*.

> We could have the server indicate it's the new protocol by sending the initial
> cancel key twice. If the client sees more than one cancel key it automatically
> switches to new-style cancel messages.

That will still break things like JDBC I think - they only expect one
cancel message, and then move on to expect other things.

> Or we could just bump the protocol version.

Yeah, but that would kill backwards compatibility in that the new libpq
could no longer talk to old servers.

What would work is using a parameter field, per Stephen's suggestion
elsewhere in the thread. Older libpq versions should just ignore the
parameter if they don't know what it is. Question is, is that too ugly a
workaround, since we'll need to keep it around forever? (We have special
handling of a few other parameters already, so maybe not?)


//Magnus


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] C Extension woes

Tim Hawes wrote:
> Hello all,
>
> I am trying to write an extension in C that returns a simple environment
> variable. The code compiles without any complaint or warning, and it
> loads fine into the database, however, when I run the function, I get
> disconnected from the server.
>
> Here is my C code:
>
> #include <postgres.h>
> #include <fmgr.h>
> PG_MODULE_MAGIC;
>
> #include <stdio.h>
> #include <stdlib.h>
>
> PG_FUNCTION_INFO_V1(pl_masterkey);
>
> text * pl_masterkey(PG_FUNCTION_ARGS)
> {
> char *e_var = getenv("PGMASTERKEY");
> size_t length = VARSIZE(e_var) - VARHDRSZ;
>
> text * mkey = (text *) palloc(length);
> VARATT_SIZEP(mkey) = length;
> memcpy(VARDATA(mkey), e_var, length);
>
> return mkey;
> }

Oh, you confused a lot of things.
You need something like

Datum pl_masterkey(PG_FUNCTION_ARGS) {
char *e_var = getenv("PGMASTERKEY");
PG_RETURN_TEXT_P(cstring_to_text(e_var));
}

You don't need to mess with anything varlena-related (like VARSIZE);
it's all handled for you.
Also, read up on how to declare user-defined C functions in Postgres
(they always need to return Datum).

Cheers,
Jan

--
Jan Urbanski
GPG key ID: E583D7D2

ouden estin

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [GENERAL] psql tutorial

On 12/08/2008 17:20, Porkodi Yesu wrote:

> Please get me PostgreSQL psql tutorial.

http://www.postgresql.org/docs/8.3/static/app-psql.html

Ray.

------------------------------------------------------------------
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
rod@iol.ie
Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals
------------------------------------------------------------------

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

[COMMITTERS] stackbuilder - wizard: Cleanup allocated memory properly

Log Message:
-----------
Cleanup allocated memory properly

Modified Files:
--------------
wizard:
App.cpp (r1.24 -> r1.25)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/stackbuilder/wizard/App.cpp.diff?r1=1.24&r2=1.25)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

[COMMITTERS] stackbuilder - wizard: Store the download path for reuse on *nix

Log Message:
-----------
Store the download path for reuse on *nix

Modified Files:
--------------
wizard:
DownloadPage.cpp (r1.13 -> r1.14)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/stackbuilder/wizard/DownloadPage.cpp.diff?r1=1.13&r2=1.14)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

[HACKERS] C Extension woes

Hello all,

I am trying to write an extension in C that returns a simple environment
variable. The code compiles without any complaint or warning, and it
loads fine into the database, however, when I run the function, I get
disconnected from the server.

Here is my C code:

#include <postgres.h>
#include <fmgr.h>
PG_MODULE_MAGIC;

#include <stdio.h>
#include <stdlib.h>

PG_FUNCTION_INFO_V1(pl_masterkey);

text * pl_masterkey(PG_FUNCTION_ARGS)
{
char *e_var = getenv("PGMASTERKEY");
size_t length = VARSIZE(e_var) - VARHDRSZ;

text * mkey = (text *) palloc(length);
VARATT_SIZEP(mkey) = length;
memcpy(VARDATA(mkey), e_var, length);

return mkey;
}

And here is the SQL I use to create the function in PostgreSQL:

CREATE FUNCTION pl_masterkey() RETURNS text
AS 'pl_masterkey', 'pl_masterkey'
LANGUAGE C STRICT;

And the results:

select pl_masterkey();
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
!>

Thanks ahead of time for any and all help.


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[COMMITTERS] stackbuilder - wizard: Fix minor layout issue

Log Message:
-----------
Fix minor layout issue

Modified Files:
--------------
wizard:
IntroductionPage.cpp (r1.12 -> r1.13)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/stackbuilder/wizard/IntroductionPage.cpp.diff?r1=1.12&r2=1.13)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

[COMMITTERS] stackbuilder - wizard: App version number management for *nix

Log Message:
-----------
App version number management for *nix

Modified Files:
--------------
wizard:
App.cpp (r1.23 -> r1.24)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/stackbuilder/wizard/App.cpp.diff?r1=1.23&r2=1.24)
IntroductionPage.cpp (r1.11 -> r1.12)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/stackbuilder/wizard/IntroductionPage.cpp.diff?r1=1.11&r2=1.12)
wizard/include:
Config.h (r1.3 -> r1.4)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/stackbuilder/wizard/include/Config.h.diff?r1=1.3&r2=1.4)
StackBuilder.h (r1.5 -> r1.6)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/stackbuilder/wizard/include/StackBuilder.h.diff?r1=1.5&r2=1.6)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

Re: [HACKERS] Replay attack of query cancel

"Magnus Hagander" <magnus@hagander.net> writes:

> Yeah, that's the point that will require a protocol bump, I think. Since
> there is no response to the cancel packet, we can't even do things like
> sending in a magic key and look at the response (which would be a rather
> ugly hack, but doable if we had a success/fail response to the cancel
> packet).

From the server point of view we could accept either kind of cancel message
for the first cancel message and set a variable saying which to expect from
there forward. If the first cancel message is an old-style message then we
always expect old-style messages. If it's a new-style message then we require
new-style messages and keep track of the counter to require a monotonically
increasing counter.

From the client point-of-view we have no way to know if the server is going to
accept new-style cancel messages though. We could try sending the new-style
message and see if we get an error (do we get an error if you send an invalid
cancel message?).

We could have the server indicate it's the new protocol by sending the initial
cancel key twice. If the client sees more than one cancel key it automatically
switches to new-style cancel messages.

Or we could just bump the protocol version.

--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's 24x7 Postgres support!

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[pgsql-es-ayuda] ROW constructor is not compatible with RECORD type

I was surprised to find that the "row" type (created with the ROW constructor) is not compatible with the RECORD type ... is there any alternative?

Regards,

RAUL DUQUE
Bogotá, Colombia

Re: [GENERAL] How to get many data at once?

Try to use

SELECT ARRAY(SELECT t_data FROM THETABLE WHERE t_ref_id = '1') AS v;

In PHP you may fetch all matched values as a single string and then use explode() to split it into values (possibly followed by stripslashes()).
It is much faster than fetching thousands of rows.
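The client-side split can be sketched as follows (in Python rather than PHP, and deliberately ignoring quoted elements and escape sequences, so it only works for simple values):

```python
# Simplified parser for a PostgreSQL array literal like '{abc,321,ddd}'.
# Handles only unquoted scalar elements; real output may contain quoted
# values, commas, or escapes, which this sketch deliberately ignores.

def split_pg_array(literal):
    inner = literal.strip()
    if not (inner.startswith("{") and inner.endswith("}")):
        raise ValueError("not an array literal: %r" % literal)
    inner = inner[1:-1]
    return inner.split(",") if inner else []

print(split_pg_array("{abc,ddd}"))  # ['abc', 'ddd']
print(split_pg_array("{}"))         # []
```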


On Thu, Aug 7, 2008 at 3:03 PM, 窦德厚(ddh) <doudehou@gmail.com> wrote:
Hi, if I have such a table:

t_ref_id     t_data
--------------------
1             'abc'
2             '321'
1             'ddd'
2             'xyz'
9             '777'
...


I want to get the data for a specific t_ref_id:

SELECT t_data FROM THETABLE WHERE t_ref_id = '1';

I must use a while loop to extract the data (I'm using PHP):

$rows = array();
while (($row = pg_fetch_assoc($result)) !== false) {
    $rows[] = $row;
}

And if there are many matched rows, such as hundreds or thousands of rows, I think such a loop may be inefficient.

How to do this in a more efficient way?

Thank you!



--
ddh


Re: [PERFORM] Filesystem benchmarking for pg 8.3.3 server

On Tue, 12 Aug 2008, Ron Mayer wrote:
> Really old software (notably 2.4 Linux kernels) didn't send
> cache-synchronizing commands for either SCSI or ATA;

Surely not true. Write cache flushing has been a known problem in the
computer science world for decades. The difference is that
in the past we only had a "flush everything" command, whereas now we have a
"flush everything before the barrier before everything after the barrier"
command.

Matthew

--
"To err is human; to really louse things up requires root
privileges." -- Alexander Pope, slightly paraphrased

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

Re: [HACKERS] Replay attack of query cancel

Tom Lane wrote:
>[ thinks for a bit... ] You could make it a change in the cancel
>protocol, which is to some extent independent of the main FE/BE
>protocol. The problem is: how can the client know whether it's okay to
>use this new protocol for cancel?

Two options:
a. Send two cancelkeys in rapid succession at session startup, whereas
the first one is 0 or something. The client can detect the first
"special" cancelkey and then knows that the connection supports
cancelmethod 2.
b. At sessionstartup, advertise a new runtimeparameter:
cancelmethod=plainkey,hmaccoded
which the client can then chose from.

I'd prefer b over a.
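Option b could look roughly like this on the client side. This is an illustrative sketch only, not actual libpq code; the parameter and method names are taken from the proposal above:

```python
# Sketch of option (b): the server advertises a "cancelmethod" runtime
# parameter; an old client ignores it, a new client picks the strongest
# method it understands. All names here are illustrative only.

PREFERRED_ORDER = ["hmaccoded", "plainkey"]  # strongest first

def choose_cancel_method(server_params):
    advertised = server_params.get("cancelmethod", "plainkey").split(",")
    for method in PREFERRED_ORDER:
        if method in advertised:
            return method
    return "plainkey"  # fall back to the old-style cancel message

# New server advertising both methods:
print(choose_cancel_method({"cancelmethod": "plainkey,hmaccoded"}))  # hmaccoded
# Old server that never sends the parameter:
print(choose_cancel_method({}))  # plainkey
```

An old libpq simply never sees the parameter and keeps sending plain cancel keys, which is what makes this variant backwards compatible.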
--
Sincerely,
Stephen R. van den Berg.

"And now for something *completely* different!"

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [pgadmin-hackers] pgScript patch

2008/8/12 Guillaume Lelarge <guillaume@lelarge.info>:
> I also prefer the DLL idea. Not sure I understand the interest of pgscript
> in pgAdmin, though...

Hi,
Do you have instructions or advice on doing so?
Mickael

--
Sent via pgadmin-hackers mailing list (pgadmin-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgadmin-hackers

Re: [HACKERS] SeqScan costs

On Tue, 2008-08-12 at 23:22 -0400, Tom Lane wrote:
> Gregory Stark <stark@enterprisedb.com> writes:
> >> On Tue, 2008-08-12 at 15:46 -0400, Tom Lane wrote:
> >>> This is only going to matter for a table of 1 block (or at least very
> >>> few blocks), and for such a table it's highly likely that it's in RAM
> >>> anyway. So I'm unconvinced that the proposed change represents a
> >>> better model of reality.
>
> > I think the first block of a sequential scan is clearly a random access. If
> > that doesn't represent reality well then perhaps we need to tackle both
> > problems together.
>
> The point I was trying to make (evidently not too well) is that fooling
> around with fundamental aspects of the cost models is not something that
> should be done without any evidence. We've spent ten years getting the
> system to behave reasonably well with the current models, and it's quite
> possible that changing them to be "more accurate" according to a
> five-minute analysis is going to make things markedly worse overall.
>
> I'm not necessarily opposed to making this change --- it does sound
> kinda plausible --- but I want to see some hard evidence that it does
> more good than harm before we put it in.

psql -f seq.sql -v numblocks=2 -v pkval=Anything -v filler=Varies

When numblocks=2 I consistently see that an index scan is actually
faster than a seqscan, yet the planner chooses a seqscan in all cases.

This is true for any value of pkval and values of filler up to 4-500
bytes. We already take into account the length of rows because we
estimate the CPU costs per row not per block. That is not what I wish to
change.

This same situation occurs for all small tables. What I conclude is that
the "disk cost" swamps the CPU costs and so we end up with a seq scan
when we really want an index scan.

There are two ways of looking at this
* we work out a complex scheme for knowing when to remove disk costs
* we realise that the "disk cost" is actually the same on the *first*
block whether we are in memory or on disk.

If we take the second way, then we have a small but crucial correction
factor that produces better plans in most cases on small tables. Doing
it this way allows us to not worry about the caching, but just have a
scheme that balances the access costs better so that although they are
still present in the total cost the final plan choice is less dependent
upon the disk cost and more dependent upon the CPU costs.

This analysis is the result of experience, then measurement, not theory.
I've been looking for an easy and justifiable way to nudge the cost
factors so that they work better for small tables.

run_cost += random_page_cost + seq_page_cost * (baserel->pages - 1);
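With the default cost constants (seq_page_cost = 1.0, random_page_cost = 4.0), a toy calculation of the two models looks like this. This is not planner code, just arithmetic on the formula above:

```python
# Toy comparison of seqscan disk-cost models for small tables,
# using the default cost constants. Not actual planner code.

seq_page_cost = 1.0
random_page_cost = 4.0

def old_run_cost(pages):
    # Current model: every block priced as sequential I/O.
    return seq_page_cost * pages

def proposed_run_cost(pages):
    # Proposed model: first block priced as a random access,
    # the rest as sequential.
    return random_page_cost + seq_page_cost * (pages - 1)

for pages in (1, 2, 100):
    print(pages, old_run_cost(pages), proposed_run_cost(pages))
# For 1-2 block tables the proposed cost is noticeably higher, nudging
# the planner toward an index scan; at 100 pages the difference (3.0)
# is negligible relative to the total.
```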


> > People lower random_page_cost because we're not doing a good job estimating
> > how much of a table is in cache.
>
> Agreed, the elephant in the room is that we lack enough data to model
> caching effects with any degree of realism.

I'm specifically talking about a proposal that works whether or not the
first block of the table is in cache, because I see a problem with small
table access.

I'm not suggesting that we model caching effects (though we may choose
to later). If you did, you might need to consider cross-statement
effects such as the likelihood that a UPDATE .. WHERE CURRENT OF CURSOR
is more likely to find the block in cache, or other effects such as
certain MFVs might actually be more likely to be in cache than non-MFVs
and so index scans against them are actually more preferable than it
might appear.

--
Simon Riggs www.2ndQuadrant.com
PostgreSQL Training, Services and Support


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[DOCS] pg_standby supported versions

I noticed in the docs for pg_standby (docs/src/sgml/pgstandby.sgml) that
we state supported versions are >= 8.2. However, it does seem to work OK with
earlier versions (e.g. 8.1) - or am I missing something?

cheers

Mark

--
Sent via pgsql-docs mailing list (pgsql-docs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-docs

Re: [pgadmin-support] Feature request: print function

On 13/08/2008 09:04, Dave Page wrote:
> On Tue, Aug 12, 2008 at 7:00 PM, Raymond O'Donnell <rod@iol.ie> wrote:
>> Any chance of File -> Print in the query editor? :-)
>
> Every time I've looked at printing in wxWidgets, I've run away
> screaming :-(. I'll put it on the list, but no promises...

Fair 'nuff. It's more a "would be nice" sort of thing, rather than a
deal-breaker. :-)

Ray.

------------------------------------------------------------------
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
rod@iol.ie
Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals
------------------------------------------------------------------

--
Sent via pgadmin-support mailing list (pgadmin-support@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgadmin-support

Re: [HACKERS] Transaction-controlled robustness for replication

Hi,

Robert Hodges wrote:
> Part of this is semantics—I like Simon's logical vs. physical
> terminology because it distinguishes neatly between replication that
> copies implementation down to OIDs etc. and replication that copies data
> content including schema changes but not implementation.

So far, these terms have mostly caused confusion for me: "logical
replication using WAL shipping", "physical replication, but logical
application"...

As Simon hasn't explained in more detail what he has in mind, we all
have our own, quite different interpretations. These terms obviously
haven't helped to clarify the issue so far.

> It seems a
> noble goal to get both to work well, as they are quite complementary.

Agreed.

> There are various ways to get information to recapitulate SQL, but
> piggy-backing off WAL record generation has a lot of advantages. You at
> least have the data structures and don't have to reverse-engineer log
> information on disk. Of the multiple ways to build capable logical
> replication solutions, this seems to involve the least effort.

We even have the real tuple, which is about the closest you can get to
a "logical representation". Using that clearly requires less effort
than converting a WAL record back to a logical tuple.

For example, it allows the optimization of sending only the differences
from the old tuple for UPDATEs, instead of always sending full tuples - see
Postgres-R for a partially working implementation.

> My company is currently heads down building a solution for Oracle based
> on reading REDO log files. It requires a master of Oracle dark arts to
> decode them and is also purely asynchronous.

That sounds pretty challenging. Good luck!

Regards

Markus Wanner



Re: [GENERAL] How to modify ENUM datatypes? (The solution)

About the LGPL - I don't know.
But the license is not a problem; this code is effectively freeware (because it's so simple).
The LGPL has just been my favorite license for years. :-)

I'll change it if you prefer another license and explain why (why BSD? Is BSD the PostgreSQL license?)


On Wed, Aug 13, 2008 at 4:25 AM, Merlin Moncure <mmoncure@gmail.com> wrote:
On Tue, Aug 12, 2008 at 5:40 PM, Dmitry Koterov <dmitry@koterov.ru> wrote:
> Here is the solution about "on the fly" ALTER ENUM:
> http://en.dklab.ru/lib/dklab_postgresql_enum/
>
> Usage:
>
> -- Add a new element to the ENUM "on the fly".
>
> SELECT enum.enum_add('my_enum', 'third');
>
> -- Remove an element from the ENUM "on the fly".
> SELECT enum.enum_del('my_enum', 'first');
>
> Possibly future versions of PostgreSQL will include built-in ALTER TYPE for
> ENUM, all the more its implementation is not impossible, as you see above.
> Hope this will be helpful.

Decent user-space solution...it's easy enough.  IMO the 'real' solution is
through ALTER TYPE, as you suggest.  It's worth noting that there is
no handling for the unlikely but still possible event of OID
wraparound.  Also, there is no 'enum_insert', which is not so pleasant
given how enums are implemented.

Also, is the LGPL compatible with the BSD license? Not that it matters, but I'm curious.

merlin

Re: [HACKERS] Replay attack of query cancel

Tom Lane wrote:
> Magnus Hagander <magnus@hagander.net> writes:
>> Andrew Gierth wrote:
>>> That's easily solved: when the client wants to do a cancel, have it
>>> send, in place of the actual cancel key, an integer N and the value
>>> HMAC(k,N) where k is the cancel key. Replay is prevented by requiring
>>> the value of N to be strictly greater than any previous value
>>> successfully used for this session. (Since we already have md5 code,
>>> HMAC-MD5 would be the obvious choice.)
>
>> I like this approach.
>
> It's not a bad idea, if we are willing to change the protocol.
>
>> If we don't touch the protocol version, we could in theory at least
>> backpatch this as a fix for those who are really concerned about this
>> issue.
>
> Huh? How can you argue this isn't a protocol change?

Um. By looking at it only from the backend perspective? *blush*


> [ thinks for a bit... ] You could make it a change in the cancel
> protocol, which is to some extent independent of the main FE/BE
> protocol. The problem is: how can the client know whether it's okay to
> use this new protocol for cancel?

Yeah, that's the point that will require a protocol bump, I think. Since
there is no response to the cancel packet, we can't even do things like
sending a magic key and looking at the response (which would be a rather
ugly hack, but doable if we had a success/fail response to the cancel
packet).

I guess bumping the protocol to 3.1 pretty much kills any chance of a
backpatch, though :( since a "new libpq" would no longer be able to talk
to an old server, if I remember the logic correctly?
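As an illustration of Andrew's scheme (a hypothetical Python sketch of the idea, not actual libpq or backend code):

```python
import hashlib
import hmac

def make_cancel_request(cancel_key, n):
    # Client side: send N plus HMAC-MD5(k, N) instead of the raw cancel key.
    mac = hmac.new(cancel_key, str(n).encode(), hashlib.md5).digest()
    return n, mac

class CancelVerifier:
    # Server side: accept a cancel request only if the MAC matches and N
    # is strictly greater than any N previously accepted for this session,
    # so a sniffed request cannot be replayed.
    def __init__(self, cancel_key):
        self.key = cancel_key
        self.last_n = -1

    def verify(self, n, mac):
        expected = hmac.new(self.key, str(n).encode(), hashlib.md5).digest()
        if n <= self.last_n or not hmac.compare_digest(mac, expected):
            return False  # replayed, stale, or forged request
        self.last_n = n
        return True
```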

//Magnus


Re: [HACKERS] Transaction-controlled robustness for replication

On Tue, 2008-08-12 at 13:33 -0400, Bruce Momjian wrote:
> Simon Riggs wrote:
> > > > > with values of:
> > > > >
> > > > > nothing: have network traffic send WAL as needed
> > > > > netflush: wait for flush of WAL network packets to slave
> > > > > process: wait for slave to process WAL traffic and
> > > > > optionally fsync
> > > >
> > > > Suggest
> > > > async
> > > > syncnet
> > > > syncdisk
> > >
> > > I think the first two are fine, but 'syncdisk' might be wrong if the slave
> > > has 'synchronous_commit = off'. Any ideas?
> >
> > Yes, synchronous_commit can be set in the postgresql.conf, but its great
> > advantage is it is a userset parameter.
> >
> > The main point of the post is that the parameter would be transaction
> > controlled, so *must* be set in the transaction and thus *must* be set
> > on the master. Otherwise the capability is not available in the way I am
> > describing.
>
> Oh, so synchronous_commit would not control WAL sync on the slave? What
> about our fsync parameter? Because the slave is read-only, I saw no
> disadvantage of setting synchronous_commit to off in postgresql.conf on
> the slave.

The setting of synchronous_commit will be important if the standby
becomes the primary. I can see many cases where we might want "syncnet"
mode (i.e. no fsync of WAL data to disk on the standby) and yet want
synchronous_commit = on when it becomes the primary.

So if we were to use the same parameter for both, it would be confusing.

> > synchronous_commit applies to transaction commits. The code path would
> > be completely different here, so having parameter passed as an info byte
> > from master will not cause code structure problems or performance
> > problems.
>
> OK, I was just trying to simplify it.

I understand why you've had those thoughts, and I commend the lateral
thinking. I just don't think that on this occasion we've discovered any
better way of doing it.

> The big problem with an async
> slave is that not only would you have lost data in a failover, but the
> database might be inconsistent, like fsync = off, which is something I
> think we want to try to avoid, which is why I was suggesting
> synchronous_commit = off.
>
> Or were you thinking of always doing fsync on the slave, no matter what.
> I am worried the slave might not be able to keep up (being
> single-threaded) and therefore we should allow a way to async commit on
> the slave.

Bit confused here. I've not said I want always-async, nor have I
said I want always-sync.

The main thing is we agree there will be 3 settings, including two
variants of synchronous replication: one fairly safe and one ultra-safe.

For the ultra-safe mode we really need to see how synchronous replication
will work before we comment on where we might introduce fsyncs. I'm presuming
that incoming WAL will be written to WAL files (and optionally fsynced).
You might be talking about applying WAL records to the database and then
fsyncing them, but we do need to allow for crash recovery of the standby
server, so the data must be synced to the WAL files before it is synced to
the database.

> Certainly if the master is async sending the data, there is
> no need to do a synchronous_commit on the slave.

Agreed

--
Simon Riggs www.2ndQuadrant.com
PostgreSQL Training, Services and Support



Re: [HACKERS] temporary statistics option at initdb time

Tom Lane wrote:
> Decibel! <decibel@decibel.org> writes:
>> I disagree. While we don't guarantee stats are absolutely up-to-date,
>> or atomic I don't think that gives license for them to just magically
>> not exist sometimes.
>
>> Would it really be that hard to have the system copy the file out
>> before telling all the other backends of the change?
>
> Well, there is no (zero, zilch, nada) use-case for changing this setting
> on the fly. Why not make it a "frozen at postmaster start" GUC? Seems
> like that gets all the functionality needed and most of the ease of use.

Oh, there is a use-case: if you run your system and only afterwards
realize that the I/O from the stats file is high enough to be an issue,
and want to change it.

That said, I'm not sure the use-case is anywhere near common enough to
put a lot of code into it.

But I can certainly look at making it a startup GUC. As you say, that'll
solve *most* of the cases.

//Magnus


Re: [HACKERS] Transaction-controlled robustness for replication

Hi,

Alvaro Herrera wrote:
> Actually I think the idea here is to take certain WAL records, translate
> them into "portable" constructs, ship them,

At which point it clearly shouldn't be called a WAL shipping method.
What would it have to do with the WAL at all, then? Why translate from
WAL records at all? Better to use the real tuples right away. (Almost
needless to say here, but obviously Postgres-R does it that way.)

So far, Simon really seems to mean WAL shipping: "it allows WAL to be
used as the replication transport", see [1].

Regards

Markus Wanner

[1]: mail to -hackers from Simon, Subject: "Plans for 8.4":
http://archives.postgresql.org/pgsql-hackers/2008-07/msg01010.php


Re: [pgadmin-support] Postgres Plus menu pick

On Tue, Aug 12, 2008 at 9:14 PM, Masis, Alexander (US SSA)
<alexander.masis@baesystems.com> wrote:
> Hello,
>
> I have installed Postgres Plus via an SSH connection using X server. The
> install was fine; however, there is a manual here:
>
> http://www.enterprisedb.com/learning/tutorials/lininstall.do

This list is for pgAdmin support. Please post queries regarding
Postgres Plus to the EnterpriseDB support forums
(http://forums.enterprisedb.com).

Regards, Dave.


[pgadmin-hackers] SVN Commit by dpage: r7401 - trunk/www/development

Author: dpage

Date: 2008-08-13 09:07:12 +0100 (Wed, 13 Aug 2008)

New Revision: 7401

Revision summary: http://svn.pgadmin.org/cgi-bin/viewcvs.cgi/?rev=7401&view=rev

Log:
Roadmap update

Modified:
trunk/www/development/roadmap.php

--
Sent via pgadmin-hackers mailing list (pgadmin-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgadmin-hackers

[GENERAL] Postgres eats all memory

Hi there,

we are running a fresh Postgres 8.3 installation with a single
database with about 80GB of data.

After a while the whole system memory is eaten up and every
operation becomes very slow. Shortly after a system reboot
and even without sending queries against the database the
whole system memory is consumed after some time.

Are there any settings that need to be set to avoid this?
Currently the default settings are used ...

The system is a Suse Enterprise Linux (64bit).


Kind regards
Eric Bartels

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [pgadmin-support] Feature request: print function

On Tue, Aug 12, 2008 at 7:00 PM, Raymond O'Donnell <rod@iol.ie> wrote:
> Hi all,
>
> It'd be handy to be able to print the contents of the query editor - the
> SQL, I mean, not the results.
>
> Any chance of File -> Print in the query editor? :-)

Every time I've looked at printing in wxWidgets, I've run away
screaming :-(. I'll put it on the list, but no promises...

--
Dave Page
EnterpriseDB UK: http://www.enterprisedb.com


Re: [GENERAL] mac install question

On Thu, Jul 24, 2008 at 6:51 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Dave Page <dpage@pgadmin.org> writes:
>>> What are you using it for that you need it to be present at install
>>> time?
>
>> The linker hardcodes library paths into exes and libs. We examine
>> these paths at install time using otool and rewrite them from the
>> staging paths on the build machine to whatever directory the user
>> chose to install to using install_name_tool(1).
>
>> The other option would be to rewrite the paths to be relative at build
>> time I guess.
>
> Relative paths sound like the best solution to me, assuming they work.

For info, the latest version of the installer (download from
http://www.enterprisedb.com/products/pgdownload.do) fixes this
problem.

--
Dave Page
EnterpriseDB UK: http://www.enterprisedb.com


Re: [HACKERS] Transaction-controlled robustness for replication

Hi,

Robert Hodges wrote:
> I like Simon's logical vs. physical terminology

So far, it seems mainly to have caused confusion (physical replication,
but logical application? logical replication using WAL shipping?). At
least I still prefer the more meaningful and descriptive terms, like
"log shipping", "statement based replication" or "row based replication".

But maybe what Simon is about to propose just doesn't fit into any of
those categories. I have a similar problem with Postgres-R, which is
somewhere in between synchronous and asynchronous.

Regards

Markus Wanner



Re: [pgadmin-support] Newbie [CentOS 5.2] initdb

jcvlz wrote:
> On Tue, Aug 12, 2008 at 10:44 AM, Raymond O'Donnell <rod@iol.ie> wrote:
>> On 12/08/2008 15:11, Daneel wrote:
>>
>>> While going through
>>> http://wiki.postgresql.org/wiki/Detailed_installation_guides
>>> and typing
>>> service postgresql start
>>> as root I got
>>> "/var/lib/pgsql/data is missing. Use "service postgresql initdb" to
>>> initialize the cluster first."
>> You should re-post to the pgsql-general mailing list, as you're more likely
>> to get an answer there; this one is for PgAdmin.
>>
>> Ray.
>>
>
>
> Daneel,
>
> I didn't see a post on another list, so I'm guessing you're still
> having problems. I'm not sure which tutorial you were using from the wiki
> you linked to, but section 15.1 in the docs sums up
> initialization, and the rest of chapter 15 explains what each command
> is doing.
>
> My guess is that you still need to create the default postgres DB.
>
> http://www.postgresql.org/docs/8.3/interactive/installation.html
>
> -jcvlz
>
Thank you both. I've posted it on pgsql-general; any tips on the topic
are welcome. I use 8.3.1 installed from RPM packages (only the 3 necessary
ones: postgresql-libs, postgresql and postgresql-server).

Daneel


Re: [GENERAL] Newbie [CentOS 5.2] service postgresql initdb

Daneel wrote:
> While going through
> http://wiki.postgresql.org/wiki/Detailed_installation_guides
> and typing
> service postgresql start
> as root I got
> "/var/lib/pgsql/data is missing. Use "service postgresql initdb" to
> initialize the cluster first."
>
> When I run
> service postgresql initdb
> I get
> "se: [FAILED]".
> However, /var/lib/pgsql/data is created and user postgres owns it.
>
> But then I run
> service postgresql start
> and the very same error occurs..
>
> Daneel

Should add that the version is 8.3.1 and I've installed it using RPM
packages... Thanks in advance for any tip...

Daneel
