Thursday, September 25, 2008

Re: [GENERAL] problem with custom_variable_classes

Taras Kopets wrote:
> now you have to initialize this variable once per session before usage:
>
> SELECT set_config('olap.rownum_name', '<value>', false);

Ok, thanks. That has fixed my problem.
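For reference, set_config takes three arguments (setting_name, new_value, is_local); a minimal sketch, where the value 'rownum' is only a placeholder and not from the thread:

```sql
-- set_config(setting_name, new_value, is_local); with is_local = false the
-- setting persists for the whole session rather than only the current transaction.
SELECT set_config('olap.rownum_name', 'rownum', false);

-- read the custom variable back
SELECT current_setting('olap.rownum_name');
```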

Malcolm


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

[ADMIN] postgres at reboot

Hi Everybody,

About 1.5 months ago, my machine (which runs Red Hat Linux,
kernel 2.6.9-78.0.1.ELsmp, on Dell hardware with Postgres 8.3.3)
had a terrible crash. I am mostly recovered, but there
is at least one more thing that's not right.

Namely, when the machine gets rebooted, postgres doesn't
start automatically. Before the crash, there was no such
problem.

I see a file /etc/rc.d/init.d/postgresql. Is this the
start-up script? If not, please tell me what would be the
correct one (and where would I find one). If it is, can you
please tell me where I would find the latest one anyway?
(Maybe the latest one got lost and I am looking at an old
copy???)

The detail of how it happens escapes me, but as near as I
can reconstruct in my head, after the reboot I discover that
postgres is not running and I issue:
pg_ctl start
and it complains, something to the effect that it is already
started (which seems to be untrue), and maybe it gives me
a pid for the postmaster. I think I tried:
pg_ctl stop
and it told me something a bit disagreeable. But I found a
file with that pid in the /tmp directory, and when I removed that
file, "pg_ctl start" succeeded.
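The recovery described above amounts to a stale-pid check, which can be sketched like this (the path and pid below are illustrative stand-ins, not the real lock file):

```shell
# Sketch: a pid file is "stale" when no process with the recorded pid is alive.
PIDFILE=/tmp/postmaster_demo.pid      # hypothetical stand-in for the real file
echo 4194999 > "$PIDFILE"             # simulate a pid left over from a crash

if kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
    echo "a postmaster really is running; do not delete the file"
else
    echo "stale pid file, removing: $PIDFILE"
    rm -f "$PIDFILE"                  # after this, "pg_ctl start" can proceed
fi
```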

If this sounds familiar to you (or you have some clue as to
what is going on), I would appreciate a tip.

Regards,

Tena Sakai
tsakai@gallo.ucsf.edu

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin

Re: [PERFORM] Slow updates, poor IO

On Thursday 25 September 2008, John Huttley <John@mib-infotech.co.nz> wrote:
>
> Comments anyone?

Don't do full table updates? This is not exactly a news flash.


--
Alan

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

Re: [PERFORM] Slow updates, poor IO

On Thu, Sep 25, 2008 at 1:24 PM, John Huttley <John@mib-infotech.co.nz> wrote:
> I've just had an interesting encounter with the slow full-table update
> problem that is inherent in MVCC.
>
> The system is 64 bit linux with 2.6.25 kernel feeding scsi disks.
>
> the table is
>
> CREATE TABLE file (
> fileid integer NOT NULL,
> fileindex integer DEFAULT 0 NOT NULL,
> jobid integer NOT NULL,
> pathid integer NOT NULL,
> filenameid integer NOT NULL,
> markid integer DEFAULT 0 NOT NULL,
> lstat text NOT NULL,
> md5 text NOT NULL,
> perms text
> );
>
> ALTER TABLE ONLY file
> ADD CONSTRAINT file_pkey PRIMARY KEY (fileid);
>
> CREATE INDEX file_fp_idx ON file USING btree (filenameid, pathid);
> CREATE INDEX file_jobid_idx ON file USING btree (jobid);
>
> There are 2.7M rows.
>
> running update file set perms='0664' took about 10 mins

So, how many rows would already be set to 0664? Would adding a where
clause speed it up?

update file set perms='0664' where perms <> '0664';
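Since perms is a nullable column, a null-safe variant of that where clause would also catch rows where perms is NULL; a sketch:

```sql
-- IS DISTINCT FROM treats NULL as distinct from '0664', so NULL rows are
-- updated too, while rows already set to '0664' are still skipped.
UPDATE file SET perms = '0664' WHERE perms IS DISTINCT FROM '0664';
```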

> during this period, vmstat reported Blocks Out holding in the 4000 to 6000
> range.
>
>
> When I dropped the indexes this query ran in 48sec.
> Blocks out peaking at 55000.
>
> So there is a double whammy:
> MVCC requires more work to be done when indexes are defined, and this
> extra work results in much lower IO, compounding the problem.

That's because the access pattern becomes more random and less
sequential. If you had a large enough drive array you could get that
kind of performance for updating indexes, since the accesses would tend
to hit different drives most of the time.

Under heavy load on the production servers at work we can see 30 to 60
MB/s of random access with 12 drives, meaning 2.5 to 5 MB/s per drive.
Sequential throughput is about 5 to 10 times higher.

What you're seeing are likely the effects of running a db on
insufficient drive hardware.

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

Re: [ADMIN] missing chunk number 0 for toast value

"Scott Marlowe" <scott.marlowe@gmail.com> writes:
> On Thu, Sep 25, 2008 at 10:09 AM, BJ Taylor
> <btaylor@propertysolutions.com> wrote:
>> PANIC: right sibling's left-link doesn't match: block 175337 links to
>> 243096 instead of expected 29675 in index "dbmail_headervalue_3"
>> STATEMENT: INSERT INTO dbmail_headervalue (headername_id, physmessage_id,
>> headervalue) VALUES (4,12335778,'from [76.13.13.25] by
>> n6.bullet.mail.ac4.yahoo.com with NNFMP; 25 Sep 2008 04:01:36 -0000')
>> LOG: server process (PID 13888) was terminated by signal 6: Aborted
>> LOG: terminating any other active server processes
>> WARNING: terminating connection because of crash of another server process

> Tom, does postgres generate an abort signal? Or would this be an external signal?

Yeah, we call abort() after reporting a PANIC error, so as to get a core
dump.

regards, tom lane

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin

[PHP] Can't get PHP PDO LOB working with PostgreSQL (WRONG CODE)

I am very sorry I submitted the wrong code in the original
email. The email below is the right one! Please disregard my first
email to this list.

Hi.

I have spent about two working days trying to get PDO to insert binary
content into PostgreSQL and pull it back out again, without success.

I have tested this on Debian Etch using PHP 5.2.0 and PostgreSQL (libpq)
8.1.11. I have enabled the PostgreSQL PDO driver.

I have set up a simple table to hold the content:

id serial
blob_type character varying
attachment oid

I am using a simple form to process the upload, and my PHP upload
script looks like this (modified a little from the PHP manual):

try {

$pdo = new PDO("$pdo_database:host=$pdo_hostname;dbname=$pdo_dbname",
               "$pdo_username", "$pdo_password");
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_WARNING);

} catch (Exception $e) {

echo 'Caught exception: ',$e->getMessage(), "\n";

}

require_once ("knl_mime_type.php"); // Gets the correct mime type.
$mime_type = new knl_mime_type();
$blob_type = $mime_type->getMimeType($tmpfilename);

$pdo->beginTransaction();
$oid = $pdo->pgsqlLOBCreate();
$stream = $pdo->pgsqlLOBOpen($oid, 'w');
$local = fopen($tmpfilename, 'rb');
stream_copy_to_stream($local, $stream);
$local = null;
$stream = null;

$attachment = fopen($_FILES['file']['tmp_name'], "rb");
$filename = $_FILES['file']['name'];

$stmt = $pdo->prepare("INSERT INTO attachments (blob_type, filename,
attachment) VALUES (:blob_type, :filename, :attachment)");

$stmt->bindParam(':blob_type', $blob_type, PDO::PARAM_STR);
$stmt->bindParam(':filename', $filename, PDO::PARAM_STR);
$stmt->bindParam(':attachment', $attachment, PDO::PARAM_LOB);

$stmt->execute();

$pdo->commit();

When I submit the form, I can see (using PHPPgAdmin) the binary file, in
this test case a PNG image, being inserted.

From the table I see this info using PHPPgAdmin:

id blob_type filename attachment
25 image/png shot2.png 16441

I don't know what the binary data is supposed to look like, since I am
migrating from MySQL and only have that as a comparison.

If I understand the PostgreSQL manual correctly, the above number
"16441" is an OID reference to the binary data.

I don't know if the above is correct or if PostgreSQL has received the
binary data correctly, maybe someone can confirm this for me please?

Anyway, when I try to pull that data from the database (again using the
example from the PHP manual) I just get the reference number "16441"
back to the browser.

I am using the following code to retrieve the data:

$stmt = $pdo->prepare("SELECT blob_type, attachment FROM attachments
WHERE id = :id LIMIT 1");
$stmt->bindParam(':id', $_GET['id'], PDO::PARAM_STR);

$stmt->execute();

$results = $stmt->fetchAll();

foreach ($results as $row) {
$blob_type = $row['blob_type'];
$attachment = $row['attachment'];
}

header("Content-type: $blob_type");
echo $attachment;
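For comparison, the PHP manual's retrieval pattern binds the column as a LOB stream inside a transaction instead of echoing the column value; a sketch along those lines (untested here; table and column names as in the post):

```php
<?php
// Large-object reads must happen inside a transaction.
$pdo->beginTransaction();

$stmt = $pdo->prepare("SELECT blob_type, attachment FROM attachments
                       WHERE id = :id LIMIT 1");
$stmt->bindParam(':id', $_GET['id'], PDO::PARAM_INT);
$stmt->execute();

// Bind the oid column as a LOB; after fetch() it is a stream resource.
$stmt->bindColumn('blob_type', $blob_type, PDO::PARAM_STR);
$stmt->bindColumn('attachment', $lob, PDO::PARAM_LOB);
$stmt->fetch(PDO::FETCH_BOUND);

header("Content-type: $blob_type");
fpassthru($lob);   // stream the bytes instead of echoing the oid

$pdo->commit();
?>
```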

If I use Firefox and take a look at the source code behind the page I
get served, I just see the number "16441" and nothing else.

Please note that the above examples are taken directly from the PHP
manual, just modified a little. I have also tried using the examples
directly, without any modifications, but the result is the same.

http://dk.php.net/manual/en/function.pdo-pgsqllobcreate.php

What am I missing or doing wrong here?

Best regards.

Rico.

--
Sent via pgsql-php mailing list (pgsql-php@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-php

Re: [GENERAL] ease of use sync

On Thu, Sep 25, 2008 at 12:09 PM, zach cruise <zachc1980@gmail.com> wrote:
> on projects where i ended up selecting oracle, *my* main reasons were
> (1) clustering/replication
> (2) cross-database query
> (3) promise of drcp
> in that order
>
> for (1), actually more for synchronization/transfer, i got a simple
> suggestion:
> while installing postgresql,

But 99.9% of people installing don't want to deal with that, and if
they're on a Unix OS using packages, the install would have to halt to
ask a question, which is considered bad form.

So why not instead build a script to do all the things you do by hand
afterwards? I'd think that would get more traction.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

[PERFORM] Slow updates, poor IO

I've just had an interesting encounter with the slow full-table update
problem that is inherent in MVCC.

The system is 64 bit linux with 2.6.25 kernel feeding scsi disks.

the table is

CREATE TABLE file (
fileid integer NOT NULL,
fileindex integer DEFAULT 0 NOT NULL,
jobid integer NOT NULL,
pathid integer NOT NULL,
filenameid integer NOT NULL,
markid integer DEFAULT 0 NOT NULL,
lstat text NOT NULL,
md5 text NOT NULL,
perms text
);

ALTER TABLE ONLY file
ADD CONSTRAINT file_pkey PRIMARY KEY (fileid);

CREATE INDEX file_fp_idx ON file USING btree (filenameid, pathid);
CREATE INDEX file_jobid_idx ON file USING btree (jobid);

There are 2.7M rows.

running update file set perms='0664' took about 10 mins

during this period, vmstat reported Blocks Out holding in the 4000 to
6000 range.


When I dropped the indexes this query ran in 48sec.
Blocks out peaking at 55000.

So there is a double whammy:
MVCC requires more work to be done when indexes are defined, and this
extra work results in much lower IO, compounding the problem.
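One workaround consistent with the timings above is to drop the secondary indexes for the one-off update and rebuild them afterwards; a sketch using the definitions from this post:

```sql
-- Avoid per-row index maintenance during the bulk update.
DROP INDEX file_fp_idx;
DROP INDEX file_jobid_idx;

UPDATE file SET perms = '0664';

-- Rebuild each index in one sequential pass.
CREATE INDEX file_fp_idx ON file USING btree (filenameid, pathid);
CREATE INDEX file_jobid_idx ON file USING btree (jobid);
```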


Comments anyone?


--john

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

[PHP] Can't get PHP PDO LOB working with PostgreSQL

Hi.

I have spent about two working days trying to get PDO to insert binary
content into PostgreSQL and pull it back out again, without success.

I have tested this on Debian Etch using PHP 5.2.0 and PostgreSQL (libpq)
8.1.11. I have enabled the PostgreSQL PDO driver.

I have set up a simple table to hold the content:

id serial
blob_type character varying
attachment oid

I am using a simple form to process the upload, and my PHP upload
script looks like this (modified a little from the PHP manual):

try {

$pdo = new PDO("$pdo_database:host=$pdo_hostname;dbname=$pdo_dbname",
               "$pdo_username", "$pdo_password");
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_WARNING);

} catch (Exception $e) {

echo 'Caught exception: ',$e->getMessage(), "\n";

}

ini_set('max_execution_time', '0');
ini_set('memory_limit', '128M');

require_once ("knl_mime_type.php"); // Gets the correct mime type.
$mime_type = new knl_mime_type();
$blob_type = $mime_type->getMimeType($tmpfilename);

$pdo->beginTransaction();

$attachment = fopen($_FILES['file']['tmp_name'], "rb");
$filename = $_FILES['file']['name'];

$stmt = $pdo->prepare("INSERT INTO attachments (blob_type, filename,
attachment) VALUES (:blob_type, :filename, :attachment)");

$stmt->bindParam(':blob_type', $blob_type, PDO::PARAM_STR);
$stmt->bindParam(':filename', $filename, PDO::PARAM_STR);
$stmt->bindParam(':attachment', $attachment, PDO::PARAM_LOB);

$stmt->execute();

$pdo->commit();

When I submit the form, I can see (using PHPPgAdmin) the binary file, in
this test case a PNG image, being inserted.

From the table I see this info using PHPPgAdmin:

id blob_type filename attachment
25 image/png shot2.png 16441

I don't know what the binary data is supposed to look like, since I am
migrating from MySQL and only have that as a comparison.

If I understand the PostgreSQL manual correctly, the above number
"16441" is an OID reference to the binary data.

I don't know if the above is correct or if PostgreSQL has received the
binary data correctly, maybe someone can confirm this for me please?

Anyway, when I try to pull that data from the database (again using the
example from the PHP manual) I just get the reference number "16441"
back to the browser.

I am using the following code to retrieve the data:

$stmt = $pdo->prepare("SELECT blob_type, attachment FROM attachments
WHERE id = :id LIMIT 1");
$stmt->bindParam(':id', $_GET['id'], PDO::PARAM_STR);

$stmt->execute();

$results = $stmt->fetchAll();

foreach ($results as $row) {
$blob_type = $row['blob_type'];
$attachment = $row['attachment'];
}

header("Content-type: $blob_type");
echo $attachment;

If I use Firefox and take a look at the source code behind the page I
get served, I just see the number "16441" and nothing else.

Please note that the above examples are taken directly from the PHP
manual, just modified a little. I have also tried using the examples
directly, without any modifications, but the result is the same.

http://dk.php.net/manual/en/function.pdo-pgsqllobcreate.php

What am I missing or doing wrong here?

Best regards.

Rico.

--
Sent via pgsql-php mailing list (pgsql-php@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-php

[PERFORM] CPU load

Hello,

postmaster heavily loads the processor. The database is accessed from a
Java application (with several threads), from C applications, and from
PHP scripts.

It seems that one PHP script, called periodically, raises the load, but
the script is very simple, something like this:

$var__base = new baza($dbhost,$dbport,$dbname,$dbuser,$dbpasswd);
$pok_baza = new upit($var__base->veza);
$upit_datum="SELECT * FROM system_alarm WHERE date= '$danas' AND
time>=(LOCALTIME - interval '$vrijeme_razmak hours') ORDER BY date DESC,
time DESC";

The statement executes in approximately 0.6 sec.

The number of open connections is constantly 107.

The operating system is Debian GNU/Linux kernel 2.6.18-4-686.
Database version is PostgreSQL 8.2.4.


Thank you very much for any help.

Maja Stula


_________________________________________________________________________

The result of the top command:

top - 20:44:58 up 5:36, 1 user, load average: 1.31, 1.39, 1.24
Tasks: 277 total, 2 running, 275 sleeping, 0 stopped, 0 zombie
Cpu(s): 11.5%us, 2.2%sy, 0.0%ni, 86.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 3370808k total, 1070324k used, 2300484k free, 49484k buffers
Swap: 1951888k total, 0k used, 1951888k free, 485396k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
4990 postgres 25 0 41160 19m 18m R 100 0.6 1:36.74 postmaster
15278 test 24 0 1000m 40m 5668 S 9 1.2 1:42.37 java
18892 root 15 0 2468 1284 884 R 0 0.0 0:00.05 top
1 root 15 0 2044 696 596 S 0 0.0 0:02.51 init
2 root RT 0 0 0 0 S 0 0.0 0:00.00 migration/0
3 root 34 19 0 0 0 S 0 0.0 0:00.12 ksoftirqd/0
4 root RT 0 0 0 0 S 0 0.0 0:00.00 migration/1
5 root 34 19 0 0 0 S 0 0.0 0:00.00 ksoftirqd/1
6 root RT 0 0 0 0 S 0 0.0 0:00.00 migration/2
7 root 34 19 0 0 0 S 0 0.0 0:00.00 ksoftirqd/2

__________________________________________________________________________

The result of vmstat command:

kamis03:/etc# vmstat 1
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
2 0 0 2271356 49868 505252 0 0 2 32 40 83 6 1 93 0
2 0 0 2271232 49868 505304 0 0 0 2348 459 1118 14 2 84 0
3 0 0 2271232 49868 505304 0 0 0 16 305 1197 11 2 87 0
3 0 0 2270984 49868 505432 0 0 0 8 407 1821 15 3 82 0
2 0 0 2270984 49868 505432 0 0 0 0 271 1328 11 2 87 0
1 0 0 2270984 49868 505440 0 0 0 24 375 1530 5 1 94 0
2 0 0 2270488 49868 505440 0 0 0 1216 401 1541 12 2 86 0

__________________________________________________________________________

The cpu configuration is:

processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 15
model name : Intel(R) Xeon(R) CPU E5310 @ 1.60GHz
stepping : 7
cpu MHz : 1596.076
cache size : 4096 KB
physical id : 0
siblings : 4
core id : 0
cpu cores : 4
fdiv_bug : no
hlt_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 10
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca
cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm
constant_tsc pni monitor ds_cpl vmx tm2 cx16 xtpr lahf_lm
bogomips : 3194.46

processor : 1
vendor_id : GenuineIntel
cpu family : 6
model : 15
model name : Intel(R) Xeon(R) CPU E5310 @ 1.60GHz
stepping : 7
cpu MHz : 1596.076
cache size : 4096 KB
physical id : 0
siblings : 4
core id : 1
cpu cores : 4
fdiv_bug : no
hlt_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 10
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca
cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm
constant_tsc pni monitor ds_cpl vmx tm2 cx16 xtpr lahf_lm
bogomips : 3191.94

processor : 2
vendor_id : GenuineIntel
cpu family : 6
model : 15
model name : Intel(R) Xeon(R) CPU E5310 @ 1.60GHz
stepping : 7
cpu MHz : 1596.076
cache size : 4096 KB
physical id : 0
siblings : 4
core id : 2
cpu cores : 4
fdiv_bug : no
hlt_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 10
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca
cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm
constant_tsc pni monitor ds_cpl vmx tm2 cx16 xtpr lahf_lm
bogomips : 3192.01

processor : 3
vendor_id : GenuineIntel
cpu family : 6
model : 15
model name : Intel(R) Xeon(R) CPU E5310 @ 1.60GHz
stepping : 7
cpu MHz : 1596.076
cache size : 4096 KB
physical id : 0
siblings : 4
core id : 3
cpu cores : 4
fdiv_bug : no
hlt_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 10
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca
cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm
constant_tsc pni monitor ds_cpl vmx tm2 cx16 xtpr lahf_lm
bogomips : 3192.01

processor : 4
vendor_id : GenuineIntel
cpu family : 6
model : 15
model name : Intel(R) Xeon(R) CPU E5310 @ 1.60GHz
stepping : 7
cpu MHz : 1596.076
cache size : 4096 KB
physical id : 1
siblings : 4
core id : 0
cpu cores : 4
fdiv_bug : no
hlt_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 10
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca
cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm
constant_tsc pni monitor ds_cpl vmx tm2 cx16 xtpr lahf_lm
bogomips : 3191.98

processor : 5
vendor_id : GenuineIntel
cpu family : 6
model : 15
model name : Intel(R) Xeon(R) CPU E5310 @ 1.60GHz
stepping : 7
cpu MHz : 1596.076
cache size : 4096 KB
physical id : 1
siblings : 4
core id : 1
cpu cores : 4
fdiv_bug : no
hlt_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 10
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca
cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm
constant_tsc pni monitor ds_cpl vmx tm2 cx16 xtpr lahf_lm
bogomips : 3191.98

processor : 6
vendor_id : GenuineIntel
cpu family : 6
model : 15
model name : Intel(R) Xeon(R) CPU E5310 @ 1.60GHz
stepping : 7
cpu MHz : 1596.076
cache size : 4096 KB
physical id : 1
siblings : 4
core id : 2
cpu cores : 4
fdiv_bug : no
hlt_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 10
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca
cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm
constant_tsc pni monitor ds_cpl vmx tm2 cx16 xtpr lahf_lm
bogomips : 3191.97

processor : 7
vendor_id : GenuineIntel
cpu family : 6
model : 15
model name : Intel(R) Xeon(R) CPU E5310 @ 1.60GHz
stepping : 7
cpu MHz : 1596.076
cache size : 4096 KB
physical id : 1
siblings : 4
core id : 3
cpu cores : 4
fdiv_bug : no
hlt_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 10
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca
cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm
constant_tsc pni monitor ds_cpl vmx tm2 cx16 xtpr lahf_lm
bogomips : 3191.97

__________________________________________________________________________


Postgresql.conf file:

# -----------------------------
# PostgreSQL configuration file
# -----------------------------
#
# This file consists of lines of the form:
#
# name = value
#
# (The '=' is optional.) White space may be used. Comments are introduced
# with '#' anywhere on a line. The complete list of option names and
# allowed values can be found in the PostgreSQL documentation. The
# commented-out settings shown in this file represent the default values.
#
# Please note that re-commenting a setting is NOT sufficient to revert it
# to the default value, unless you restart the postmaster.
#
# Any option can also be given as a command line switch to the
# postmaster, e.g. 'postmaster -c log_connections=on'. Some options
# can be changed at run-time with the 'SET' SQL command.
#
# This file is read on postmaster startup and when the postmaster
# receives a SIGHUP. If you edit the file on a running system, you have
# to SIGHUP the postmaster for the changes to take effect, or use
# "pg_ctl reload". Some settings, such as listen_addresses, require
# a postmaster shutdown and restart to take effect.


#---------------------------------------------------------------------------
# FILE LOCATIONS
#---------------------------------------------------------------------------

# The default values of these variables are driven from the -D command line
# switch or PGDATA environment variable, represented here as ConfigDir.

#data_directory = 'ConfigDir' # use data in another directory
hba_file = '/etc/postgresql/8.1/main/pg_hba.conf' # host-based authentication file
ident_file = '/etc/postgresql/8.1/main/pg_ident.conf' # IDENT configuration file

# If external_pid_file is not explicitly set, no extra pid file is written.
external_pid_file = '/var/run/postgresql/8.1-main.pid' # write an extra pid file


#---------------------------------------------------------------------------
# CONNECTIONS AND AUTHENTICATION
#---------------------------------------------------------------------------

# - Connection Settings -

#listen_addresses = 'localhost' # what IP address(es) to listen on;
# comma-separated list of addresses;
# defaults to 'localhost', '*' = all
port = 5432

# Maximum number of connections raised to 1000
# Maja 15.6
max_connections = 1000
#max_connections = 100
# note: increasing max_connections costs ~400 bytes of shared memory per
# connection slot, plus lock space (see max_locks_per_transaction). You
# might also need to raise shared_buffers to support more connections.
#superuser_reserved_connections = 2
unix_socket_directory = '/var/run/postgresql'
#unix_socket_group = ''
#unix_socket_permissions = 0777 # octal
#bonjour_name = '' # defaults to the computer name

# - Security & Authentication -

#authentication_timeout = 60 # 1-600, in seconds
ssl = false
#password_encryption = on
#db_user_namespace = off

# Kerberos
#krb_server_keyfile = ''
#krb_srvname = 'postgres'
#krb_server_hostname = '' # empty string matches any keytab entry
#krb_caseins_users = off

# - TCP Keepalives -
# see 'man 7 tcp' for details

#tcp_keepalives_idle = 0 # TCP_KEEPIDLE, in seconds;
# 0 selects the system default
#tcp_keepalives_interval = 0 # TCP_KEEPINTVL, in seconds;
# 0 selects the system default
#tcp_keepalives_count = 0 # TCP_KEEPCNT;
# 0 selects the system default


#---------------------------------------------------------------------------
# RESOURCE USAGE (except WAL)
#---------------------------------------------------------------------------

# - Memory -

#shared_buffers = 1000 # min 16 or max_connections*2, 8KB each
# the number of buffers must be twice the max number of connections
shared_buffers = 2000
#temp_buffers = 1000 # min 100, 8KB each
#max_prepared_transactions = 5 # can be 0 or more
# note: increasing max_prepared_transactions costs ~600 bytes of shared memory
# per transaction slot, plus lock space (see max_locks_per_transaction).
#work_mem = 1024 # min 64, size in KB
#maintenance_work_mem = 16384 # min 1024, size in KB
#max_stack_depth = 2048 # min 100, size in KB

# - Free Space Map -

#max_fsm_pages = 20000 # min max_fsm_relations*16, 6 bytes each
#max_fsm_relations = 1000 # min 100, ~70 bytes each

# - Kernel Resource Usage -

#max_files_per_process = 1000 # min 25
#preload_libraries = ''

# - Cost-Based Vacuum Delay -

#vacuum_cost_delay = 0 # 0-1000 milliseconds
#vacuum_cost_page_hit = 1 # 0-10000 credits
#vacuum_cost_page_miss = 10 # 0-10000 credits
#vacuum_cost_page_dirty = 20 # 0-10000 credits
#vacuum_cost_limit = 200 # 0-10000 credits

# - Background writer -

#bgwriter_delay = 200 # 10-10000 milliseconds between rounds
#bgwriter_lru_percent = 1.0 # 0-100% of LRU buffers scanned/round
#bgwriter_lru_maxpages = 5 # 0-1000 buffers max written/round
#bgwriter_all_percent = 0.333 # 0-100% of all buffers scanned/round
#bgwriter_all_maxpages = 5 # 0-1000 buffers max written/round


#---------------------------------------------------------------------------
# WRITE AHEAD LOG
#---------------------------------------------------------------------------

# - Settings -

#fsync = on # turns forced synchronization on or off
#wal_sync_method = fsync # the default is the first option
# supported by the operating system:
# open_datasync
# fdatasync
# fsync
# fsync_writethrough
# open_sync
#full_page_writes = on # recover from partial page writes
#wal_buffers = 8 # min 4, 8KB each
#commit_delay = 0 # range 0-100000, in microseconds
#commit_siblings = 5 # range 1-1000

# - Checkpoints -

#checkpoint_segments = 3 # in logfile segments, min 1, 16MB each
#checkpoint_timeout = 300 # range 30-3600, in seconds
#checkpoint_warning = 30 # in seconds, 0 is off

# - Archiving -

#archive_command = '' # command to use to archive a logfile
# segment


#---------------------------------------------------------------------------
# QUERY TUNING
#---------------------------------------------------------------------------

# - Planner Method Configuration -

#enable_bitmapscan = on
#enable_hashagg = on
#enable_hashjoin = on
#enable_indexscan = on
#enable_mergejoin = on
#enable_nestloop = on
#enable_seqscan = on
#enable_sort = on
#enable_tidscan = on

# - Planner Cost Constants -

#effective_cache_size = 1000 # typically 8KB each
#random_page_cost = 4 # units are one sequential page fetch
# cost
#cpu_tuple_cost = 0.01 # (same)
#cpu_index_tuple_cost = 0.001 # (same)
#cpu_operator_cost = 0.0025 # (same)

# - Genetic Query Optimizer -

#geqo = on
#geqo_threshold = 12
#geqo_effort = 5 # range 1-10
#geqo_pool_size = 0 # selects default based on effort
#geqo_generations = 0 # selects default based on effort
#geqo_selection_bias = 2.0 # range 1.5-2.0

# - Other Planner Options -

#default_statistics_target = 10 # range 1-1000
#constraint_exclusion = off
#from_collapse_limit = 8
#join_collapse_limit = 8 # 1 disables collapsing of explicit
# JOINs


#---------------------------------------------------------------------------
# ERROR REPORTING AND LOGGING
#---------------------------------------------------------------------------

# - Where to Log -

#log_destination = 'stderr' # Valid values are combinations of
# stderr, syslog and eventlog,
# depending on platform.

# This is used when logging to stderr:
#redirect_stderr = off # Enable capturing of stderr into log
# files

# These are only used if redirect_stderr is on:
#log_directory = 'pg_log' # Directory where log files are written
# Can be absolute or relative to PGDATA
#log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log' # Log file name pattern.
# Can include strftime() escapes
#log_truncate_on_rotation = off # If on, any existing log file of the same
# name as the new log file will be
# truncated rather than appended to. But
# such truncation only occurs on
# time-driven rotation, not on restarts
# or size-driven rotation. Default is
# off, meaning append to existing files
# in all cases.
#log_rotation_age = 1440 # Automatic rotation of logfiles will
# happen after so many minutes. 0 to
# disable.
#log_rotation_size = 10240 # Automatic rotation of logfiles will
# happen after so many kilobytes of log
# output. 0 to disable.

# These are relevant when logging to syslog:
#syslog_facility = 'LOCAL0'
#syslog_ident = 'postgres'


# - When to Log -

#client_min_messages = notice # Values, in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# log
# notice
# warning
# error

#log_min_messages = notice # Values, in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# info
# notice
# warning
# error
# log
# fatal
# panic

#log_error_verbosity = default # terse, default, or verbose messages

#log_min_error_statement = panic # Values in order of increasing severity:
# debug5
# debug4
# debug3
# debug2
# debug1
# info
# notice
# warning
# error
# panic(off)

#log_min_duration_statement = -1 # -1 is disabled, 0 logs all statements
# and their durations, in milliseconds.

#silent_mode = off # DO NOT USE without syslog or
# redirect_stderr

# - What to Log -

#debug_print_parse = off
#debug_print_rewritten = off
#debug_print_plan = off
#debug_pretty_print = off
#log_connections = off
#log_disconnections = off
#log_duration = off
log_line_prefix = '%t ' # Special values:
# %u = user name
# %d = database name
# %r = remote host and port
# %h = remote host
# %p = PID
# %t = timestamp (no milliseconds)
# %m = timestamp with milliseconds
# %i = command tag
# %c = session id
# %l = session line number
# %s = session start timestamp
# %x = transaction id
# %q = stop here in non-session
# processes
# %% = '%'
# e.g. '<%u%%%d> '
#log_statement = 'none' # none, mod, ddl, all
#log_hostname = off


#---------------------------------------------------------------------------
# RUNTIME STATISTICS
#---------------------------------------------------------------------------

# - Statistics Monitoring -

#log_parser_stats = off
#log_planner_stats = off
#log_executor_stats = off
#log_statement_stats = off

# - Query/Index Statistics Collector -

#stats_start_collector = on
#stats_command_string = off
#stats_block_level = off
stats_row_level = on
#stats_reset_on_server_start = off


#---------------------------------------------------------------------------
# AUTOVACUUM PARAMETERS
#---------------------------------------------------------------------------

autovacuum = on # enable autovacuum subprocess?
#autovacuum_naptime = 60 # time between autovacuum runs, in secs
#autovacuum_vacuum_threshold = 1000 # min # of tuple updates before
# vacuum
#autovacuum_analyze_threshold = 500 # min # of tuple updates before
# analyze
#autovacuum_vacuum_scale_factor = 0.4 # fraction of rel size before
# vacuum
#autovacuum_analyze_scale_factor = 0.2 # fraction of rel size before
# analyze
#autovacuum_vacuum_cost_delay = -1 # default vacuum cost delay for
# autovac, -1 means use
# vacuum_cost_delay
#autovacuum_vacuum_cost_limit = -1 # default vacuum cost limit for
# autovac, -1 means use
# vacuum_cost_limit


#---------------------------------------------------------------------------
# CLIENT CONNECTION DEFAULTS
#---------------------------------------------------------------------------

# - Statement Behavior -

#search_path = '$user,public' # schema names
#default_tablespace = '' # a tablespace name, '' uses
# the default
#check_function_bodies = on
#default_transaction_isolation = 'read committed'
#default_transaction_read_only = off
#statement_timeout = 0 # 0 is disabled, in milliseconds

# - Locale and Formatting -

#datestyle = 'iso, mdy'
#timezone = unknown # actually, defaults to TZ
# environment setting
#australian_timezones = off
#extra_float_digits = 0 # min -15, max 2
#client_encoding = sql_ascii # actually, defaults to database
# encoding

# These settings are initialized by initdb -- they might be changed
lc_messages = 'en_US.UTF-8' # locale for system error message
# strings
lc_monetary = 'en_US.UTF-8' # locale for monetary formatting
lc_numeric = 'en_US.UTF-8' # locale for number formatting
lc_time = 'en_US.UTF-8' # locale for time formatting

# - Other Defaults -

#explain_pretty_print = on
#dynamic_library_path = '$libdir'


#---------------------------------------------------------------------------
# LOCK MANAGEMENT
#---------------------------------------------------------------------------

#deadlock_timeout = 1000 # in milliseconds
#max_locks_per_transaction = 64 # min 10
# note: each lock table slot uses ~220 bytes of shared memory, and there are
# max_locks_per_transaction * (max_connections + max_prepared_transactions)
# lock table slots.


#---------------------------------------------------------------------------
# VERSION/PLATFORM COMPATIBILITY
#---------------------------------------------------------------------------

# - Previous Postgres Versions -

#add_missing_from = off
#backslash_quote = safe_encoding # on, off, or safe_encoding
#default_with_oids = off
#escape_string_warning = off
#regex_flavor = advanced # advanced, extended, or basic
#sql_inheritance = on

# - Other Platforms & Clients -

#transform_null_equals = off


#---------------------------------------------------------------------------
# CUSTOMIZED OPTIONS
#---------------------------------------------------------------------------

#custom_variable_classes = '' # list of custom variable class names
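
For reference (and tying back to the custom_variable_classes thread at the top of this digest), enabling and using a custom variable class might look like the sketch below; the class name `olap` and variable `rownum_name` come from that thread, while the value 'rownum' is purely illustrative:

```sql
-- In postgresql.conf (requires a server reload):
--   custom_variable_classes = 'olap'

-- Then, once per session, initialize the variable before use.
-- Note that set_config() takes three arguments: name, value, is_local.
SELECT set_config('olap.rownum_name', 'rownum', false);

-- Read it back later in the same session:
SELECT current_setting('olap.rownum_name');
```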


--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

Re: [ADMIN] missing chunk number 0 for toast value

On Thu, Sep 25, 2008 at 10:09 AM, BJ Taylor
<btaylor@propertysolutions.com> wrote:
> PANIC: right sibling's left-link doesn't match: block 175337 links to
> 243096 instead of expected 29675 in index "dbmail_headervalue_3"
> STATEMENT: INSERT INTO dbmail_headervalue (headername_id, physmessage_id,
> headervalue) VALUES (4,12335778,'from [76.13.13.25] by
> n6.bullet.mail.ac4.yahoo.com with NNFMP; 25 Sep 2008 04:01:36 -0000')
> LOG: server process (PID 13888) was terminated by signal 6: Aborted
> LOG: terminating any other active server processes
> WARNING: terminating connection because of crash of another server process

Tom, does Postgres generate the abort signal itself, or would this be an external signal?

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin

Re: [GENERAL] Indirect access to NEW or OLD records

I have tried plperl, but the following sample does not work:

CREATE FUNCTION "extract_field_as_varchar" (rec record, field varchar)
RETURNS varchar AS
$body$
...
$body$
LANGUAGE 'plperl' VOLATILE CALLED ON NULL INPUT SECURITY INVOKER;

It says "ERROR: plperl functions cannot take type record".

So, I can never create a utility function and call it as:

value := extract_field_as_varchar(NEW, 'field');

It seems I have to write the entire trigger in plperl to access the NEW
record indirectly?
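
A sketch of what such a plperl trigger could look like: in plperl, trigger data arrives in the global $_TD hash, so a field of the NEW row can be picked out by a dynamically chosen name, sidestepping the record-type limitation. The table name some_table and field name some_field are hypothetical:

```sql
CREATE OR REPLACE FUNCTION extract_field_trigger() RETURNS trigger AS
$body$
    # $_TD->{new} is a hashref of the NEW row's columns, so a field
    # can be selected by a name held in a variable.
    my $field = 'some_field';
    my $value = $_TD->{new}{$field};
    elog(NOTICE, "$field = $value");
    return;   # leave the row unmodified
$body$
LANGUAGE plperl;

CREATE TRIGGER t_extract_field
    BEFORE INSERT OR UPDATE ON some_table
    FOR EACH ROW EXECUTE PROCEDURE extract_field_trigger();
```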


On Thu, Sep 25, 2008 at 10:08 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> "Dmitry Koterov" <dmitry@koterov.ru> writes:
>> I have a variable with a field name and want to extract this field
>> value from NEW record:
>
>> DECLARE
>> field VARCHAR = 'some_field';
>> BEGIN
>> ...
>> value := NEW.{field}; -- ???
>> END;
>
>> Is it possible in pl/pgsql?
>
> No. Quite aside from the lack of syntax for that, plpgsql would be
> unhappy if the data type wasn't the same on successive executions.
>
> Use one of the less strongly typed PLs instead. I believe this is
> pretty easy in plperl or pltcl. (Maybe plpython too, but I don't know
> that language.)
>
> regards, tom lane
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [BUGS] [ADMIN] Postgres Stats after Crash Recovery

I confirm this as a bug. The first ANALYZE after crash recovery leaves the
stats showing as zeroes. Repeatable on CVS HEAD with ANALYZE and VACUUM
ANALYZE.

Forwarding to bugs.


On Wed, 2008-09-24 at 15:29 -0400, Chirag Dave wrote:
>
> Testing autovacuum on 8.3, I came across the problem of losing stats
> data, which was discussed in my last post:
>
> http://archives.postgresql.org/pgsql-general/2008-08/msg00198.php
>
> That problem was recognized: after a hard stop, the server throws
> away the stats while going through crash recovery.
>
> The problem I see is that after crash recovery we have to manually analyze
> the database in order for autovacuum to work, but it is not working as expected.
>
> Here is test case:
>
> foo=# SELECT version();
>
> version
> ------------------------------------------------------------------------------------------------
> PostgreSQL 8.3.3 on i686-pc-linux-gnu, compiled by GCC gcc (GCC)
> 4.1.2 (Ubuntu 4.1.2-0ubuntu4)
> (1 row)
>
> foo=# SELECT * from pg_stat_user_tables where relname='accounts';
> -[ RECORD 1 ]----+------------------------------
> relid | 57350
> schemaname | public
> relname | accounts
> seq_scan | 1
> seq_tup_read | 1000000
> idx_scan | 0
> idx_tup_fetch | 0
> n_tup_ins | 1000000
> n_tup_upd | 0
> n_tup_del | 0
> n_tup_hot_upd | 0
> n_live_tup | 1000000
> n_dead_tup | 0
> last_vacuum | 2008-09-24 15:04:35.384012-04
> last_autovacuum |
> last_analyze | 2008-09-24 15:04:35.384012-04
> last_autoanalyze |
>
>
> Next I will stop the DB in immediate mode and expect to lose the stats, as
> is normal behavior.
>
> pg_ctl -m immediate -D /opt/rg/foo/pgsql83 stop
> waiting for server to shut down...LOG: received immediate shutdown
> request
> done
> server stopped
>
> After starting the DB, the stats are gone as expected:
> foo=# SELECT * from pg_stat_user_tables where relname='accounts';
> -[ RECORD 1 ]----+---------
> relid | 57350
> schemaname | public
> relname | accounts
> seq_scan | 0
> seq_tup_read | 0
> idx_scan | 0
> idx_tup_fetch | 0
> n_tup_ins | 0
> n_tup_upd | 0
> n_tup_del | 0
> n_tup_hot_upd | 0
> n_live_tup | 0
> n_dead_tup | 0
> last_vacuum |
> last_autovacuum |
> last_analyze |
> last_autoanalyze |
>
> The next step is to manually analyze to collect the stats again:
> foo=# ANALYZE ;
> ANALYZE
> foo=# SELECT * from pg_stat_user_tables where relname='accounts';
> -[ RECORD 1 ]----+---------
> relid | 57350
> schemaname | public
> relname | accounts
> seq_scan | 0
> seq_tup_read | 0
> idx_scan | 0
> idx_tup_fetch | 0
> n_tup_ins | 0
> n_tup_upd | 0
> n_tup_del | 0
> n_tup_hot_upd | 0
> n_live_tup | 0
> n_dead_tup | 0
> last_vacuum |
> last_autovacuum |
> last_analyze |
> last_autoanalyze |
>
> note: After ANALYZE, the stats were not updated.
>
> Running ANALYZE a second time seems to do the trick.
>
> foo=# ANALYZE ;
> ANALYZE
> foo=# SELECT * from pg_stat_user_tables where relname='accounts';
> -[ RECORD 1 ]----+------------------------------
> relid | 57350
> schemaname | public
> relname | accounts
> seq_scan | 0
> seq_tup_read | 0
> idx_scan | 0
> idx_tup_fetch | 0
> n_tup_ins | 0
> n_tup_upd | 0
> n_tup_del | 0
> n_tup_hot_upd | 0
> n_live_tup | 1000062
> n_dead_tup | 0
> last_vacuum |
> last_autovacuum |
> last_analyze | 2008-09-24 15:13:13.423424-04
> last_autoanalyze |
>
>
> So the question I have is: is this normal operation? Why do we need to
> analyze twice to update the stats? If a table is a candidate for
> vacuuming after crash recovery, it will never get autovacuumed unless
> you run ANALYZE twice.
>
> Thanks in advance,
>
> Chirag Dave
> DBA
> Afilias

--
Simon Riggs www.2ndQuadrant.com
PostgreSQL Training, Services and Support


--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs

Re: [JDBC] Performance comparison to psql.

I seem unable to perform basic arithmetic today... sorry again: with psql -1 the process took 97m.
Just to set the record straight, the overhead from the implicit transactions was 15m.

M.

On Thu, Sep 25, 2008 at 7:12 PM, Michael Nacos <m.nacos@gmail.com> wrote:
right! the -1 flag...

I repeated the psql test, this time with the -1 flag, and the process completed in 107m,
so the transaction overhead in the previous psql tests is 5m.
psql is probably reading each line from the input file and immediately submitting it;
pgBee groups many lines together and batches them off to the server in one step.

btw, it's operations/sec, not milliseconds, in my previous email -- sorry! I am not using
prepared statements, as I have to cope with arbitrary SQL, so it looks like I'm approaching
the performance of the unbatched but prepared JDBC statements mentioned in this table:


Comparison table (records inserted per millisecond)
                  COPY   JDBC   JDBC batch
WITHOUT INDEXES:  198    1.5    14
WITH 2 INDEXES:    45    1.5    10

898 operations/second vs. 1500 records/second in the table above. Besides, these numbers
must be hardware-specific (I'm using a laptop with a 5400rpm disk)

Michael


Re: [GENERAL] My first revoke

On 25/09/2008 19:24, Fernando Moreno wrote:
> only have to grant select on the tables you want, and yes, one by one.

If you use pgAdmin, it has a wizard for doing a bunch of objects in one go.

Ray.


------------------------------------------------------------------
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
rod@iol.ie
Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals
------------------------------------------------------------------

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [ADMIN] Hex representation

Change the shebang line to match where your PHP is installed (if it is
installed at all):

#!/full_path_to_my_php -q

-----Original Message-----
From: pgsql-admin-owner@postgresql.org
[mailto:pgsql-admin-owner@postgresql.org] On Behalf Of Carol Walter
Sent: Thursday, September 25, 2008 1:15 PM
To: Steve Crawford
Cc: Scott Marlowe; pgsql-admin@postgresql.org
Subject: Re: [ADMIN] Hex representation

Well, it was a bit convoluted, but I created the file with Excel,
filling the right number of cells with \x55. This worked too. The
script wouldn't run for me. I got an error about a "bad interpreter".

Carol

On Sep 25, 2008, at 1:40 PM, Steve Crawford wrote:

> Scott Marlowe wrote:
>> I used this very simple little php script to make this
>>
>> filename: mk55:
>> #!/usr/bin/php -q
>> <?php
>> for ($i=0;$i<262144;$i++){
>> print chr(85);
>> }
>> ?>
>>
> Or, using standard *nix tools (Note: 0x55 = ascii U):
> dd bs=1k count=256 if=/dev/zero | tr '\000' U > full_of_0x55
>
> Cheers,
> Steve
>
>
> --
> Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-admin


--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [GENERAL] My first revoke

Hi. First of all, a new role doesn't have any privileges on any table (each type of database object has different default privileges), so you only have to grant SELECT on the tables you want, and yes, one by one.

You can also grant or revoke privileges this way: grant select on table1,table2,table3...tableN to my_role;
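
If granting "one by one" becomes tedious, the GRANT statements can be generated from the system catalog. A sketch, using PostgreSQL's pg_tables view; the role name my_role and schema public are illustrative:

```sql
-- Generate one GRANT per table in the public schema;
-- run the resulting statements afterwards.
SELECT 'GRANT SELECT ON ' || schemaname || '.' || tablename
       || ' TO my_role;'
FROM pg_tables
WHERE schemaname = 'public';
```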

Re: [pgsql-es-ayuda] postgresql 8.1.7 - Where?

Viktor Sanikidze wrote:
> Hello. I am delighted to have found this Spanish-language community.
> My question is the following:
> Can someone tell me where to download PostgreSQL version 8.1.7?
>
> I found it on several sites, but none of the mirrors work. It would be for Red Hat 5, and I want to install it because I need to use a program (Juniper NetScreen) that won't let me use a later version.
> Thanks to everyone in advance.
>
> ########## PERFORMING PRE-INSTALLATION TASKS ##########
> Creating staging directory...ok
> Running preinstallcheck...
> Checking if platform is valid...............................ok
> Checking for correct intended platform......................ok
> Checking for CPU architecture...............................ok
> Checking if all needed binaries are present.................ok
> Checking for platform-specific binaries.....................ok
> Checking for platform-specific packages.....................ok
> Checking in System File for PostgreSQL and XDB parameters...ok
> Checking for PostgreSQL.....................................
> PostgreSQL version 8.1.7 not found in /usr
> Specify location of PostgreSQL 8.1.7 bin directory[/usr/bin]> /usr/SQL/8.3/bin
> FAILED
> PostgreSQL version '8.3.3' found, '8.1.7' needed to continue.
>
>
> TIP 7: don't forget to increase your "free space map" configuration
>
>
>


You can use:

http://yum.pgsqlrpms.org/


Leonel

--
TIP 7: don't forget to increase your "free space map" configuration

Re: [GENERAL] Oracle and Postgresql

On Thu, Sep 25, 2008 at 3:52 PM, Andrew Sullivan <ajs@commandprompt.com> wrote:
On Thu, Sep 25, 2008 at 01:13:29PM +0300, Asko Oja wrote:
>
> but why would you put part of your business logic into some configuration
> tables while you could keep it in your own functions

Because the parameters of the business logic should not be in the
code.  The parameters should be part of the configuration, to be
administered by the administrators (i.e. the DBAs) and not by the
database developers.  In traditional large database shops, that is the
division of responsibility, and the inability to work in that way will
hamper Postgres adoption in that environment.  (Maybe we don't care,
but let's at least be honest that changing the culture of such
database shops is not something we're going to achieve quickly.)

Well, by configuration tables I meant some Oracle/PostgreSQL system tables.
We also keep business-logic parameters in a configuration database that is replicated into each OLTP database that needs them, and they are updated by DBAs during the normal release process. Although this part is managed by DBAs, the changes themselves are prepared by developers. So I see no inability of PostgreSQL to work that way. What I do see is a lack of useless bells and whistles in PostgreSQL, and I like it.

regards,
Asko

Re: [GENERAL] Oracle and Postgresql

On Sep 1, 2008, at 12:42 AM, Henry wrote:
> This is /finally/ being addressed, although (very) belatedly. The
> Pg core
> dev team always argued that replication was an add-on and should not
> form
> part of the core (ie, similar nonsense excuses the MySQL team used for
> "add-ons" such as triggers, etc).

I believe the developers' stance is more reasonable than you seem to
imagine. The upcoming developments allow replication utilities to tie
in at a deeper and more effective level, and with that new replication
solutions will come along. But I do not think there is any goal to
implement a single replication solution within core and not support
external solutions.

The point of the PostgreSQL developer stance is that until something
can be done correctly, even if it's a lot more work, it's sometimes
better not to do it at all. It was recognized early on that if we tried
to figure out the replication puzzle ourselves, it would invariably be
complex and never ideally suited to every situation. It would cost a
lot of resources that the team really needed to spend elsewhere at the
time.

MySQL's stance on things like triggers and subselects and so on is
*not* that at all. They recognize that a proper implementation would
be complicated and take a lot of time, so they strongly want to avoid
it, and make lame excuses a lot. When they do finally get around to
implementing something, they have traditionally done it in a broken or
lazy way - e.g. you cannot have two triggers for the same type of
action on the same table; instead you must write a wrapper function
that calls other functions. Subselects are always evaluated
independently, meaning they usually equate to "horribly slow"; there
are a lot of bugs, etc.

I prefer the way PostgreSQL development has been going, personally. :)

Cheers,
--
Casey Allen Shobe
Database Architect, The Berkeley Electronic Press

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [GENERAL] Oracle and Postgresql

On Sep 4, 2008, at 7:40 PM, Robert Treat wrote:
> It is not as simple as Oracles database link syntax. Setting up a
> connection
> involves a couple of sql looking commands, and once you setup a
> connection to
> a remote database, you can reference a table with something like
> select *
> from mytable@myotherdb. There's no way a function oriented solution
> can
> match that imho.

I have long thought that what would be really useful is a standard way
for third-party modules to extend or override the SQL language support
within PostgreSQL itself without needing to be integrated in core.

E.g. it should be possible for all of EnterpriseDB's Oracle-compatible
SQL changes to exist as a separate module; somebody could change the
behavior of SELECT to default its ordering so as to imitate Oracle, etc.
It should be possible for a replication engine to add syntax for options
specific to it. Contrib modules like dblink could install SQL-like
command support.

This would be both invaluable for compatibility efforts and probably
raise the amount of 3rd party stuff that actually gets used
(currently, many places I've seen avoid Slony because they fear having
to use the command-line scripts it comes with, and if you want to
manipulate Slony from the database itself, oftentimes this means you
have to use pl/perlu or another untrusted language).

Don't get me wrong, functions are great too. :) But currently the
above means that a lot of risk is introduced and you have to put a lot
of faith in the perl code - an exploit poses a lot of risk. If Slony
exposed it's own data to PG via custom SQL extensions, this would be
more secure by design.

Cheers,
--
Casey Allen Shobe
Database Architect, The Berkeley Electronic Press

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [ADMIN] Hex representation

Well, it was a bit convoluted, but I created the file with Excel,
filling the right number of cells with \x55. This worked too. The
script wouldn't run for me. I got an error about a "bad interpreter".

Carol

On Sep 25, 2008, at 1:40 PM, Steve Crawford wrote:

> Scott Marlowe wrote:
>> I used this very simple little php script to make this
>>
>> filename: mk55:
>> #!/usr/bin/php -q
>> <?php
>> for ($i=0;$i<262144;$i++){
>> print chr(85);
>> }
>> ?>
>>
> Or, using standard *nix tools (Note: 0x55 = ascii U):
> dd bs=1k count=256 if=/dev/zero | tr '\000' U > full_of_0x55
>
> Cheers,
> Steve
>
>
> --
> Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-admin


--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin

Re: [GENERAL] Oracle and Postgresql

On Sep 15, 2008, at 6:58 AM, David Fetter wrote:
>> Roles,
>
> We have 'em.

We do NOT have secure application roles, or anywhere near Oracle's level
of configurability in security matters. We've got a great foundation,
but we lack a lot of fine-grained control (e.g. an Oracle SAR can allow
a role to execute a particular function based on the result of another
function call or query, which opens up rather a lot of possibilities:
consider "grant connect on database to staff when hour_of_day() between
9 and 6"; also consider row-level, column-level, and even field-level
access controls).

It's complicated in Oracle, but there are a lot of possibilities there
that we simply cannot reproduce. But this could be extended one day. :)

Cheers,
--
Casey Allen Shobe
Database Architect, The Berkeley Electronic Press


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [ADMIN] Hex representation

Hi Steve,

That's a cool way to do it.  I wish I thought of it.
Here's yet another way by java:

public class lotsofhex55 {

    public static void main(String[] argv) {

        char myHex55 = (char) 0x55;

        // replace 256 with 262144 after testing
        for (int i = 0; i < 256; i++) {
            System.out.print(myHex55);
        }
    } // end main()
} // end class

Regards,

Tena Sakai
tsakai@gallo.ucsf.edu


-----Original Message-----
From: pgsql-admin-owner@postgresql.org on behalf of Steve Crawford
Sent: Thu 9/25/2008 10:40 AM
To: Scott Marlowe
Cc: Carol Walter; pgsql-admin@postgresql.org
Subject: Re: [ADMIN] Hex representation

Scott Marlowe wrote:
> I used this very simple little php script to make this
>
> filename: mk55:
> #!/usr/bin/php -q
> <?php
> for ($i=0;$i<262144;$i++){
>     print chr(85);
> }
> ?>
>  
Or, using standard *nix tools (Note: 0x55 = ascii U):
dd bs=1k count=256 if=/dev/zero | tr '\000' U  > full_of_0x55
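
If what's ultimately needed is the value inside PostgreSQL rather than a file on disk, the same bytes can be produced server-side (a sketch; chr(85) is 0x55, i.e. 'U'):

```sql
-- 262144 'U' characters (0x55), built directly in SQL:
SELECT repeat(chr(85), 262144);
```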

Cheers,
Steve


--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin

Re: [GENERAL] How to select rows that are the max for each subcategory?

"Kynn Jones" <kynnjo@gmail.com> writes:
> Suppose I have a table T that has, among its columns, the fields X and Y,
> where Y is an integer, and multiple rows with the same value of X are
> possible. I want to select the rows corresponding to the greatest values of
> Y for each value of X.

You could use DISTINCT ON --- see the "weather reports" example in the
SELECT reference page. It's not standard SQL but will usually be faster
than solutions that use only standard features.
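
For the table described in the question, the DISTINCT ON form could look like this sketch (table and column names as given by the original poster):

```sql
-- One row per x, keeping the row with the greatest y:
SELECT DISTINCT ON (x) x, y, z
FROM t
ORDER BY x, y DESC;
```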

regards, tom lane

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [JDBC] Performance comparison to psql.

right! the -1 flag...

I repeated the psql test, this time with the -1 flag, and the process completed in 107m,
so the transaction overhead in the previous psql tests is 5m.
psql is probably reading each line from the input file and immediately submitting it;
pgBee groups many lines together and batches them off to the server in one step.

btw, it's operations/sec, not milliseconds, in my previous email -- sorry! I am not using
prepared statements, as I have to cope with arbitrary SQL, so it looks like I'm approaching
the performance of the unbatched but prepared JDBC statements mentioned in this table:

Comparison table (records inserted per millisecond)
                  COPY   JDBC   JDBC batch
WITHOUT INDEXES:  198    1.5    14
WITH 2 INDEXES:    45    1.5    10

898 operations/second vs. 1500 records/second in the table above. Besides, these numbers
must be hardware-specific (I'm using a laptop with a 5400rpm disk)

Michael

[GENERAL] ease of use sync

On projects where I ended up selecting Oracle, *my* main reasons were
(1) clustering/replication
(2) cross-database query
(3) the promise of DRCP
in that order.

For (1), actually more for synchronization/transfer, I have a simple suggestion:
while installing PostgreSQL, why not ask the user for the IP of a second remote machine (with the right ports open) on which the installer can then repeat the exact installation, prompt for a sync frequency, and run pg_dumpall/pg_restore at that frequency? By the time you are done installing, you have a "standby"! Repeat that for upgrades.

It is easy to do manually (set up cron, Navicat, etc.) but far easier when combined with the PostgreSQL installation.

Re: [GENERAL] Indirect access to NEW or OLD records

"Dmitry Koterov" <dmitry@koterov.ru> writes:
> I have a variable with a field name and want to extract this field
> value from NEW record:

> DECLARE
> field VARCHAR = 'some_field';
> BEGIN
> ...
> value := NEW.{field}; -- ???
> END;

> Is it possible in pl/pgsql?

No. Quite aside from the lack of syntax for that, plpgsql would be
unhappy if the data type wasn't the same on successive executions.

Use one of the less strongly typed PLs instead. I believe this is
pretty easy in plperl or pltcl. (Maybe plpython too, but I don't know
that language.)

regards, tom lane

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [GENERAL] Oracle and Postgresql

On Sep 25, 2008, at 3:13 AM, Asko Oja wrote:
> but why would you put part of your business logic into some
> configuration tables while you could keep it in your own functions

Because as bad as my Not Invented Here syndrome might be at times, I
know that I would not be able to build alone as elegant a system as
the community could build together, one which would be more adaptable and
generic. Things tend not to make it into PostgreSQL releases until
they are very solid.

Then instead of worrying about whether or not my function code is
good, I can rely on the PostgreSQL-provided foundation, using things
in a standard, supported way. If there is a bug, it's found, fixed,
and the whole community benefits. Many people could benefit from SAR
support, so why make them all spend time reimplementing things?


There is a much, much more important reason for this though, which is
that I can only extend security functionality by adding additional
restrictions within my function call. The SAR stuff in Oracle is
superior to this because it affects *all use of the database*, not
just function calls. I don't put much value in security through
obscurity - sorry.

To an extent we *can* simulate row-level and column-level security
through the use of very restricted data tables and more generally
available or special-purpose views, but we cannot make PostgreSQL
call a custom function and use its output to determine whether or not
it should allow a particular action.

> I see it as a strong side of PostgreSQL that we have not bloated our
> code with all this fancy micromanagement that seems too complex to
> be useful anyway considering that quality of Oracle database
> management tools :)

Lack of bloat is a strong point. Bloat is mostly the result of putting
things in without enough forethought, and then needing to maintain
compatibility with the old even after you add something better - the
maintenance of the old code interfering with the time and quality put
into the new, and so on. It's also a question of how well features are
implemented. If 0.1% of the user base will use a feature, it's probably
not worth adding. If 10% will use it, but its implementation requires
added overhead for the 90% who don't, that's bloat. If 10% will use it,
and it doesn't cost anything to those who don't, it's worth doing.

I would like to clarify that I'm not saying anything like "PostgreSQL
needs / should have SAR support" in this thread, although I would
personally find it handy. I just wanted to point out that what
"roles" means to an Oracle DBA is a lot different from what it means
to a PostgreSQL DBA.

Cheers,
--
Casey Allen Shobe
Database Architect, The Berkeley Electronic Press


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [GENERAL] How to select rows that are the max for each subcategory?

Oops, you need to specify the table in the select, so:

select t.x,t.y,t.z
from
t join (select x, max(y) as my from t group by x) t1 on (t.x=t1.x and
t.y=t1.my)

On Thu, Sep 25, 2008 at 20:05, Harald Armin Massa
<haraldarminmassa@gmail.com> wrote:
> select x,y,z
> from
> t join (select x, max(y) as my from t group by x) t1 on (t.x=t1.x and t.y=t1.my)
>
> best wishes
>
> Harald
>
> On Thu, Sep 25, 2008 at 20:01, Kynn Jones <kynnjo@gmail.com> wrote:
>> Suppose I have a table T that has, among its columns, the fields X and Y,
>> where Y is an integer, and multiple rows with the same value of X are
>> possible. I want to select the rows corresponding to the greatest values of
>> Y for each value of X. E.g. suppose that T is
>> X Y Z
>> a 1 eenie
>> a 3 meenie
>> a 2 miny
>> b 4 moe
>> b 0 catch
>> ...the result of the desired query would be
>> a 3 meenie
>> b 4 moe
>> TIA!
>> Kynnjo
>>
>>
>>
>
>
>
> --
> GHUM Harald Massa
> persuadere et programmare
> Harald Armin Massa
> Spielberger Straße 49
> 70435 Stuttgart
> 0173/9409607
> no fx, no carrier pigeon
> -
> EuroPython 2009 will take place in Birmingham - Stay tuned!
>

--
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Spielberger Straße 49
70435 Stuttgart
0173/9409607
no fx, no carrier pigeon
-
EuroPython 2009 will take place in Birmingham - Stay tuned!

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [GENERAL] How to select rows that are the max for each subcategory?

select x,y,z
from
t join (select x, max(y) as my from t group by x) t1 on (t.x=t1.x and t.y=t1.my)

best wishes

Harald

On Thu, Sep 25, 2008 at 20:01, Kynn Jones <kynnjo@gmail.com> wrote:
> Suppose I have a table T that has, among its columns, the fields X and Y,
> where Y is an integer, and multiple rows with the same value of X are
> possible. I want to select the rows corresponding to the greatest values of
> Y for each value of X. E.g. suppose that T is
> X Y Z
> a 1 eenie
> a 3 meenie
> a 2 miny
> b 4 moe
> b 0 catch
> ...the result of the desired query would be
> a 3 meenie
> b 4 moe
> TIA!
> Kynnjo
>
>
>

--
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Spielberger Straße 49
70435 Stuttgart
0173/9409607
no fx, no carrier pigeon
-
EuroPython 2009 will take place in Birmingham - Stay tuned!

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

RE: [pgsql-es-ayuda] Backup or Restore of Views and Functions

Moises,

Yes, I already tried that, and it also dumps all the tables.
I was hoping to find a way to do this for the functions and views only.

Thanks

-----Original Message-----
From: pgsql-es-ayuda-owner@postgresql.org
[mailto:pgsql-es-ayuda-owner@postgresql.org] On behalf of Moises Alberto
Lindo Gutarra
Sent: Thursday, September 25, 2008 12:09
To: pgsql-es-ayuda@postgresql.org
Subject: Re: [pgsql-es-ayuda] Backup or Restore of Views and Functions

El día 25 de septiembre de 2008 11:52, Marcelino Guerrero (GMail)
<mguerreroh@gmail.com> escribió:
> Amigos,
>
> Una consulta necesito sacar Backup solo de Funciones y Vistas, es
> posible hacerlo?
>
> En caso lo anterior no se a posible, es posible hacer un restore solo
> de funciones y vistas?
>
> Utilizo postgresql-8.1 y postgresql-8.0
>
> Slds + Grcs
>
>
> --
> TIP 4: No hagas 'kill -9' a postmaster
>

que yo sepa se puede todo el esquema que incluyen las vistas y funciones,
adicionalmente otros objetos.

pg_dump <database> -s > <file>


--
Atentamente,
Moisés Alberto Lindo Gutarra

--
TIP 10: no uses HTML en tu pregunta, seguro que quien responda no podrá leerlo
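Since `pg_dump -s` dumps the entire schema (tables included), a common workaround is to take a custom-format dump and replay only the FUNCTION and VIEW entries of its table of contents with `pg_restore -L`. A sketch (the TOC lines below are hypothetical examples of what `pg_restore -l` prints):

```shell
# Hypothetical TOC listing, as produced by: pg_restore -l dump.fc
cat > toc.list <<'EOF'
25; 1255 16388 FUNCTION public f_cursor(text) postgres
42; 1259 16390 TABLE public t postgres
51; 1259 16395 VIEW public v_top postgres
EOF

# Keep only the FUNCTION and VIEW entries.
grep -E ' (FUNCTION|VIEW) ' toc.list > toc.filtered
cat toc.filtered

# A real restore would then be:
#   pg_dump -Fc <database> > dump.fc
#   pg_restore -l dump.fc > toc.list     (then filter toc.list as above)
#   pg_restore -L toc.filtered -d <database> dump.fc
```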

[GENERAL] How to select rows that are the max for each subcategory?

Suppose I have a table T that has, among its columns, the fields X and Y, where Y is an integer, and multiple rows with the same value of X are possible.  I want to select the rows corresponding to the greatest values of Y for each value of X.  E.g. suppose that T is

X Y Z
a  1 eenie
a  3 meenie
a  2 miny
b  4 moe
b  0 catch

...the result of the desired query would be

a  3 meenie
b  4 moe

TIA!

Kynnjo



Re: [GENERAL] Dynamically created cursors vanish in PLPgSQL

Hello

try to look at http://okbob.blogspot.com/2008/08/using-cursors-for-generating-cross.html

regards
Pavel Stehule

P.S. You should use a transaction.
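To spell out the transaction point: a cursor opened without HOLD lives only until the end of the surrounding transaction, and a bare SELECT runs in its own implicit one. Keeping the FETCH inside the same explicit transaction works (a sketch, reusing the f_cursor function from the quoted post):

```sql
BEGIN;
SELECT f_cursor('SELECT * FROM pg_tables');  -- returns e.g. cursor_1
FETCH 10 FROM cursor_1;                      -- succeeds inside the transaction
COMMIT;                                      -- the cursor disappears here
```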

2008/9/25 Reg Me Please <regmeplease@gmail.com>:
> Hi all.
>
> I'm running PGSQL v.8.3.3
>
> I tried to adapt the examples from the friendly manual (38.7.3.5) in order
> to have a function that creates cursors based on a parametric query string:
>
> CREATE SEQUENCE s_cursors;
>
> CREATE OR REPLACE FUNCTION f_cursor( query text, out curs refcursor )
> LANGUAGE PLPGSQL STRICT
> AS $BODY$
> DECLARE
> c refcursor;
> BEGIN
> c := 'cursor_'||nextval( 's_cursors' );
> OPEN c SCROLL FOR EXECUTE query;
> curs := c;
> END;
> $BODY$;
>
> SELECT f_cursor( 'SELECT * FROM pg_tables' );
>
> curs
> -----------
> cursor_1
> (1 row)
>
> FETCH 10 FROM cursor_1;
>
> ERROR: cursor "cursor_1" does not exist
>
> SELECT * from pg_cursors ;
> name | statement | is_holdable | is_binary | is_scrollable | creation_time
> ------+-----------+-------------+-----------+---------------+---------------
> (0 rows)
>
> The cursor is (or should have been) created, since there's no error, but it
> seems to vanish as soon as the creating function returns.
> As if it were created "WITHOUT HOLD", which doesn't make much sense for a
> function returning a refcursor; presumably that is why there is no "HOLD"
> clause in the PL/pgSQL cursor syntax.
>
> I think it's more likely that I am making some mistake, but I have no idea where.
>
> Any hint?
>
> Thanks in advance
>
> RMP.
>
>
>


Re: [GENERAL] Counting rows in a PL/PgSQL CURSOR without fetching?

Reg Me Please <regmeplease@gmail.com> writes:
> Unluckily,
> MOVE LAST FROM curs1;
> won't work with
> GET DIAGNOSTICS cnt = ROW_COUNT;

Hmm, you're right that MOVE doesn't set row_count (because plpgsql
itself isn't fetching the rows), but it kinda seems like it should.

regards, tom lane


[GENERAL] Indirect access to NEW or OLD records

Hello.

I have a variable holding a field name and want to extract that field's
value from the NEW record:

DECLARE
field VARCHAR = 'some_field';
BEGIN
...
value := NEW.{field}; -- ???
END;

Is it possible in pl/pgsql?

I have found one inefficient solution: convert NEW to a string and
then use EXECUTE with a custom-built query to extract the value from
that constant string. But it is too slow. Is there a better solution?
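One workaround is to write the trigger in PL/Python, where the new row is exposed as the dictionary TD["new"], so a field can be picked by name without EXECUTE (a sketch; pick_field, 'some_field', and the NOTICE are illustrative, and it assumes the plpythonu language is installed):

```sql
CREATE OR REPLACE FUNCTION pick_field() RETURNS trigger
LANGUAGE plpythonu AS $$
    field = 'some_field'               # dynamic field name
    value = TD["new"][field]           # indirect access to NEW, no EXECUTE
    plpy.notice("value = %s" % value)
    return None                        # let the row through unchanged
$$;
```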


Re: [ADMIN] Hex representation

Scott Marlowe wrote:
> I used this very simple little php script to make this
>
> filename: mk55:
> #!/usr/bin/php -q
> <?php
> for ($i=0;$i<262144;$i++){
> print chr(85);
> }
> ?>
>
Or, using standard *nix tools (note: 0x55 is ASCII 'U'):
dd bs=1k count=256 if=/dev/zero | tr '\000' U > full_of_0x55
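A quick check that the pipeline really produces 262144 bytes of 0x55:

```shell
# Build the 256 KiB file of 'U' (0x55) bytes.
dd bs=1k count=256 if=/dev/zero 2>/dev/null | tr '\000' U > full_of_0x55

# Verify: the file is 262144 bytes, and deleting every 'U' leaves nothing.
wc -c < full_of_0x55
tr -d 'U' < full_of_0x55 | wc -c
```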

Cheers,
Steve


--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin

Re: [HACKERS] Review Report: propose to include 3 new functions into intarray and intagg

No problem, I have time for the cleanup. But are these functions
guaranteed to be included in contrib? If there is no guarantee, it
seems the cleanup time would be wasted. (Five years ago I cleaned up an
open-source library on request and it still was not approved for the
PEAR repository, so now I am more circumspect, sorry :-).

So, is the approval procedure finished? If not, who makes the final
decision ("to be or not to be")? Sorry, I don't yet know the patch
proposal procedure well...

P.S.
1. Unfortunately GROUP BY + ORDER BY is sometimes 1000 times (!)
slower than _int_group_count_sort. That is why I created this
function.
2. I have to assume that the input is sorted, because otherwise
performance drops badly. Sorting is quite expensive, and checking
whether an array is sorted is also quite expensive... So I don't think
it is a good idea to make the functions sorting-independent.
3. Converting the functions to anyarray/anyelement seems much more
time-consuming than simply including them in the specialized
intarray/intagg. Is that absolutely necessary?


On Mon, Sep 15, 2008 at 4:38 PM, Markus Wanner <markus@bluegap.ch> wrote:
> Hi,
>
> sorry for not having completed this review, yet. As you are obviously
> looking at the patch as well, I'll try to quickly write down my points so
> far.
>
> Trying to compile the intarray module, I now receive an error:
>
> error: 'INT4OID' undeclared (first use in this function)
>
> That can be solved by including "catalog/pg_type.h" in
> contrib/intarray/_int_op.c.
>
>
> The PG_FUNCTION_INFO_V1 and prototype definition certainly belong to the top
> of the file, where all others are.
>
> Some lines are longer than 80 columns and again comments are a bit sparse or
> even useless (no "additional things", please).
>
>
> Heikki Linnakangas wrote:
>>
>> I find it a bit unfriendly to have a function that depends on sorted
>> input, but doesn't check it. But that's probably not a good enough reason to
>> reject an otherwise simple and useful function. Also, we already have uniq,
>> which doesn't strictly speaking require sorted input, but it does if you
>> want to eliminate all duplicates from the array.
>
> I think it's a performance optimization which is absolutely required in some
> cases. Some time ago I've also had to rip out the sorting step from certain
> intarray module functions to save processing time.
>
> One option already mentioned somewhere would be saving a 'sorted' property
> for the array. Then again, I think such a thing would certainly have to be
> done globally, for all kinds of arrays.
>
>> _int_group_count_sort seems a bit special purpose. Why does it bother to
>> sort the output? That's wasted time if you don't need sorted output, or if
>> you want the array sorted by the integer value instead of frequency. If you
>> want sorted output, you can just sort it afterwards.
>
> Agreed. IMO the normal GROUP BY and ORDER BY stuff of the database itself
> should be used for such a thing. However, that means turning an array into a
> set of rows...
>
>> Also, it's requiring sorted input for a small performance gain, but
>> there's a lot more precedence in the existing intarray functions to not
>> require sorted input, but to sort the input instead (union, intersect, same,
>> overlap).
>
> ..and exactly these are the functions I had to wrap again to strip the
> sorting step, due to poor performance for known-sorted arrays.
>
>> I realize that the current implementation is faster for the use case where
>> the input is sorted, and output needs to be sorted, but if we go down that
>> path we'll soon have dozens of different variants of various functions, with
>> different ordering requirements of inputs and outputs.
>
> Agreed.
>
> However, given the OP is using that in production, there seems to be a use
> case for the optimization, where we have none for the same function without
> it.
>
>> So, I'd suggest changing _int_group_count_sort so that it doesn't require
>> sorted input, and doesn't sort the output. The binary search function looks
> good to me (I think I'd prefer naming it bsearch(), though I can see
>> that it was named bidx in reference to the existing idx function). Also, as
>> Markus pointed out, the SGML docs need to be updated.
>
> As is, it should probably also carry the '_int' prefix, because it's not a
> general purpose array function. So I propose naming it '_int_bsearch'.
>
> Overall I think these functions are overly specialized and should be
> replaced by more general counterparts in core. However, until we have that,
> it's hard to refuse such a thing for contrib.
>
> By that reasoning, however, intarray would have to provide methods for
> both sorted and unsorted input arrays.
>
> I'm closing my part of reviewing of this patch now. Dmitry, how do you want
> to proceed with these patches? Do you have time for some cleaning up and
> writing documentation?
>
> Regards
>
> Markus Wanner
>

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [ADMIN] Strange highload on server

"Aras Angelo" <araskoktas@gmail.com> writes:
> We have been running our database server without any problem or reboot at
> all for a year. Yesterday we needed to change servers so we went ahead and
> installed the same postgresql version (8.2.1), restored the database files,
> ran some tests offline. Everything was good. As soon as we launch the site
> and the traffic hits the server, the load is going up to the sky, and the
> top command lists nothing but postgres processes. Strangely, even if I kill
> Apache and stop the traffic, somehow the load doesn't cool down, so I thought
> this might be an issue with postgres processes not timing out or something.
> But we use a very minimal config and the same config was being used in our
> other server without any problem.

I suspect you forgot to re-ANALYZE your tables after reloading all the
data.

BTW, running a production site on 8.2.1 borders on negligence. You're
missing more than a year and a half's worth of bug fixes, including at
least one serious risk of data loss.

regards, tom lane


[GENERAL] Dynamically created cursors vanish in PLPgSQL

Hi all.

I'm running PGSQL v.8.3.3

I tried to adapt the examples from the friendly manual (38.7.3.5) in order
to have a function that creates cursors based on a parametric query string:

CREATE SEQUENCE s_cursors;

CREATE OR REPLACE FUNCTION f_cursor( query text, out curs refcursor )
LANGUAGE PLPGSQL STRICT
AS $BODY$
DECLARE
c refcursor;
BEGIN
c := 'cursor_'||nextval( 's_cursors' );
OPEN c SCROLL FOR EXECUTE query;
curs := c;
END;
$BODY$;

SELECT f_cursor( 'SELECT * FROM pg_tables' );

curs
-----------
cursor_1
(1 row)

FETCH 10 FROM cursor_1;

ERROR: cursor "cursor_1" does not exist

SELECT * from pg_cursors ;
name | statement | is_holdable | is_binary | is_scrollable | creation_time
------+-----------+-------------+-----------+---------------+---------------
(0 rows)

The cursor is (or should have been) created, since there's no error, but it
seems to vanish as soon as the creating function returns.
As if it were created "WITHOUT HOLD", which doesn't make much sense for a
function returning a refcursor; presumably that is why there is no "HOLD"
clause in the PL/pgSQL cursor syntax.

I think it's more likely that I am making some mistake, but I have no idea where.

Any hint?

Thanks in advance

RMP.



Re: [ADMIN] missing chunk number 0 for toast value

"BJ Taylor" <btaylor@propertysolutions.com> writes:
> Here are some recent logs from our system. Unfortunately, I didn't think to
> grab the logs at the time I killed those processes, and now they are gone.
> I found those processes by using ps, and then I killed them with a simple
> kill *processid*. Here are samples of our current log files:

Huh. That would have generated SIGTERM, which should at least mostly be
safe. We did recently fix a couple of problems with cleanup after
SIGTERM but they were both pretty low-probability risks. Anyway,
you've definitely got issues now:

> PANIC: right sibling's left-link doesn't match: block 175337 links to
> 243096 instead of expected 29675 in index "dbmail_headervalue_3"

Try reindexing that index, and any other ones that you see complaints
about.

> You also asked how it is being unstable. It drops connections seemingly at
> random. The error received when a connection is dropped is the following:

> WARNING: terminating connection because of crash of another server process

Well, that's just the fallout from a PANIC in some other session; if you
weren't watching the session that actually crashed, you need to look in
the postmaster log to see what went wrong.

regards, tom lane


Re: [BUGS] BUG #4437: Breaking referential integrity with a trigger

"Tim Leppard" <hbug_1@hotmail.com> writes:
> Returning NULL from a BEFORE DELETE trigger function on a referencing table
> using CASCADE allows you to break RI.

Yup, so don't do that ;-). Actually there are any number of ways to
break an RI constraint with poorly designed triggers. The only way
we could prevent it is to make RI updates not fire triggers, which seems
a cure worse than the disease.

regards, tom lane

--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs

[pgsql-www] Upgrading pgFoundry

Re-sent -hackers message:

On Sep 24, 2008, at 14:02, Joshua Drake wrote:

> I think one problem we have right now is that nobody knows what it is
> going to take. I would expect that our current version is sufficiently
> old to cause some migration pain?
>
> I know we have two members willing to help that are not Stefan and I.
> Which is good, but this doesn't appear to be a small project.

Does anyone know what needs to be done? If so, and you'd like to reply
with a list of tasks, I can put in a little time this week and maybe
next starting down that road. If it's a big job, I likely can't do it
all myself, but I'm certainly happy to help out!

Best,

David


--
Sent via pgsql-www mailing list (pgsql-www@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-www

Re: [pgsql-es-ayuda] Backup or Restore of Views and Functions

On September 25, 2008 at 11:52, Marcelino Guerrero (GMail)
<mguerreroh@gmail.com> wrote:
> Friends,
>
> A question: I need to back up only functions and views. Is it
> possible to do that?
>
> If the above is not possible, is it possible to do a restore of only
> functions and views?
>
> I use postgresql-8.1 and postgresql-8.0
>
> Regards and thanks
>
>
> --
> TIP 4: Don't 'kill -9' the postmaster
>

As far as I know, you can dump the whole schema, which includes the views
and functions, along with other objects:

pg_dump <database> -s > <file>


--
Regards,
Moisés Alberto Lindo Gutarra
Consultant - Java / Open Source Developer
Linux Registered User #431131 - http://counter.li.org/
Cel: (511) 995081720
MSN: mlindo@tumisolutions.com
--
TIP 4: Don't 'kill -9' the postmaster