Tuesday, June 3, 2008

Re: [HACKERS] proposal: Preference SQL

Decibel! wrote:
> On May 29, 2008, at 6:08 PM, Jan Urbański wrote:
>> Preference SQL is an extension to regular SQL, that allows expressing
>> preferences in SQL queries. Preferences are like "soft" WHERE clauses.

> This seems like a subset of http://pgfoundry.org/projects/qbe/ ... or do
> I misunderstand?

I skimmed through the QBE howto, and I think it's actually far from it.
The thing that most closely resembles preference clauses is the SKYLINE OF
operator, mentioned earlier in the thread - there is some coverage of it
in the archives.

I'm still working on producing a comparison of preference SQL and the
skyline operator; more to follow soon.

--
Jan Urbanski
GPG key ID: E583D7D2

ouden estin


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [pgsql-advocacy] International quality standards

Jaime Casanova wrote:
> Hi all,
>
> I'm trying to find out if there are any papers or case studies that
> show that Postgres fulfills international quality standards.
>
> This is for a friend of mine who has to give a speech about it at a
> conference in Peru.

Hi Jaime,

You'll find that most quality standards, like ISO 9000/9001, CMM, etc.,
apply to the organisation and processes that deliver a product or
service, rather than to the product or service itself. You might need to
look at less well recognised areas like how bugs are handled, speed to
correct vulnerabilities, and that sort of thing.

Ciao
Fuzzy
:-)
------------------------------------------------
Dazed and confused about technology for 20 years
http://fuzzydata.wordpress.com/

--
Sent via pgsql-advocacy mailing list (pgsql-advocacy@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-advocacy

Re: [pgsql-www] replace training blurb with upcoming pug meetings?

Robert Treat wrote:
> On Wednesday 28 May 2008 18:11:19 Simon Riggs wrote:
>> On Wed, 2008-05-28 at 10:58 -0700, Joshua D. Drake wrote:
>>> On Wed, 2008-05-28 at 19:47 +0200, Magnus Hagander wrote:
>>>> Robert Treat wrote:
>>>>> In an effort to give more visibility to the pugs, one idea that was
>>>>> floated around was replacing the training blurb on the main site
>>>>> with an "upcoming pugs" section, modeled after the upcoming events.
>>>>> The training blurb would then be changed to a direct link saying
>>>>> "looking for training?" (or similar) which would take you to the
>>>>> full training page. There are some logistical issues that would
>>>>> need to be worked out to make this happen, but before we go down
>>>>> that path, I wanted to get a general consensus on the idea.
>>>>> Thoughts?
>>>>
>>>> Any way we can have both? I agree it's a good idea to get upcoming
>>>> pugs, but I'd like to keep the training info somehow.
>>>
>>> I think it is a matter of deciding the purpose of the space. IMO .Org
>>> should be all about community, not all about advertising for
>>> commercial providers. That isn't to say we shouldn't help the people
>>> who help us (obviously) but it is to say that a PUG should come
>>> before CMD, EDB or any other commercial entity listed. That being
>>> said, we have a problem in that the front page is remarkably
>>> cluttered. It is trying to tell people entirely too much all at once.
>>> Something has to give if we are going to add PUGS. Training or Latest
>>> News seems the most appropriate.
>>
>> Yes, it is cluttered, so I think we should scroll down the page and
>> make more space for ourselves. The community includes commercial
>> people too. And the commercial people only exist because they are
>> wanted and needed. The suggestion to have a list of PUGs over training
>> could go the other way too. We could say "Want a PUG?". If they do,
>> they'll click. That is of course fairly silly, but then so is "Want
>> training?", which hides anything interesting and unusual and
>> effectively kills it. If we do that I may as well pack up and just
>> offer one course called "Training".
>
> See, to me the current training blurb is already devoid enough of
> content that I think it is only marginally better than a direct link
> pointing people towards the full training. Adding in a list of PUG
> meetings helps highlight the growing regional presence and
> international communities for postgres around the world. I feel this
> used to be accomplished by the training listing, but there was so much
> gamesmanship between the training companies that we had to mold it
> into its current, less than exciting, format. I don't expect that
> gamesmanship from the PUGS.

I'd have to whole-heartedly agree. We saw a lot more interest in training
when there were a few courses listed on the main page (those few being
the ones that we would tend to get the most calls about). With the shift
to not showing any courses on the main page (due to event spam of sorts),
the interest has really waned (though it's difficult to determine if this
is due to economic factors as opposed to the page changing, I think it
has to do a little with both).

I think a single link to training in a spot of its very own would provide
a bit more visibility than being cluttered with some text and a link.
Perhaps we can have some list of the number of events as well? Want
training (22 events coming up!)?

--
Chander Ganesan
Open Technology Group, Inc.
One Copley Parkway, Suite 210
Morrisville, NC 27560
919-463-0999/877-258-8987
http://www.otg-nc.com

[COMMITTERS] pgsql: Remove unused variable (was already done in HEAD)

Log Message:
-----------
Remove unused variable (was already done in HEAD)

Tags:
----
REL8_3_STABLE

Modified Files:
--------------
pgsql/src/interfaces/ecpg/ecpglib:
prepare.c (r1.26.2.1 -> r1.26.2.2)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/interfaces/ecpg/ecpglib/prepare.c?r1=1.26.2.1&r2=1.26.2.2)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

Re: [HACKERS] Hint Bits and Write I/O

On May 27, 2008, at 2:35 PM, Simon Riggs wrote:
> After some discussions at PGCon, I'd like to make some proposals for
> hint bit setting with the aim to reduce write overhead.


For those that missed it... http://wiki.postgresql.org/wiki/Hint_Bits


(see archive reference at bottom).
--
Decibel!, aka Jim C. Nasby, Database Architect decibel@decibel.org
Give your computer some brain candy! www.distributed.net Team #1828

Re: [SQL] Update problem

I tried getting the output of the execute statements by printing the
FOUND variable. It is returning the value as false.
However I used PERFORM instead of EXECUTE for the update statement. It
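
One thing worth checking here (a sketch, with a pared-down version of the
statement below): in PL/pgSQL, EXECUTE never sets FOUND, and PERFORM
evaluates its argument as a query expression rather than executing a
string as SQL, so GET DIAGNOSTICS is the way to see what a dynamic UPDATE
actually did.

CREATE OR REPLACE FUNCTION run_dynamic_update(thepartition text,
                                              updated_volume numeric,
                                              lane integer)
RETURNS void AS $$
DECLARE
    rows_updated integer;
BEGIN
    -- EXECUTE does not set FOUND; GET DIAGNOSTICS reports its row count
    EXECUTE 'UPDATE ' || thepartition
         || ' SET volume = ' || updated_volume
         || ' WHERE lane_id = ' || lane;
    GET DIAGNOSTICS rows_updated = ROW_COUNT;
    RAISE NOTICE 'dynamic UPDATE touched % rows', rows_updated;
END;
$$ LANGUAGE plpgsql;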

On 6/3/08, samantha mahindrakar <sam.mahindrakar@gmail.com> wrote:
> Hi....
> I am facing a strange issue....
> One of the functions in my program is running an update statement. The
> statement is running cross-schema. What I mean is that the program
> resides in one schema, whereas it updates a table in another schema.
> However, these schemas are in the same database.
> The program runs correctly and also prints out the update statement.
> But it never actually updates the table.....neither does it fail.
> However when I run one of the update statements individually in the
> query tool...the update happens.
> I am assuming that this is not a problem with the permissions either,
> since the permission for the table to be updated is set to public.
> I am pasting the update statement for reference:
>
> EXECUTE 'UPDATE '||thepartition||' SET
> volume='||updated_volume||',occupancy='||updated_occ||',speed='||updated_speed||'
> WHERE lane_id='||lane||' and measurement_start =
> '''||measurement_start||'''';
>
> Thanks
> Sam
>

--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql

Re: [HACKERS] BUG #4204: COPY to table with FK has memory leak

On May 28, 2008, at 1:22 PM, Gregory Stark wrote:
> "Tom Lane" <tgl@sss.pgh.pa.us> writes:
>> "Tomasz Rybak" <bogomips@post.pl> writes:
>>> I tried to use COPY to import 27M rows to table:
>>> CREATE TABLE sputnik.ccc24 (
>>> station CHARACTER(4) NOT NULL REFERENCES
>>> sputnik.station24 (id),
>>> moment INTEGER NOT NULL,
>>> flags INTEGER NOT NULL
>>> ) INHERITS (sputnik.sputnik);
>>> COPY sputnik.ccc24(id, moment, station, strength, sequence, flags)
>>> FROM '/tmp/24c3' WITH DELIMITER AS ' ';
>>
>> This is expected to take lots of memory because each
>> row-requiring-check generates an entry in the pending trigger event
>> list. Even if you had not exhausted memory, the actual execution of
>> the retail checks would have taken an unreasonable amount of time.
>> The recommended way to do this sort of thing is to add the REFERENCES
>> constraint *after* you load all the data; that'll be a lot faster in
>> most cases because the checks are done "in bulk" using a JOIN rather
>> than one-at-a-time.
>
> Hm, it occurs to me that we could still do a join against the pending
> event trigger list... I wonder how feasible it would be to store the
> pending trigger event list in a temporary table instead of in RAM.


Related to that, I really wish that our statement-level triggers
provided NEW and OLD recordsets like some other databases do. That
would allow for RI triggers to be done on a per-statement basis, and
they could aggregate keys to be checked.
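
For reference, a sketch of the load-then-constrain pattern Tom describes,
using the table from the original report:

-- create the table without the FK, COPY the data, then add the
-- constraint so the check runs as a single bulk JOIN instead of one
-- trigger event per row
CREATE TABLE sputnik.ccc24 (
    station CHARACTER(4) NOT NULL,
    moment  INTEGER NOT NULL,
    flags   INTEGER NOT NULL
) INHERITS (sputnik.sputnik);

COPY sputnik.ccc24(id, moment, station, strength, sequence, flags)
    FROM '/tmp/24c3' WITH DELIMITER AS ' ';

ALTER TABLE sputnik.ccc24
    ADD FOREIGN KEY (station) REFERENCES sputnik.station24 (id);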
--
Decibel!, aka Jim C. Nasby, Database Architect decibel@decibel.org
Give your computer some brain candy! www.distributed.net Team #1828

Re: [GENERAL] does postgresql works on distributed systems?

At 4:15p -0400 on Tue, 03 Jun 2008, Aravind Chandu wrote:
> Is postgresql similar to sql server or does it support network
> sharing, i.e. can one access postgresql from any system irrespective
> of which system it is installed on?

Postgres is an open source project and is not bound by the same business
rules that Microsoft products are. Postgres has *no limitation* on the
number of connections, short of what your system can handle (network,
memory, queries, disk, etc.).

> If there is any weblink for this kindly provide that also.
> Thank You,

http://www.postgresql.org/docs/current/static/runtime-config-connection.html

Should get you started.

Kevin

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [GENERAL] does postgresql works on distributed systems?

Excuse me, but maybe I'm misunderstanding your statements and questions
here?

MS SQL Server most certainly can be accessed from a network; three ways
immediately come to mind:
- the isql command line
- the osql command line
- Perl using the DBI interface

ODBC drivers help in some configuration scenarios, but there is no
question that MS SQL Server can be accessed from any network
configuration; suffice it to say there is no security mechanism denying
this access.

On your second point: PostgreSQL absolutely can be accessed over the
network as well!

On Tue, 3 Jun 2008, aravind chandu wrote:

> Hi,
>
> My question is:
>
> Microsoft SQL Server 2005 cannot be shared across multiple systems,
> i.e. in a network environment, when it is installed on one system it
> cannot be accessed from other systems. One can access it only from a
> system where it is already installed, but not from a system without
> SQL Server. Is PostgreSQL similar to SQL Server, or does it support
> network sharing, i.e. can one access PostgreSQL from any system
> irrespective of which system it is installed on?
>
> If there is any weblink for this kindly provide that also.
>
> Thank You,
>
> Avinash
>

--
Louis Gonzales
louis.gonzales@linuxlouis.net
http://www.linuxlouis.net


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [GENERAL] E_PARSE error ?

Hi,
I think this is the wrong list; it appears to be a PHP error.

Anyway, try to put the global $_SERVER['SCRIPT_NAME'] into {} braces:

list($page_id)=sqlget("select page_id from pages where
name='{$_SERVER['SCRIPT_NAME']}'");

Hope you're not lost anymore ...
Ludwig

PJ schrieb:
> I'm using php5, postgresql 8.3, apache2.2.8, FreeBSD 7.0
> I don't understand the message:
>
> *Parse error*: syntax error, unexpected T_ENCAPSED_AND_WHITESPACE,
> expecting T_STRING or T_VARIABLE or T_NUM_STRING
>
> the guilty line is:
>
> list($page_id)=sqlget("
> select page_id from pages where name='$_SERVER['SCRIPT_NAME']'");
>
> the variable value is "/index.php"
>
> however, at the time of execution this has been cleared
>
> So, the question is - What is the unexpected T_ENCAPSED_AND_WHITESPACE?
> and What is actually expected? Are we talking about the content of
> $_SERVER['SCRIPT_NAME'] or what is the syntax error? This is within
> php code; could it be that the parser is reading this as something
> else, like HTML?
> I'm lost :((
>


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [GENERAL] Forcing Postgres to Execute a Specific Plan

All optimizer paths go through the function "standard_planner", which
mainly calls "subquery_planner"; that takes the rewritten "Query"
structure as its main parameter. The system also provides a way to plug
in your own optimizer: set the global variable "planner_hook" to your
own optimizer function (please refer to the function "planner"). That is
one way to prevent the system from taking its own optimizer routine.

If you want to modify the plan returned by the optimizer, you can add
some code right in the function "planner", i.e., pass the result as the
parameter of your routine.

Either way, you will need to become very familiar with the structure of
"PlannedStmt".


**********************************************************************
2008/6/3 John Cieslewicz <johnc@cs.columbia.edu>:
I completely understand that what I am proposing is somewhat mad and I didn't expect it to be easy.

Basically, I'm doing some research on a new operator and would like to start testing it by inserting it into a very specific place in very specific plans without having to do too much work in plan generation or optimization. I think that I could do this by writing some code to inspect a plan and swap out the piece that I care about. I realize this is a hack, but at the moment it's just for research purposes. Though I have worked with the internals of other db systems, I'm still getting familiar with postgres. Could such a piece of code be placed in the optimizer just before it returns an optimized plan or can a plan be modified after it is returned by the optimizer?

John Cieslewicz.

Re: [HACKERS] Case-Insensitve Text Comparison

David E. Wheeler wrote:
> On Jun 3, 2008, at 12:06, Zdenek Kotala wrote:
>
>> It is simple. The SQL standard does not specify notation for that
>> (chapter 11.34), but there is a proposed notation:
>>
>> CREATE COLLATION <collation name> FOR <character set specification>
>> FROM <existing collation name> [ <pad characteristic> ] [ <case
>> sensitive> ] [ <accent sensitive> ] [ LC_COLLATE <lc_collate> ] [
>> LC_CTYPE <lc_ctype> ]
>>
>> <pad characteristic> := NO PAD | PAD SPACE
>> <case sensitive> := CASE SENSITIVE | CASE INSENSITIVE
>> <accent sensitive> := ACCENT SENSITIVE | ACCENT INSENSITIVE
>>
>>
>> You can specify for each collation whether it is case sensitive or
>> not, and the collation function should be responsible for correctly
>> handling this flag.
>
> Wooo! Now if only I could apply that on a per-column basis. Still,
> it'll be great to have this for a whole database.

The first step is per-database, because it is relatively easy. Collation
per-column is very difficult. It requires a lot of changes (parser,
planner, executor...) throughout the whole source code, because you need
to keep the collation information together with the text data.

That is the reason why this task is split into several parts.

Zdenek

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] Case-Insensitve Text Comparison

David E. Wheeler wrote:
> On Jun 3, 2008, at 02:27, Zdenek Kotala wrote:
>
>> The GSoC proposal is here:
>> http://archives.postgresql.org/pgsql-hackers/2008-05/msg00857.php
>>
>> It should create a basic framework for full SQL COLLATION support.
>> All comments are welcome.
>
> That looks great, Zdenek. I'm very excited to have improved SQL
> COLLATION support in core. But if I could ask a dumb question, how would
> I specify a case-insensitive collation? Or maybe the Unicode Collation
> Algorithm is case-insensitive or has case-insensitive support?

It is simple. The SQL standard does not specify notation for that
(chapter 11.34), but there is a proposed notation:

CREATE COLLATION <collation name> FOR <character set specification> FROM
<existing collation name> [ <pad characteristic> ] [ <case sensitive> ] [
<accent sensitive> ] [ LC_COLLATE <lc_collate> ] [ LC_CTYPE <lc_ctype> ]

<pad characteristic> := NO PAD | PAD SPACE
<case sensitive> := CASE SENSITIVE | CASE INSENSITIVE
<accent sensitive> := ACCENT SENSITIVE | ACCENT INSENSITIVE


You can specify for each collation whether it is case sensitive or not,
and the collation function should be responsible for correctly handling
this flag.
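
Under that proposed notation, a case-insensitive collation might be
declared like this (purely illustrative; the syntax is a proposal, not
something PostgreSQL accepts today):

CREATE COLLATION case_insensitive FOR "UTF8" FROM "en_US"
    PAD SPACE CASE INSENSITIVE ACCENT SENSITIVE;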


Zdenek

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[GENERAL] Re: PostgreSQL 8.3 XML parser seems not to recognize the DOCTYPE element in XML files

Bruce Momjian wrote:
> Added to TODO:
>
> * Allow XML to accept more liberal DOCTYPE specifications

Is any form of DOCTYPE accepted?

We're getting errors on the second line of an XML document that
starts like this:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE DOT_OFFICER_CITATION SYSTEM "http://host.domain/dtd/dotdisposition0_02.dtd">

The actual host.domain value is resolved by DNS,
and wget of the url works on the server running PostgreSQL.
Attempts to cast the document to type xml give:

ERROR: invalid XML content
DETAIL: Entity: line 2: parser error : StartTag: invalid element name
<!DOCTYPE DOT_OFFICER_CITATION SYSTEM "http://host.domain/dtd/dot
^

It would be nice to use the xml type, but we always have DOCTYPE.
I understand that PostgreSQL won't validate against the specified
DOCTYPE, but it shouldn't error out on it, either.

-Kevin

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [pgus-board] bylaws: make email notification of meetings the default, members special request notification by mail

On Tue, Jun 3, 2008 at 9:47 AM, Joshua D. Drake <jd@commandprompt.com> wrote:
>
>
> On Fri, 2008-05-30 at 14:47 -0400, Michael Alan Brewer wrote:
>> No objections here. Joshua?
>>
>
> So I sent the reply from the Attorney. What are our thoughts?

I think we should make email notification of meetings the default :)

Risk seems minimal.

-selena

--
Selena Deckelmann
United States PostgreSQL Association - http://www.postgresql.us
PDXPUG - http://pugs.postgresql.org/pdx
Me - http://www.chesnok.com/daily

--
Sent via pgus-board mailing list (pgus-board@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgus-board

Re: [HACKERS] phrase search

> This is far more complicated than I thought.
>> Of course, phrase search should be able to use indexes.
> I can probably look into how to use index. Any pointers on this?

src/backend/utils/adt/tsginidx.c, if you invent operation # in tsquery then you
will have index support with minimal effort.
>
> Yes this is exactly how I am using in my application. Do you think this
> will solve a lot of common case or we should try to get phrase search

Yeah, it solves a lot of useful cases. For simple use we need to invent a
function similar to the existing plainto_tsquery, say phraseto_tsquery.
It should produce a correct tsquery with the operations described above.
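
To illustrate, a phrase query under that proposal might look like this
(purely hypothetical; '#' is the proposed adjacency operation and is not
implemented):

SELECT to_tsvector('the quick brown fox') @@ to_tsquery('quick # brown');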

--
Teodor Sigaev E-mail: teodor@sigaev.ru
WWW: http://www.sigaev.ru/

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [pgus-board] bylaws: make email notification of meetings the default, members special request notification by mail

It's kind of funny; my first thought (to the "have all board members
sign a consent form allowing for email notification" comment) was, "I
wonder if we'd accept the form via email..." ;)

While I'm okay with signing a form (and making the signing of said
form board policy, if not encapsulated in the bylaws), I wasn't clear
on the lawyer's response (wrt "risk"); what are the specific "risks"
involved with the aforementioned change?

---Michael Brewer
mbrewer@gmail.com

--
Sent via pgus-board mailing list (pgus-board@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgus-board

[GENERAL] bloom filter indexes?

I've been working on partitioning a rather large dataset into multiple
tables. One limitation I've run into is the lack of cross-partition-table
unique indexes. In my case I need to guarantee the uniqueness of a
two-column pair across all partitions -- and this value is not used to
partition the tables. The table is partitioned based on an insert date
timestamp.

To check the uniqueness of this value I've added an insert/update
trigger to search for matches in the other partitions. This trigger is
adding significant overhead to inserts and updates.
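
For concreteness, a minimal sketch of that kind of trigger (table and
column names here are hypothetical):

CREATE OR REPLACE FUNCTION check_pair_unique() RETURNS trigger AS $$
BEGIN
    -- 'events' is the partition parent, so this scans all partitions
    PERFORM 1 FROM events
     WHERE key_a = NEW.key_a AND key_b = NEW.key_b;
    IF FOUND THEN
        RAISE EXCEPTION 'pair (%, %) already exists', NEW.key_a, NEW.key_b;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER events_pair_unique
    BEFORE INSERT OR UPDATE ON events_2008_06
    FOR EACH ROW EXECUTE PROCEDURE check_pair_unique();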

This sort of 'membership test', where I only need to know if the key
exists in the table, is a perfect match for a Bloom filter (see:
http://en.wikipedia.org/wiki/Bloom_filter).

The Bloom filter can give false positives so using it alone won't
provide the uniqueness check I need, but it should greatly speed up
this process.

Searching around for "postgresql bloom filter" I found this message
from 2005 along the same lines:
http://archives.postgresql.org/pgsql-hackers/2005-05/msg01475.php

This thread indicates Bloom filters are used in the intarray contrib
module and in tsearch2 (and, I assume, the built-in 8.3 full-text
search features).

I also found this assignment for a CS course at the University of
Toronto, which entails using Bloom filters to speed up large joins:
http://queens.db.toronto.edu/~koudas/courses/cscd43/hw2.pdf

So, my question: are there any general-purpose Bloom filter
implementations for postgresql? I'm particularly interested in
implementations that would be useful for partitioned tables. Is anyone
working on something like this?

thanks,
- Mason

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [HACKERS] DISTINCT -> GROUP BY

2008/6/3 David Fetter <david@fetter.org>:
> On Tue, Jun 03, 2008 at 03:36:44PM +0200, Pavel Stehule wrote:
>> Hello David
>>
>> http://www.postgresql.org/docs/faqs.TODO.html
>>
>> Consider using hash buckets to do DISTINCT, rather than sorting. This
>> would be beneficial when there are few distinct values. This is
>> already used by GROUP BY.
>
> It's nice to see that this is kinda on the TODO, but it doesn't
> address the question I asked, which is, "how would I get the planner
> to rewrite DISTINCTs as the equivalent GROUP BYs?" :)

you can't do it :(

Pavel

>
> Any hints?
>
> Cheers,
> David.
> --
> David Fetter <david@fetter.org> http://fetter.org/
> Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter
> Skype: davidfetter XMPP: david.fetter@gmail.com
>
> Remember to vote!
> Consider donating to Postgres: http://www.postgresql.org/about/donate
>

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [SQL] cross-database references are not implemented

Hello

it works for me

postgres=# create schema export;
CREATE SCHEMA
Time: 45,918 ms
postgres=# create table public.a(a varchar);
CREATE TABLE
Time: 91,385 ms
postgres=# create table export.a(a varchar);
CREATE TABLE
Time: 9,462 ms
postgres=# create function ftrg() returns trigger as $$begin insert
into export.a values(new.*); return new; end$$ language plpgsql;
CREATE FUNCTION
Time: 486,395 ms
postgres=# \h CREATE trigger
Command: CREATE TRIGGER
Description: define a new trigger
Syntax:
CREATE TRIGGER name { BEFORE | AFTER } { event [ OR ... ] }
ON table [ FOR [ EACH ] { ROW | STATEMENT } ]
EXECUTE PROCEDURE funcname ( arguments )

postgres=# CREATE TRIGGER aaa after insert on public.a for each row
execute procedure ftrg();
CREATE TRIGGER
Time: 5,848 ms
postgres=# insert into public.a values('ahoj');
INSERT 0 1
Time: 5,179 ms
postgres=# SELECT * from export.a ;
a
------
ahoj
(1 row)

postgresql 8.3

Pavel

2008/6/3 Paul Dam <p.dam@amyyon.nl>:
> Hi,
>
> I have a database with 2 schemas:
> - public
> - export
>
> In the export schema I have tables that are filled during an export
> process. There is some data I want to have in a table in the public
> schema as well. I wrote a trigger function that, after an insert into
> the export table, inserts the data into the public table.
>
> If I do an insert I get the error message: "ERROR: cross-database
> references are not implemented".
>
> How can I solve this?
>
> Kind regards,
>
> Paul Dam

--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql

[GENERAL] Strange statistics

Hi list,

I have a table with a lot of file names in it (approx. 3 million) in an
8.3.1 db.

Running this simple query shows that the statistics are way off, and I
can't get them right even when I raise the statistics target to 1000.

db=# alter table tbl_file alter file_name set statistics 1000;
ALTER TABLE
db=# analyze tbl_file;
ANALYZE
db=# explain analyze select * from tbl_file where lower(file_name)
like lower('to%');
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------
Bitmap Heap Scan on tbl_file (cost=23.18..2325.13 rows=625
width=134) (actual time=7.938..82.386 rows=17553 loops=1)
Filter: (lower((file_name)::text) ~~ 'to%'::text)
-> Bitmap Index Scan on tbl_file_idx (cost=0.00..23.02 rows=625
width=0) (actual time=6.408..6.408 rows=17553 loops=1)
Index Cond: ((lower((file_name)::text) ~>=~ 'to'::text) AND
(lower((file_name)::text) ~<~ 'tp'::text))
Total runtime: 86.230 ms
(5 rows)


How can it be off by a factor of 28??

Cheers,
Henke

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

[pgsql-fr-generale] Problem with a SELECT following an UPDATE

Hello,

Some time ago I posted a mail about an UPDATE problem, but I then
realized there was an error in our tests. Now there is no error anymore,
but we still observe a curious - and reproducible - behavior from a
series of UPDATEs followed by SELECTs on a PG database. I would be glad
of an explanation if you have one ...

Here it is: this is a PG 8.3.1 database on a Linux RedHat 5.1 64-bit
server with 4 GB of RAM.

We work on a test table containing 5 million rows. We run an UPDATE
affecting about 70000 rows (using an index). When we then run a SELECT
on the updated rows (with the same condition, so no rows will be
returned), the execution time of this SELECT is very long, whereas we
would expect an immediate return.

Why ???

Below are the EXPLAIN ANALYZE results for the UPDATE, the SELECT, etc.:

difmet=> \d test_update
Table "public.test_update"
Column | Type | Modifiers
------------+-----------------------------+---------------
id | bigint | not null
state | integer | not null
state_date | timestamp without time zone | not null
priority | integer | not null
channel | character varying(30) | not null
clef | character varying(2048) | not null
data1 | character varying |
data2 | character varying |
data3 | character varying |
data4 | character varying |
Indexes:
"pk_test_update_id" PRIMARY KEY, btree (id)
"indx_test_update_channel" btree (state, channel
varchar_pattern_ops, clef varchar_pattern_ops, priority, state_date)
Check constraints:
"ck_test_update_id" CHECK (id > 0)
"ck_test_update_state" CHECK (state > 0)

difmet=>

difmet=> \timing
Timing is on.


difmet=> \!date
Wed May 21 10:58:20 GMT 2008


difmet=> explain analyze update test_update set state = 3001 where
state
= 2101 and channel like 'FTP' and clef like
'ddbddbddbddbddbddbddbddbddbddbddbddbddbddbddb' ;

QUERY PLAN
--------------------------------------------------------------------------------------------
Index Scan using indx_test_update_channel on test_update
(cost=0.00..129127.15 rows=69744 width=572) (actual
time=22.140..327988.689 rows=71000 loops=1)
Index Cond: ((state = 2101) AND ((channel)::text ~=~ 'FTP'::text)
AND ((clef)::text ~=~
'ddbddbddbddbddbddbddbddbddbddbddbddbddbddbddb'::text))
Filter: (((channel)::text ~~ 'FTP'::text) AND ((clef)::text ~~
'ddbddbddbddbddbddbddbddbddbddbddbddbddbddbddb'::text))
Total runtime: 697914.820 ms
(4 rows)

Time: 698885.960 ms


difmet=> \!date
Wed May 21 11:09:59 GMT 2008


difmet=> explain analyse select id,priority,state_date from test_update
where state = 2101 and channel like 'FTP' and clef like
'ddbddbddbddbddbddbddbddbddbddbddbddbddbddbddb' LIMIT 10;

QUERY PLAN
----------------------------------------------------------------------------------------
Limit (cost=0.00..18.52 rows=10 width=20) (actual
time=258036.859..258036.859 rows=0 loops=1)
-> Index Scan using indx_test_update_channel on test_update
(cost=0.00..130964.52 rows=70734 width=20) (actual
time=258036.854..258036.854 rows=0 loops=1)
Index Cond: ((state = 2101) AND ((channel)::text ~=~
'FTP'::text) AND ((clef)::text ~=~
'ddbddbddbddbddbddbddbddbddbddbddbddbddbddbddb'::text))
Filter: (((channel)::text ~~ 'FTP'::text) AND ((clef)::text
~~
'ddbddbddbddbddbddbddbddbddbddbddbddbddbddbddb'::text))
Total runtime: 258036.916 ms
(5 rows)

Time: 262332.196 ms


difmet=> \!date
Wed May 21 11:14:22 GMT 2008


We can clearly see that the SELECT following the UPDATE goes through the
index and that the number of rows concerned is 0, but why is the
execution time of this SELECT not nearly zero?

We also ran the same test on a PostgreSQL 8.2 version; same behavior.

Thanks for your help, Valérie.

--

********************************************************************
* The views expressed are strictly personal and do not *
* engage the responsibility of METEO-FRANCE. *
********************************************************************
* Valerie SCHNEIDER Tel : +33 (0)5 61 07 81 91 *
* METEO-FRANCE / DSI/DEV Fax : +33 (0)5 61 07 81 09 *
* 42, avenue G. Coriolis Email : Valerie.Schneider@meteo.fr *
* 31057 TOULOUSE Cedex 1 - FRANCE

http://www.meteo.fr

*
********************************************************************


--
Sent via pgsql-fr-generale mailing list (pgsql-fr-generale@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-fr-generale

Re: [HACKERS] DISTINCT -> GROUP BY

Hello David

http://www.postgresql.org/docs/faqs.TODO.html

Consider using hash buckets to do DISTINCT, rather than sorting.
This would be beneficial when there are few distinct values. This is
already used by GROUP BY.

Regards
Pavel Stehule

2008/6/3 David Fetter <david@fetter.org>:
> Folks,
>
> I've noticed that queries of the form
>
> SELECT DISTINCT foo, bar, baz
> FROM quux
> WHERE ...
>
> perform significantly worse than the equivalent using GROUP BY.
>
> SELECT foo, bar, baz
> FROM quux
> WHERE ...
> GROUP BY foo, bar, baz
>
> Where would I start looking in order to make them actually equivalent
> from the planner's point of view? Also, would back-patching this make
> sense? It doesn't change any APIs, but it does make some queries go
> faster.
>
> Cheers,
> David.
> --
> David Fetter <david@fetter.org> http://fetter.org/
> Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter
> Skype: davidfetter XMPP: david.fetter@gmail.com
>
> Remember to vote!
> Consider donating to Postgres: http://www.postgresql.org/about/donate
>
> --
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers
>

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [SQL] function returning result set of varying column

On Tue, 3 Jun 2008 09:01:02 -0400
"maria s" <psmg01@gmail.com> wrote:

> Hi Friends,
> Thanks to all of you for the replies.
>
> I tried the function and when I execute it using
> select * from myfunction()
> it says
> ERROR: a column definition list is required for functions
> returning "record"
>
> Could you please help me to fix this error?
>
> Thanks so much for your help.

You can specify the returned types in each statement that calls your
function, or you can specify the returned types in the function itself.

CREATE OR REPLACE FUNCTION myfunction(out col1 int, out col2
varchar(32), out ...)
RETURNS
SETOF
RECORD
AS
$body$
DECLARE
rec record;
BEGIN
FOR rec IN (
SELECT * FROM sometable)
LOOP
col1:=rec.col1;
col2:=rec.col2;
-- col3:=...;
RETURN NEXT;
END LOOP;
RETURN;
END;
$body$
LANGUAGE plpgsql;
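
With the OUT parameters declared (and the '...' placeholders filled in),
the call no longer needs a column definition list; assuming the sketch
above:

SELECT * FROM myfunction();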

> > CREATE OR REPLACE FUNCTION myfunction() RETURNS SETOF RECORD AS
> > $body$
> > DECLARE
> > rec record;
> > BEGIN
> > FOR rec IN (
> > SELECT * FROM sometable)
> > LOOP
> > RETURN NEXT rec;
> > END LOOP;
> > RETURN;
> > END;
> > $body$

--
Ivan Sergio Borgonovo
http://www.webthatworks.it


--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql

Re: [GENERAL] FW: make rows unique across db's without UUIP on windows?

In response to "Kimball Johnson" <kjohnson@voiceandnetworksystems.com>:
>
> What is the normal solution in pgsql-land for making a serious number
> of rows unique across multiple databases?
>
> I mean particularly databases of different types (every type) used at
> various places (everywhere) on all platforms (even MS[TM])? You know,
> a UNIVERSAL id?

Just give each separate system its own unique identifier and a sequence
to append to it.
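
For example (a sketch; the 'site42' prefix and the table are
hypothetical):

CREATE SEQUENCE widgets_id_seq;
CREATE TABLE widgets (
    -- per-system prefix plus a local sequence: unique across systems
    id      text PRIMARY KEY
            DEFAULT 'site42-' || nextval('widgets_id_seq')::text,
    payload text
);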

--
Bill Moran
Collaborative Fusion Inc.
http://people.collaborativefusion.com/~wmoran/

wmoran@collaborativefusion.com
Phone: 412-422-3463x4023

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

[COMMITTERS] pgloader - pgloader: 2.3.1 release

Log Message:
-----------
2.3.1 release

Modified Files:
--------------
pgloader:
TODO.txt (r1.5 -> r1.6)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgloader/pgloader/TODO.txt.diff?r1=1.5&r2=1.6)
pgloader/debian:
changelog (r1.25 -> r1.26)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgloader/pgloader/debian/changelog.diff?r1=1.25&r2=1.26)
pgloader/pgloader:
options.py (r1.22 -> r1.23)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgloader/pgloader/pgloader/options.py.diff?r1=1.22&r2=1.23)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

Re: [GENERAL] turning fsync off for WAL


Ahh. I think you can use this effectively, but not the way you're
describing.

Instead of writing the WAL directly to persistentFS, I think you're
better off treating persistentFS as your backup storage. Use "Archiving"
as described here to archive the WAL files to persistentFS:

http://postgresql.com.cn/docs/8.3/static/runtime-config-wal.html#GUC-ARCHIVE-MODE
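
A minimal sketch of that setup in postgresql.conf (8.3 syntax; the
persistentFS mount point is hypothetical):

archive_mode = on
archive_command = 'cp %p /mnt/persistentfs/wal/%f'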

Looks like this is the best solution.

Thanks,

Ram

Re: [NOVICE] EXPLAIN output explanation requested

On Tue, 03.06.2008 at 12:35:34 +0200, A. Kretschmer wrote the following:
> I guess, you have much insert/delete or update - operations on this
> table and no recent vacuum.
>
> Try to run a 'vacuum full;' and re-run your query. And, run a 'explain
> analyse <your query>' to see the estimated costs and the real costs.

Btw, run select relpages, reltuples from pg_class where relname='phone';
before and after the 'vacuum full' and show us the result.


Andreas
--
Andreas Kretschmer
Kontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)
GnuPG-ID: 0x3FFF606C, privat 0x7F4584DA

http://wwwkeys.de.pgp.net

--
Sent via pgsql-novice mailing list (pgsql-novice@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-novice

[pgadmin-hackers] SVN Commit by dpage: r7349 - tags/REL-1_8_4-EDB/pgadmin3

Author: dpage

Date: 2008-06-03 11:43:44 +0100 (Tue, 03 Jun 2008)

New Revision: 7349

Revision summary: http://svn.pgadmin.org/cgi-bin/viewcvs.cgi/?rev=7349&view=rev

Log:
Ensure we honour $DESTDIR, per Devrim

Modified:
tags/REL-1_8_4-EDB/pgadmin3/Makefile.am

--
Sent via pgadmin-hackers mailing list (pgadmin-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgadmin-hackers

[pgsql-es-ayuda] What is the QUERY BUFFER?

Best regards, colleagues.

I have not found in the documentation what the QUERY BUFFER refers to; I
view it with the psql command "\p" and reset it with "\r". But it is not
clear to me whether this buffer stores the original query (the
statement), its parse tree, or the result of the query (the data), nor
when this buffer is used. Also, "\p" only shows me the last query ...
could it hold more? How do I configure how many?

Sincerely,

RAUL DUQUE
Bogotá, Colombia

[NOVICE] EXPLAIN output explanation requested

Hello all,

I have an EXPLAIN statement that gives me output I understand,
but on the other hand I don't...

tium=# explain select codec1, phonetype from phone;
QUERY PLAN
------------------------------------------------------------
Seq Scan on phone (cost=0.00..85882.58 rows=658 width=11)
(1 row)


This is a table with 658 rows. Queries are indeed very
slow. How is the query plan computed? What does the 85882 value
mean?

Thanks,
Ron


--
NeoNova BV, The Netherlands
Professional internet and VoIP solutions

http://www.neonova.nl

Kruislaan 419 1098 VA Amsterdam
info: 020-5628292 servicedesk: 020-5628292 fax: 020-5628291
KvK Amsterdam 34151241

The following disclaimer applies to this email:
http://www.neonova.nl/maildisclaimer

--
Sent via pgsql-novice mailing list (pgsql-novice@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-novice

Re: [HACKERS] Case-Insensitve Text Comparison

Martijn van Oosterhout wrote:
> On Mon, Jun 02, 2008 at 11:08:55AM -0700, Jeff Davis wrote:
>> http://wiki.postgresql.org/wiki/Todo:Collate
>>
>> The last reference I see on that page is from 2005. Is there any updated
>> information? Are there any major obstacles holding this up aside from
>> the platform issues mentioned on that page?
>
> Well, a review of the patch and a bit of work in the optimiser.
> However, I think the patch will have bitrotted beyond any use by now.
> It touched many of the areas the operator families stuff touched, for
> example.
>
> I believe it is being reimplemented as a GSoC project; that's probably
> a better approach. Should probably just delete the page from the wiki
> altogether.

The GSoC proposal is here:
http://archives.postgresql.org/pgsql-hackers/2008-05/msg00857.php

It should create a basic framework for full SQL COLLATION support. All
comments are welcome.

Zdenek

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [PERFORM] query performance question

Hi,

Hubert already answered your question - it's expected behavior, the
count(*) has to read all the tuples from the table (even dead ones!). So
if you have a really huge table, it will take a long time to read it.

There are several ways to speed it up - some of them are simple (but the
speedup is limited), some of them require changes to the application
logic and rewriting part of the application (using triggers to count the
rows, etc.):

1) If the transactions have sequential ID without gaps, you may easily
select MAX(id) and that'll give the count. This won't work if some of the
transactions were deleted or if you need to use other filtering criteria.
The needed changes in the application are quite small (basically just a
single SQL query).

2) Move the table to a separate tablespace (a separate disk if possible).
This will speed up the reads, as the table will be 'compact'. This is just
a db change, it does not require change in the application logic. This
will give you some speedup, but not as good as 1) or 3).

3) Build a table with totals or maybe subtotals, updated by triggers. This
requires serious changes in application as well as in database, but solves
issues of 1) and may give you even better results.
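
To make option 3 concrete, a minimal sketch (table and trigger names are
mine, not from the thread; a single-row counter table serializes writers,
so a production version usually batches the deltas):

CREATE TABLE transactions_count (n bigint NOT NULL);
INSERT INTO transactions_count SELECT count(*) FROM transactions;

CREATE OR REPLACE FUNCTION transactions_count_trig() RETURNS trigger AS $$
BEGIN
    -- keep the running total in step with every insert/delete
    IF TG_OP = 'INSERT' THEN
        UPDATE transactions_count SET n = n + 1;
        RETURN NEW;
    ELSE  -- DELETE
        UPDATE transactions_count SET n = n - 1;
        RETURN OLD;
    END IF;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER transactions_count_trig
    AFTER INSERT OR DELETE ON transactions
    FOR EACH ROW EXECUTE PROCEDURE transactions_count_trig();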

Tomas

> Hello,
>
> I have a table (transactions) containing 61 414 503 rows. The basic
> count query (select count(transid) from transactions) takes 138226
> milliseconds.
> This is the query analysis output:
>
> Aggregate (cost=2523970.79..2523970.80 rows=1 width=8) (actual
> time=268964.088..268964.090 rows=1 loops=1);
> -> Seq Scan on transactions (cost=0.00..2370433.43 rows=61414943
> width=8) (actual time=13.886..151776.860 rows=61414503 loops=1);
> Total runtime: 268973.248 ms;
>
> Query has several indexes defined, including one on transid column:
>
> non-unique;index-qualifier;index-name;type;ordinal-position;column-name;asc-or-desc;cardinality;pages;filter-condition
>
> f;<null>;transactions_id_key;3;1;transid;<null>;61414488;168877;<null>;
> t;<null>;trans_ip_address_index;3;1;ip_address;<null>;61414488;168598;<null>;
> t;<null>;trans_member_id_index;3;1;member_id;<null>;61414488;169058;<null>;
> t;<null>;trans_payment_id_index;3;1;payment_id;<null>;61414488;168998;<null>;
> t;<null>;trans_status_index;3;1;status;<null>;61414488;169005;<null>;
> t;<null>;transactions__time_idx;3;1;time;<null>;61414488;168877;<null>;
> t;<null>;transactions_offer_id_idx;3;1;offer_id;<null>;61414488;169017;<null>;
>
> I'm not a dba so I'm not sure if the time it takes to execute this query
> is OK or not, it just seems a bit long to me.
> I'd appreciate it if someone could share his/her thoughts on this. Is
> there a way to make this table/query perform better?
> Any query I'm running that joins with transactions table takes forever
> to complete, but maybe this is normal for a table this size.
> Regards,
>
> Marcin
>
>
> --
> Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-performance
>

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

Re: [PERFORM] query performance question


Hello Hubert,

Thank you for your reply. I don't really need to count rows in transactions table, I just thought this was a good example to show how slow the query was.
But based on what you wrote it looks like count(*) is slow in general, so this seems to be OK since the table is rather large.
I just ran other queries (joining transactions table) and they returned quickly, which leads me to believe that there could be a problem not with the database, but with the box
the db is running on. Sometimes those same queries take forever and now they complete in no time at all, so perhaps there is a process that is running periodically which is slowing the db down.
I'll need to take a look at this.
Thank you for your help!

Marcin


hubert depesz lubaczewski wrote:
> On Tue, Jun 03, 2008 at 09:57:15AM +0200, Marcin Citowicki wrote:
>> I'm not a dba so I'm not sure if the time it takes to execute this
>> query is OK or not, it just seems a bit long to me.
>
> This is perfectly OK. count(*) from table is generally slow. There are
> some ways to make it faster (depending if you need exact count, or
> some estimate).
>
>> I'd appreciate it if someone could share his/her thoughts on this. Is
>> there a way to make this table/query perform better?
>
> You can keep the count of elements in this table in separate table,
> and update it with triggers.
>
>> Any query I'm running that joins with transactions table takes
>> forever to complete, but maybe this is normal for a table this size.
>
> As for other queries - show them, and their explain analyze.
> Performance of count(*) is dependent basically only on size of table.
> In case of other queries - it might be simple to optimize them. Or
> impossible - without knowing the queries it's impossible to tell. Do
> you really care about count(*) from 60m+ record table? How often do
> you count the records?
>
> Best regards,
> depesz

Re: [HACKERS] Add dblink function to check if a named connection exists

Joe Conway wrote:
> Tom Lane wrote:
>> Tommy Gildseth <tommy.gildseth@usit.uio.no> writes:
>>> One obvious disadvantage of this approach, is that I need to connect
>>> and disconnect in every function. A possible solution to this, would
>>> be having a function f.ex dblink_exists('connection_name') that
>>> returns true/false depending on whether the connection already exists.
>>
>> Can't you do this already?
>>
>> SELECT 'myconn' = ANY (dblink_get_connections());
>>
>> A dedicated function might be a tad faster, but it probably isn't going
>> to matter compared to the overhead of sending a remote query.
>
> I agree. The above is about as simple as
> SELECT dblink_exists('dtest1');
> and probably not measurably slower. If you still think a dedicated
> function is needed, please send the output of some performance testing
> to justify it.
>
> If you really want the notational simplicity, you could use an SQL
> function to wrap it:
>
> CREATE OR REPLACE FUNCTION dblink_exists(text)
> RETURNS bool AS $$
> SELECT $1 = ANY (dblink_get_connections())
> $$ LANGUAGE sql;

dblink_get_connections() returns null if there are no connections
though, so the above will fail if you haven't already established a
connection, unless you also check for null, and not just false.

I guess you could rewrite the above function to something like:

CREATE OR REPLACE FUNCTION dblink_exists(text)
RETURNS bool AS $$
SELECT COALESCE($1 = ANY (dblink_get_connections()), false)
$$ LANGUAGE sql;
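
Usage is then simply (with a hypothetical connection name):

SELECT dblink_exists('myconn');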

--
Tommy Gildseth


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[pgadmin-hackers] SVN Commit by dpage: r7342 - tags

Author: dpage

Date: 2008-06-03 09:52:04 +0100 (Tue, 03 Jun 2008)

New Revision: 7342

Revision summary: http://svn.pgadmin.org/cgi-bin/viewcvs.cgi/?rev=7342&view=rev

Log:
Created folder remotely


Added:
tags/REL-1_8_3-EDB/

--
Sent via pgadmin-hackers mailing list (pgadmin-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgadmin-hackers

[pgadmin-hackers] SVN Commit by dpage: r7341 - branches/REL-1_8_0_PATCHES/pgadmin3/pgadmin/include

Author: dpage

Date: 2008-06-03 09:44:56 +0100 (Tue, 03 Jun 2008)

New Revision: 7341

Revision summary: http://svn.pgadmin.org/cgi-bin/viewcvs.cgi/?rev=7341&view=rev

Log:
Post-release version bump


Modified:
branches/REL-1_8_0_PATCHES/pgadmin3/pgadmin/include/version.h

--
Sent via pgadmin-hackers mailing list (pgadmin-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgadmin-hackers

[pgadmin-hackers] SVN Commit by dpage: r7339 - tags

Author: dpage

Date: 2008-06-03 09:40:20 +0100 (Tue, 03 Jun 2008)

New Revision: 7339

Revision summary: http://svn.pgadmin.org/cgi-bin/viewcvs.cgi/?rev=7339&view=rev

Log:
Created folder remotely


Added:
tags/REL-1_8_3/

--
Sent via pgadmin-hackers mailing list (pgadmin-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgadmin-hackers

[pgadmin-hackers] SVN Commit by dpage: r7336 - branches/REL-1_8_0_PATCHES/pgadmin3/pgadmin/include

Author: dpage

Date: 2008-06-03 08:56:01 +0100 (Tue, 03 Jun 2008)

New Revision: 7336

Revision summary: http://svn.pgadmin.org/cgi-bin/viewcvs.cgi/?rev=7336&view=rev

Log:
Bump version number for release

Modified:
branches/REL-1_8_0_PATCHES/pgadmin3/pgadmin/include/version.h

--
Sent via pgadmin-hackers mailing list (pgadmin-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgadmin-hackers

[ADMIN] How add db users from shell script with their passwords?

Dear community, help please.

I need to add some standard users to a database, together with their
standard passwords, from a shell script, so that the script does not ask
me to enter the password manually for each user. How can this be done?

As I understand it, the createuser command does not allow this?

Thanks in advance.

--

Yours faithfully, Ilya Skorik
the expert
Inprint - automation of your publishing house

e-mail: ilya.skorik@inprint.ru
web: http://www.inprint.ru/

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin

Re: [HACKERS] Core team statement on replication in PostgreSQL

On Mon, 2008-06-02 at 22:40 +0200, Andreas 'ads' Scherbaum wrote:
> On Mon, 02 Jun 2008 11:52:05 -0400 Chris Browne wrote:
>
> > adsmail@wars-nicht.de ("Andreas 'ads' Scherbaum") writes:
> > > On Thu, 29 May 2008 23:02:56 -0400 Andrew Dunstan wrote:
> > >
> > >> Well, yes, but you do know about archive_timeout, right? No need to wait
> > >> 2 hours.
> > >
> > > Then you ship 16 MB binary stuff every 30 second or every minute but
> > > you only have some kbyte real data in the logfile. This must be taken
> > > into account, especially if you ship the logfile over the internet
> > > (means: no high-speed connection, maybe even pay-per-traffic) to the
> > > slave.
> >
> > If you have that kind of scenario, then you have painted yourself into
> > a corner, and there isn't anything that can be done to extract you
> > from it.
>
> You are misunderstanding something. It's perfectly possible that you
> have a low-traffic database with changes every now and then. But you
> have to copy a full 16 MB logfile every 30 seconds or every minute just
> to have the slave up-to-date.

To repeat my other post in this thread:

Actually we can already do better than file-by-file by using
pg_xlogfile_name_offset(), which was added sometime in 2006. walmgr.py
from the SkyTools package, for example, does this to get no more than a
few seconds' failure window, and it copies just the changed part of the
WAL to the slave.

pg_xlogfile_name_offset() was added just for this purpose - to enable
WAL shipping scripts to query where, inside the logfile, the current
write pointer is.
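
For example (a sketch; both functions are in core as of 8.2):

SELECT * FROM pg_xlogfile_name_offset(pg_current_xlog_location());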

It is not synchronous, but it can be made very close, within subsecond
if you poll it frequently enough.

-------------------
Hannu

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers