Sunday, August 17, 2008

[ANNOUNCE] == PostgreSQL Weekly News - August 17 2008 ==

== PostgreSQL Weekly News - August 17 2008 ==

== PostgreSQL Jobs for August ==

http://archives.postgresql.org/pgsql-jobs/2008-08/threads.php

== PostgreSQL Local ==

The Prato Linux User Group will be having PostgreSQL talks in
September. The schedule in Italian is:
http://www.prato.linux.it/serate_a_tema_2008

PGCon Brazil 2008 will be on September 26-27 at Unicamp in Campinas.
http://pgcon.postgresql.org.br/index.en.html

PgDay.fr will be October 4 in Toulouse. The Call for Papers is open:
http://www.postgresqlfr.org/?q=node/1686
Registration:
http://www.pgday.fr/doku.php/inscription

PostgreSQL Conference West 2008 will be October 10-12 at Portland
State University in Portland, Oregon.
http://www.postgresqlconference.org/
Talk submission at:
http://www.postgresqlconference.org/west08/talk_submission/

Sponsor the European PGDay!
http://www.pgday.org/en/sponsors/campaign

The Call for Papers for European PGDay has begun.
http://www.pgday.org/en/call4papers

PGDay.(IT|EU) 2008 will be October 17 and 18 in Prato.
http://www.pgday.org/it/

== PostgreSQL in the News ==

Planet PostgreSQL: http://www.planetpostgresql.org/

PostgreSQL Weekly News is brought to you this week by David Fetter

Submit news and announcements by Sunday at 3:00pm Pacific time.
Please send English language ones to david@fetter.org, German language
to pwn@pgug.de, Italian language to pwn@itpug.org.

== Applied Patches ==

Heikki Linnakangas committed:

- Introduce the concept of relation forks. An smgr relation can now
consist of multiple forks, and each fork can be created and grown
separately. The bulk of this patch is about changing the smgr API
to include an extra ForkNumber argument in every smgr function.
Also, smgrscheduleunlink and smgrdounlink no longer implicitly call
smgrclose, because other forks might still exist after unlinking
one. The callers of those functions have been modified to call
smgrclose instead. This patch in itself doesn't have any
user-visible effect, but provides the infrastructure needed for
upcoming patches. The additional forks envisioned are a rewritten
FSM implementation that doesn't rely on a fixed-size shared memory
block, and a visibility map to allow skipping portions of a table in
VACUUM that have no dead tuples.

- Relation forks patch requires a catversion bump due to changes in
the format of some WAL records, and two-phase state files, which I
forgot.

- pg_buffercache needs to be taught about relation forks, as Greg
Stark pointed out.

- Fix pull_up_simple_union_all to copy all rtable entries from child
subquery to parent, not only those with RangeTblRefs. We need them
in ExecCheckRTPerms. Report by Brendan O'Shea. Back-patch to 8.2,
where pull_up_simple_union_all was introduced.

Bruce Momjian committed:

- Add to TODO: "Add 'hostgss' pg_hba.conf option to allow GSS
link-level encryption."

- Add URL for TODO: "Improve ability to modify views via ALTER TABLE."

- Fix version warning bug in recently applied adjustments to psql
startup. Gregory Stark

- Add new SQL training web site to FAQ:

- Update Russian FAQ. Viktor Vislobokov.

- In pgsql/src/interfaces/libpq/bcc32.mak, synchronize Borland libpq
makefile to match MSVC. Backpatch to 8.3.X.

- Add to TODO: "Prevent query cancel packets from being replayed by an
attacker, especially when using SSL."

- Add URL for TODO: "Improve ability to modify views via ALTER TABLE."

- Update instructions on generating TODO.html

Peter Eisentraut committed:

- Remove TODO item, "Allow XML to accept more liberal DOCTYPE
specifications." Everything works correctly, per today's email to
-general.

Alvaro Herrera committed:

- Have autovacuum consider processing TOAST tables separately from
their main tables. This requires vacuum() to accept processing a
toast table standalone, so there's a user-visible change in that
it's now possible (for a superuser) to execute "VACUUM
pg_toast.pg_toast_XXX".
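
  As a sketch of the new user-visible bit, assuming a hypothetical
  table named mytable:

  -- find the TOAST relation behind the (hypothetical) table "mytable"
  SELECT reltoastrelid::regclass
    FROM pg_class
   WHERE relname = 'mytable';

  -- as superuser, vacuum just that TOAST table
  VACUUM pg_toast.pg_toast_16384;  -- 16384 is a placeholder OID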

Tom Lane committed:

- Implement SEMI and ANTI joins in the planner and executor.
(Semijoins replace the old JOIN_IN code, but antijoins are new
functionality.) Teach the planner to convert appropriate EXISTS and
NOT EXISTS subqueries into semi and anti joins respectively. Also,
LEFT JOINs with suitable upper-level IS NULL filters are recognized
as being anti joins. Unify the InClauseInfo and OuterJoinInfo
infrastructure into "SpecialJoinInfo". With that change, it becomes
possible to associate a SpecialJoinInfo with every join attempt,
which permits some cleanup of join selectivity estimation. That
needs to be taken much further than this patch does, but the next
step is to change the API for oprjoin selectivity functions, which
seems like material for a separate patch. So for the moment the
output size estimates for semi and especially anti joins are quite
bogus.
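
  For illustration, here is a sketch of the query shapes this commit
  covers, using hypothetical tables:

  -- planned as a semi join:
  SELECT * FROM customers c
   WHERE EXISTS (SELECT 1 FROM orders o WHERE o.customer_id = c.id);

  -- planned as an anti join:
  SELECT * FROM customers c
   WHERE NOT EXISTS (SELECT 1 FROM orders o WHERE o.customer_id = c.id);

  -- a LEFT JOIN with an upper-level IS NULL filter, also recognized
  -- as an anti join:
  SELECT * FROM customers c
    LEFT JOIN orders o ON o.customer_id = c.id
   WHERE o.customer_id IS NULL;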

- Performance fix for new anti-join code in nodeMergejoin.c: after
finding a match in antijoin mode, we should advance to next outer
tuple not next inner. We know we don't want to return this outer
tuple, and there is no point in advancing over matching inner tuples
now, because we'd just have to do it again if the next outer tuple
has the same merge key. This makes a noticeable difference if there
are lots of duplicate keys in both inputs. Similarly, after finding
a match in semijoin mode, arrange to advance to the next outer tuple
after returning the current match; or immediately, if it fails the
extra quals. The rationale is the same. (This is a performance bug
in existing releases; perhaps worth back-patching? The planner
tries to avoid using mergejoin with lots of duplicates, so it may
not be a big issue in practice.) Nestloop and hash got this right to
start with, but I made some cosmetic adjustments there to make the
corresponding bits of logic look more similar.

- Clean up the loose ends in selectivity estimation left by my patch
for semi and anti joins. To do this, pass the SpecialJoinInfo
struct for the current join as an additional optional argument to
operator join selectivity estimation functions. This allows the
estimator to tell not only what kind of join is being formed, but
which variable is on which side of the join; a requirement long
recognized but not dealt with till now. This also leaves the door
open for future improvements in the estimators, such as accounting
for the null-insertion effects of lower outer joins. I didn't do
anything about that in the current patch but the information is in
principle deducible from what's passed. The patch also clarifies
the definition of join selectivity for semi/anti joins: it's the
fraction of the left input that has (at least one) match in the
right input. This allows getting rid of some very fuzzy thinking
that I had committed in the original 7.4-era IN-optimization patch.
There's probably room to estimate this better than the present patch
does, but at least we know what to estimate. Since I had to touch
CREATE OPERATOR anyway to allow a variant signature for join
estimator functions, I took the opportunity to add a couple of
additional checks that were missing, per my recent message to
-hackers:
* Check that estimator functions return float8;
* Require execute permission at the time of CREATE OPERATOR on the
operator's function as well as the estimator functions;
* Require ownership of any pre-existing operator that's modified by
the command. I also moved the lookup of the functions out of
OperatorCreate() and into operatorcmds.c, since that seemed more
consistent with most of the other catalog object creation processes,
eg CREATE TYPE.
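
  As a reminder of where the estimator functions hook in, a minimal
  CREATE OPERATOR sketch using the standard built-in estimators (the
  operator name === is arbitrary):

  CREATE OPERATOR === (
      LEFTARG   = integer,
      RIGHTARG  = integer,
      PROCEDURE = int4eq,
      RESTRICT  = eqsel,     -- restriction selectivity estimator
      JOIN      = eqjoinsel  -- join selectivity estimator, whose
                             -- signature this patch extends
  );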

- Fix a couple of places where psql might fail to report a suitable
error if PQexec returns NULL. These don't seem significant enough
to be worth back-patching, but they ought to get fixed ...

- In pgsql/src/bin/pg_dump/pg_backup_db.c, fix pg_dump/pg_restore's
ExecuteSqlCommand() to behave suitably if PQexec returns NULL
instead of a PGresult. The former coding would fail, which is OK,
but it neglected to give you the PQerrorMessage that might tell you
why. In the oldest branches, there was another problem: it'd
sometimes report PQerrorMessage from the wrong connection.

- Improve sublink pullup code to handle ANY/EXISTS sublinks that are
at top level of a JOIN/ON clause, not only at top level of WHERE.
(However, we can't do this in an outer join's ON clause, unless the
ANY/EXISTS refers only to the nullable side of the outer join, so
that it can effectively be pushed down into the nullable side.) Per
request from Kevin Grittner. In passing, fix a bug in the initial
implementation of EXISTS pullup: it would Assert if the EXIST's
WHERE clause used a join alias variable. Since we haven't yet
flattened join aliases when this transformation happens, it's
necessary to include join relids in the computed set of RHS relids.
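
  A sketch of the kind of query this now handles; the tables are
  hypothetical, and the EXISTS sits at the top level of an inner
  JOIN's ON clause:

  SELECT *
    FROM orders o
    JOIN customers c
      ON c.id = o.customer_id
     AND EXISTS (SELECT 1 FROM payments p WHERE p.order_id = o.id);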

- In pgsql/src/backend/optimizer/plan/subselect.c, remove prohibition
against SubLinks in the WHERE clause of an EXISTS subquery that
we're considering pulling up. I hadn't wanted to think through
whether that could work during the first pass at this stuff.
However, on closer inspection it seems to be safe enough.

- In pgsql/src/backend/optimizer/path/joinrels.c, add some defenses
against constant-FALSE outer join conditions. Since
eval_const_expressions will generally throw away anything that's
ANDed with constant FALSE, what we're left with given an example
like select * from tenk1 a where (unique1,0) in (select unique2,1
from tenk1 b); is a cartesian product computation, which is really
not acceptable. This is a regression in CVS HEAD compared to
previous releases, which were able to notice the impossible join
condition in this case --- though not in some related cases that are
also improved by this patch, such as select * from tenk1 a left join
tenk1 b on (a.unique1=b.unique2 and 0=1); Fix by skipping evaluation
of the appropriate side of the outer join in cases where it's
demonstrably unnecessary.

Magnus Hagander committed:

- Make the temporary directory for pgstat files configurable by the
GUC variable stats_temp_directory, instead of requiring the admin to
mount/symlink the pg_stat_tmp directory manually. For now the
config variable is PGC_POSTMASTER. There is room for further
improvement that would allow it to be changed on the fly. Original
idea by Euler Taveira de Oliveira.
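
  A minimal postgresql.conf sketch of the new setting (the path shown
  is hypothetical; since the variable is PGC_POSTMASTER for now, it
  takes effect only at server start):

  # in postgresql.conf
  stats_temp_directory = '/var/run/postgresql/pg_stat_tmp'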

- In pgsql/src/tools/msvc/Solution.pm, probes.h is generated from
probes.d, not pg_trace.d.

== Rejected Patches (for now) ==

No one was disappointed this week :-)

== Pending Patches ==

Zdenek Kotala sent in a patch to introduce a page layout footprint in
aid of his in-place upgrades work.

David Wheeler sent in another patch to clean up the citext contrib
module.

Zdenek Kotala sent in a WIP patch for his new page API in aid of his
in-place upgrades work.

ITAGAKI Takahiro sent in another revision of his "Copy storage
parameters" patch.

Gregory Stark sent in a patch to fix a bug in psql where old variables
were being used for the new database connection after \c.

ITAGAKI Takahiro sent in a patch to add duration option (-T) to
pgbench instead of number of transactions (-t). -t and -T are mutually
exclusive.

Jan Urbanski sent in two more revisions of his oprrest patch for text
search.

Dmitri Koterov sent in a patch to add three new functions to
contrib/intarray: int_array_append_aggregate(int[]), which quickly
merges arrays, _int_group_count_sort(int[], bool), a frequency-based
sort, and bidx(int[], int), a binary search in a sorted array.

Martin Pihlak sent in a patch that implements plan invalidation on
function DROP, REPLACE and ALTER.

Ryan Bradetich sent in a patch to implement unsigned integer types.

Magnus Hagander sent in a patch to do pg_hba.conf and postgresql.conf
parsing in the postmaster and matching in the backend. Per
discussion.


---------------------------(end of broadcast)---------------------------
-To unsubscribe from this list, send an email to:

pgsql-announce-unsubscribe@postgresql.org

Re: [ADMIN] DB Dump Size

I am also curious as to why an SQL dump from the production server would
come out to 2.8G but a dump of an exact replica on a test box would come
out to 3.0G. What determines the size and makeup of an SQL dump?

Cheers.
Steve.


On Fri, 15 Aug 2008 12:24:32 +0200, Tino Schwarze <postgresql@tisc.de>
wrote:
> On Fri, Aug 15, 2008 at 11:09:02AM +1000, steve@outtalimits.com.au wrote:
>> Nice, that has cleared it up.
>>
>> I am on 8.1 also.
>>
>> On my test box, a standard dump took 6m 26sec & a -Fc dump took 11min
> 2sec.
>> That's not a great difference, but the size difference is quite
> noticeable.
>
> You might use --compress=6 or even --compress=1 to lower the impact of
> compression and try again. On the other hand, 11 minutes is not a big
> deal for dumping a whole DB...
>
> Tino.
>
> --
> "What we nourish flourishes." - "Was wir nähren erblüht."
>
> www.craniosacralzentrum.de
> www.forteego.de
>
>

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin

Re: [HACKERS] proposal sql: labeled function params

> uups, completely forgot dual use of = for both assignment and
> comparison.
>
> Maybe we can do without any "keyword arguments" or "labeled function
> params" if we define a way to construct records in-place.

That sounds a lot cleaner to me.

> something like
> RECORD( 'Zdanek'::text AS name, 22::int AS age); -- like SELECT
> or
> RECORD( name 'Zdanek'::text, age 22::int); -- like CREATE TABLE/TYPE
> or
> RECORD(name, age) .... from sometable; -- get values & types from table

In most cases, you can just do this using SELECT without the need for
any special syntax. For example:

SELECT json(p) FROM person p;
SELECT json(p) FROM (SELECT first_name, last_name FROM person) p;

The only problem is that this doesn't work if you try to put the
select into the attribute list:

SELECT json(select first_name, last_name) FROM person p;
ERROR: syntax error at or near "select"
SELECT json((select first_name, last_name)) FROM person p;
ERROR: subquery must return only one column

Unfortunately this is a pretty common situation, because you might
easily want to do:

SELECT json((select first_name, last_name)), age FROM person p;

...and you are out of luck.

I'm not sure whether the ROW() syntax could possibly be extended to
address this problem. It doesn't seem to help in its present form.

> Then we could pass these records to any PL for processing with minimal
> confusion to programmer, and without introducing new concepts like
> "variadic argument position labels"

Amen.

...Robert

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [PERFORM] long transaction

On Wed, Aug 13, 2008 at 2:07 AM, Sabin Coanda
<sabin.coanda@deuromedia.ro> wrote:
>>
>> have you considered importing to a temporary 'holding' table with
>> copy, then doing 'big' sql statements on it to check constraints, etc?
>>
>
> Yes I considered it, but the problem is the data is very tightly related
> between different tables and it is important to keep the import order of each
> entity into the database. In other words, the entity import serialization
> is mandatory. In fact the import script doesn't contain just inserts but also
> deletes and updates for different entities. So copy is not enough. Also using
> 'big' sql statements cannot guarantee the import order.

More than likely, the way to solve your problem (outside of buying a
bigger box or hacking fsync) is to rethink your import along the lines
of what I'm suggesting. You're welcome to give more specific details of
what/how your imports are running, in order to get more specific
advice.
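
For what it's worth, a minimal sketch of the holding-table pattern
being suggested; the table, column, and file names are hypothetical:

CREATE TEMP TABLE import_holding (LIKE target_table INCLUDING DEFAULTS);

COPY import_holding FROM '/tmp/import.csv' WITH CSV;

-- validate / transform in bulk, then apply in one statement
INSERT INTO target_table
SELECT h.*
  FROM import_holding h
 WHERE NOT EXISTS (SELECT 1 FROM target_table t WHERE t.id = h.id);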

merlin

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

Re: [HACKERS] Patch: plan invalidation vs stored procedures

On Sat, Aug 16, 2008 at 09:40:19PM -0400, Tom Lane wrote:
> "Asko Oja" <ascoja@gmail.com> writes:
> > Is it possible to get it into some official 8.3.x release
>
> This is not the kind of patch we put into stable branches.

Does this really count as a user-visible change, except in the sense
that they won't see things erroring out? It doesn't add new syntax,
as far as I can tell.

Cheers,
David.
--
David Fetter <david@fetter.org> http://fetter.org/
Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter
Skype: davidfetter XMPP: david.fetter@gmail.com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] proposal sql: labeled function params

On Sun, 2008-08-17 at 18:24 -0400, Robert Haas wrote:
> > Actually the most "natural" syntax to me is just f(name=value) similar
> > to how UPDATE does it. It has the added benefit of _not_ forcing us to
> > make a operator reserved (AFAIK "=" can't be used to define new ops)
>
> The problem with this is that
>
> SELECT foo(a = b)
>
> ...is already valid syntax.

uups, completely forgot dual use of = for both assignment and
comparison.

> It means compare a with b and pass the
> resulting boolean to foo. I'm almost positive that changing this
> would break all kinds of existing code (and probably create a lot of
> grammar problems too). It's not an issue with SET because in that
> case the "name=" part of the syntax is required rather than optional.

Maybe we can do without any "keyword arguments" or "labeled function
params" if we define a way to construct records in-place.

something like

RECORD( 'Zdanek'::text AS name, 22::int AS age); -- like SELECT

or

RECORD( name 'Zdanek'::text, age 22::int); -- like CREATE TABLE/TYPE


or

RECORD(name, age) .... from sometable; -- get values & types from table

?

Then we could pass these records to any PL for processing with minimal
confusion to programmer, and without introducing new concepts like
"variadic argument position labels"

-------------
Hannu

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] Overhauling GUCS

On Aug 17, 2008, at 1:48 PM, Greg Smith wrote:

> On Wed, 13 Aug 2008, Michael Nacos wrote:
>
>> Hi there... Configuration autotuning is something I am really
>> interested in.
>> I have seen this page: http://wiki.postgresql.org/wiki/GUCS_Overhaul and
>> a couple of emails mentioning this, so I wanted to ask is someone already
>> on it? If yes, I'd like to contribute.
>
> Good time to give a status report on what's been going on with all
> this.
>
> With some help I just finished off an answer to problem #1 there
> recently, "Most people have no idea how to set these". There was
> some concern here that work was being done on config tools without a
> clear vision of what was going to be tuned. See http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server
> for an intro on how to set the 18 most important parameters (+7
> logging parameters) based on the best information I'm aware of.
>
> Circa June, Steve Atkins was looking into writing a C++/Qt GUI
> tuning interface application, with the idea that somebody else would
> figure out the actual smarts to the tuning effort. Don't know where
> that's at.

First pass is done. Needs a little cleanup before sharing. I spent a
fair while down OS-specific-hardware-queries rathole, but I'm better
now.

Cheers,
Steve


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] proposal sql: labeled function params

> Actually the most "natural" syntax to me is just f(name=value) similar
> to how UPDATE does it. It has the added benefit of _not_ forcing us to
> make a operator reserved (AFAIK "=" can't be used to define new ops)

The problem with this is that

SELECT foo(a = b)

...is already valid syntax. It means compare a with b and pass the
resulting boolean to foo. I'm almost positive that changing this
would break all kinds of existing code (and probably create a lot of
grammar problems too). It's not an issue with SET because in that
case the "name=" part of the syntax is required rather than optional.

Any other operator you pick is going to have this same problem unless
it's already impossible to use that operator as part of an expression.
For that reason, while I'm not convinced of the value of the feature,
if we're going to support it then ISTM that expr AS label is the way
to go. That also has the advantage of being consistent with the
syntax for table and column aliasing.
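
To make that concrete, the proposed spelling would look something like
this (a hypothetical sketch of proposed syntax, not accepted by any
release):

SELECT foo('Zdenek' AS name, 32 AS age);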

...Robert

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] Overhauling GUCS

On Sun, 17 Aug 2008, Tom Lane wrote:

> What we have now was named Grand Unified Configuration for a reason:
> it centralized the handling of what had been a mess of different things
> configured in different ways. I'm not eager to go backwards on that.

No need to change anything related to how the configuration is done.
There are really only two things wrong with what's there right now IMHO,
and they don't require any changes to the internals, just to what's shown:

1) The view should show both how the user defined the setting and how it's
represented internally. Basically something that looks like this:

select name,current_setting(name) as input_setting,setting from
pg_settings;

2) Expose the default value.

> I'm also interested to know exactly what such a table would provide
> that isn't already available in the form of the pg_settings view.

Links to the relevant documentation and a place to save both default and
user comments about the setting were two things being considered that
seemed a really bad fit to tack onto the GUC structure. There are some
others. The main point is that nobody wants to have to tinker with the
core GUC itself just to decorate it with more information; it is
complicated enough as it is.

One might make a case that the stuff the GUC must handle (settings, units,
type, defaults, etc.) could be usefully separated from all the more
documentation-oriented bits stored there right now (category,
descriptions), and that the existing documentation bits could move over to
the table along with the hyperlinks and such. Doing that adds another
place to have to edit, but I think there's an even exchange available
there because it enables easy auto-generation of the postgresql.conf file
at initdb time from that table + pg_settings.

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] proposal sql: labeled function params

On Sun, 2008-08-17 at 17:59 +0200, Pavel Stehule wrote:
> Hannu
>
> it's not possible in plpgsql, because we are not able iterate via record.

just add function for iterating over record :)

create or replace function json(r record)
returns varchar as $$
  select '[' || array_to_string(
           array(
             select (getfieldnames(r))[i] || ':' || getfieldvalue(r,i)
               from generate_subscripts(r,1) g(i)),
           ',') || ']'
$$ language sql immutable strict;

(this is a straight rewrite of your original sample, one can also do it
in a simpler way, with a function returning SETOF (name, value) pairs)

postgres=# select json(name='Zdenek',age=30);
json
----------------------
[name:Zdenek,age:30]
(1 row)

postgres=# select json(name, age) from person;
json
----------------------
[name:Zdenek,age:30]
(1 row)

BTW, json actually requires quoting names/labels, so the answer should
be

["name":"Zdenek","age":"30"]


>
> 2008/8/17 Hannu Krosing <hannu@2ndquadrant.com>:
> > On Sun, 2008-08-17 at 11:08 -0400, Tom Lane wrote:
> >> Hannu Krosing <hannu@2ndQuadrant.com> writes:
> >> > Actually the most "natural" syntax to me is just f(name=value) similar
> >> > to how UPDATE does it. It has the added benefit of _not_ forcing us to
> >> > make a operator reserved (AFAIK "=" can't be used to define new ops)
> >>
> >> *What* are you thinking?
> >
> > I think that we could achieve what Pavel was after by allowing one to
> > define something similar to keyword arguments in python.
> >
> > maybe allow input RECORD type, which is instantiated at call time by
> > giving extra arguments to function call:
> >
> > CREATE FUNCTION f_kw(r record) ....
> >
> > and then if you call it like this:
> >
> > SELECT ... f_kw(name='bob', age=7::int)
> >
> > then function gets as its input a record
> > which can be accessed in pl/pgsql like
> >
> > r.name r.age
> >
> > and if terseness is really appreciated then the it could also be called
> > like this
> >
> > SELECT ... f_kw(name, age) from people where name='bob';
> >
> > which is rewritten to
> >
> > SELECT ... f_kw(name=name, age=age) from people where name='bob';
> >
> >
> > not sure if we should allow defining SETOF RECORD and then enable
> > calling it with
> >
> > SELECT *
> > FROM f_kw(
> > VALUES(name='bob', age=7::int),
> > VALUES(name='bill', age=42::int
> > );
> >
> > or somesuch
> >
> > ------------------
> > Hannu
> >
> >
> >
>


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] IN vs EXISTS equivalence

If you're still interested in testing CVS HEAD's handling of EXISTS,
I've about finished what I wanted to do with it.

regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] Overhauling GUCS

Greg Smith <gsmith@gregsmith.com> writes:
> Josh Berkus and I have been exchanging some ideas for the GUC internals
> overhaul and had a quick discussion about that in person last month.
> We've been gravitating toward putting all the extra information we'd like
> to push into there in an extra catalog table (pg_settings_info or
> something). The stuff the server needs to start can stay right where it
> is right now, all the other decoration can move to the table.

Somehow, the attraction of that idea escapes me.

What we have now was named Grand Unified Configuration for a reason:
it centralized the handling of what had been a mess of different things
configured in different ways. I'm not eager to go backwards on that.

I'm also interested to know exactly what such a table would provide
that isn't already available in the form of the pg_settings view.

regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] API for Managing pg_hba and postgresql.conf

On Thu, 14 Aug 2008, Andrew Satori wrote:

> What I'm seeing is a default installation protects the Data directory
> properly, but in so doing means that altering the configuration files,
> pg_hba.conf and postgresql.conf require database administrators, who should
> not necessarily have a level of rights to become superuser at the file system
> level to alter the mentioned files.

This suggests you're doing something wrong, and it's no wonder you think
this is a serious management problem (I consider it a minor one).


In a typical installation, the data directory that contains the database
and configuration files will be owned by a database user, typically
"postgres". DBAs should be able to get to all of those by switching to
that user, without needing any true superuser rights. If that isn't the case
in your environment, I'm curious how you ended up there, and it's no
wonder that's a serious management problem for you. I consider this area
only a minor annoyance. Normally the only thing I see that you can't do
as the postgres user is directly execute the database start/stop scripts
that root runs ("service postgresql start" on RedHat for example), but
since all that can be done via pg_ctl instead this is an easily scriptable
issue to work around.

> A huge portion of the motivation here is to allow for easy to graphical
> administration interfaces, making the system more approachable, and to make
> remote administration of these files less cumbersome.

There's already a bunch of work in that area going on already. Check the
archives for the "GUCS Overhaul" and "Parsing of pg_hba.conf and
authentication inconsistencies" threads for two long-running discussions
of the fundamental server-side issues needed to pull that off. You
touched on the main one here:

> Since most changes would require a SIGHUP, should the server process
> itself be alter to allow for a dynamic restart from within the
> environment?

Many of the problems with what you're thinking of revolve around the
unfortunate answers that pop up when you ask "what if I try and put bad
information in postgresql.conf/pg_hba.conf?"; see those threads I
mentioned for some context on that.

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] Overhauling GUCS

On Wed, 13 Aug 2008, Michael Nacos wrote:

> Hi there... Configuration autotuning is something I am really interested in.
> I have seen this page: http://wiki.postgresql.org/wiki/GUCS_Overhaul and
> a couple of emails mentioning this, so I wanted to ask is someone already
> on it? If yes, I'd like to contribute.

Good time to give a status report on what's been going on with all this.

With some help I just finished off an answer to problem #1 there recently,
"Most people have no idea how to set these". There was some concern here
that work was being done on config tools without a clear vision of what
was going to be tuned. See
http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server for an intro
on how to set the 18 most important parameters (+7 logging parameters)
based on the best information I'm aware of.

Circa June, Steve Atkins was looking into writing a C++/Qt GUI tuning
interface application, with the idea that somebody else would figure out
the actual smarts to the tuning effort. Don't know where that's at.

Josh Berkus and I have been exchanging some ideas for the GUC internals
overhaul and had a quick discussion about that in person last month.
We've been gravitating toward putting all the extra information we'd like
to push into there in an extra catalog table (pg_settings_info or
something). The stuff the server needs to start can stay right where it
is right now, all the other decoration can move to the table.

> Ideally, an external little app should also provide recommendations based
> on current database usage statistics -- wouldn't this constitute something
> akin to application-specific advice?

Yes, there's a grand plan for a super-wizard that queries the database for
size, index, and statistics information to figure out what to do; I've
been beating that drum for a while now. Unfortunately, the actual
implementation is blocked behind the dreadfully boring work of sorting out
how to organize and manage the GUC information a bit better, and the
moderately boring work of building a UI for modifying things. If you were
hoping to work on the sexy autotuning parts without doing some of the
grunt work, let me know if you can figure out how so I can follow you.

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [GENERAL] Delete across 5 Joined tables

On 15/08/2008 11:08, Fuzzygoth wrote:

> but I need to be able to delete from up to 5 different tables, is
> there a quick and
> simple way of doing this? can someone post an example?

Just guessing without more detailed information, but are the tables
related one-to-many using foreign keys? If so, when defining the foreign
key you can set the delete to cascade whenever a "master" row is deleted:

alter table foo
add constraint foo_fk foreign key [...]
on delete cascade;

...or something like that.
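
A slightly fuller sketch of the same idea, with hypothetical table
and column names:

alter table order_items
  add constraint order_items_order_fk
  foreign key (order_id) references orders (id)
  on delete cascade;

-- now deleting a row from "orders" also removes its rows in "order_items"
delete from orders where id = 42;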

Ray.


------------------------------------------------------------------
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
rod@iol.ie
Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals
------------------------------------------------------------------

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [HACKERS] pgbench duration option

On Tue, 12 Aug 2008, Tom Lane wrote:

> This seems like a fairly bad idea, because it introduces a
> gettimeofday() call per transaction.

There's already lots of paths through pgbench that introduce gettimeofday
calls all over the place. I fail to see how this is any different.

> If this were worth doing (which IMHO it isn't)

I think that switching the recommended practice for running pgbench to
something time-based rather than transactions-based would increase the
average quality of results people got considerably. How many times do you
see people posting numbers that are worthless because the test ran for a
trivial amount of time? Seems like it happens a lot to me. This patch
was already on my todo list for 8.4 and I'm glad I don't have to write it
myself now.

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[COMMITTERS] pgsql: Add some defenses against constant-FALSE outer join conditions.

Log Message:
-----------
Add some defenses against constant-FALSE outer join conditions. Since
eval_const_expressions will generally throw away anything that's ANDed with
constant FALSE, what we're left with given an example like

select * from tenk1 a where (unique1,0) in (select unique2,1 from tenk1 b);

is a cartesian product computation, which is really not acceptable.
This is a regression in CVS HEAD compared to previous releases, which were
able to notice the impossible join condition in this case --- though not in
some related cases that are also improved by this patch, such as

select * from tenk1 a left join tenk1 b on (a.unique1=b.unique2 and 0=1);

Fix by skipping evaluation of the appropriate side of the outer join in
cases where it's demonstrably unnecessary.

Modified Files:
--------------
pgsql/src/backend/optimizer/path:
joinrels.c (r1.93 -> r1.94)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/backend/optimizer/path/joinrels.c?r1=1.93&r2=1.94)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

Re: [HACKERS] proposal sql: labeled function params

2008/8/17 Asko Oja <ascoja@gmail.com>:
> Not able to means not implementable or not implemented?

Almost not implementable - plpgsql is too static language.

>
> On Sun, Aug 17, 2008 at 6:59 PM, Pavel Stehule <pavel.stehule@gmail.com>
> wrote:
>>
>> Hannu
>>
>> it's not possible in plpgsql, because we are not able iterate
>> via record.
>>
>> Pavel
>>
>> 2008/8/17 Hannu Krosing <hannu@2ndquadrant.com>:
>> > On Sun, 2008-08-17 at 11:08 -0400, Tom Lane wrote:
>> >> Hannu Krosing <hannu@2ndQuadrant.com> writes:
>> >> > Actually the most "natural" syntax to me is just f(name=value)
>> >> > similar
>> >> > to how UPDATE does it. It has the added benefit of _not_ forcing us
>> >> > to
>> >> > make a operator reserved (AFAIK "=" can't be used to define new ops)
>> >>
>> >> *What* are you thinking?
>> >
>> > I think that we could achieve what Pavel was after by allowing one to
>> > define something similar to keyword arguments in python.
>> >
>> > maybe allow input RECORD type, which is instantiated at call time by
>> > giving extra arguments to function call:
>> >
>> > CREATE FUNCTION f_kw(r record) ....
>> >
>> > and then if you call it like this:
>> >
>> > SELECT ... f_kw(name='bob', age=7::int)
>> >
>> > then function gets as its input a record
>> > which can be accessed in pl/pgsql like
>> >
>> > r.name r.age
>> >
>> > and if terseness is really appreciated then the it could also be called
>> > like this
>> >
>> > SELECT ... f_kw(name, age) from people where name='bob';
>> >
>> > which is rewritten to
>> >
>> > SELECT ... f_kw(name=name, age=age) from people where name='bob';
>> >
>> >
>> > not sure if we should allow defining SETOF RECORD and then enable
>> > calling it with
>> >
>> > SELECT *
>> > FROM f_kw(
>> > VALUES(name='bob', age=7::int),
>> > VALUES(name='bill', age=42::int
>> > );
>> >
>> > or somesuch
>> >
>> > ------------------
>> > Hannu
>> >
>> >
>> >
>>
>> --
>> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
>> To make changes to your subscription:
>> http://www.postgresql.org/mailpref/pgsql-hackers
>
>

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[HACKERS] any psql static binary for iphone ?

Hi there,

is there a psql static binary which I can use on my iphone (version 1)?
I have no mac available, so I can't compile it myself. I heard about a
gui tool for iphone, but it's not free and is only available from the
apple store, to which I have no access.

Regards,
Oleg
_____________________________________________________________
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [COMMITTERS] pgsql: Make the pg_stat_activity view call a SRF

Jaime Casanova wrote:
> On Sat, Aug 16, 2008 at 12:03 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> regression=# select * from pg_show_all_settings();
>> ERROR: a column definition list is required for functions returning "record"
>>
>> There's no longer any very good reason for built-in SRFs to not define
>> their own output record type.
>>
>
> is there anyone doing this? if not i want to give it a try... seems
> easy enough, even for me :)

It's on my list, but I haven't actually started it yet. So - take it away!

//Magnus

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

Re: [HACKERS] [PgFoundry] Unsigned Data Types

I can say that we have had to use bigint instead several times because of the lack of a uint type in postgres.

On Sun, Aug 17, 2008 at 9:03 PM, Ryan Bradetich <rbradetich@gmail.com> wrote:
On Sat, Aug 16, 2008 at 10:53 AM, Decibel! <decibel@decibel.org> wrote:
On Aug 15, 2008, at 1:00 AM, Ryan Bradetich wrote:
Here is the first pass at the unsigned data type I have been working on.

I am planning on adding these to the September 2008 commitfest wiki page.
The unsigned data type is not targeted for core, but for the uint PgFoundry project.


Is the intention for the types to go into pg_catalog? It'd be nice if you could specify what schema they should be installed in. An uninstall would also be good.

The pg_catalog schema made sense to me at first (especially for my application), but on reflection I believe you are right. I will remove the references to the pg_catalog schema and allow the user to add the unsigned data type to any schema. Good catch on the uninstall script. I should have written this as well. I will post an update to the wiki later tonight.
 

Thanks for doing this, I've wished we had uint types in the past, and I'm sure I will again in the future!

I am glad it is useful.  I needed it for my current project, and I was hoping others could use it as well.

Thanks,

- Ryan



Re: [HACKERS] proposal sql: labeled function params

"Pavel Stehule" <pavel.stehule@gmail.com> writes:

> 2008/8/17 Hannu Krosing <hannu@2ndquadrant.com>:
>> On Sun, 2008-08-17 at 08:06 +0200, Pavel Stehule wrote:
>>> 2008/8/16 Decibel! <decibel@decibel.org>:
>>> > SQL-like would be value AS name, but I'm not a fan of putting the value
>>> > before the name. And I think value AS name will just lead to a ton of
>>> > confusion.
>>> >
>>> > Since I think it'd be very unusual to do a => (b => c), I'd vote that we
>>> > just go with =>. Anyone trying to do a => b => c should immediately question
>>> > if that would work.
>>>
>>> I'll look on this syntax - what is really means for implementation. I
>>> thing, mostly of us prefer this or similar syntax.
>>
>> Actually the most "natural" syntax to me is just f(name=value) similar
>> to how UPDATE does it. It has the added benefit of _not_ forcing us to
>> make a operator reserved (AFAIK "=" can't be used to define new ops)

This whole thing seems like a ridiculous idea. It's a fancy way of passing an
extra parameter to the function intended to be used for a particular "label"
purpose. Your xml function could just as easily take two arguments
f(name,value) instead of using a special spelling for ",".

That it is easily confused with named parameters means there are huge
downsides and no significant up-sides to having this trivial little bit of
syntactic sugar.

To say nothing that using "=>" or anything like that would be just completely
un-SQLish.

--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Get trained by Bruce Momjian - ask me about EnterpriseDB's PostgreSQL training!

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] proposal sql: labeled function params

Not able to means not implementable or not implemented?

On Sun, Aug 17, 2008 at 6:59 PM, Pavel Stehule <pavel.stehule@gmail.com> wrote:
Hannu

it's not possible in plpgsql, because we are not able iterate via record.

Pavel

2008/8/17 Hannu Krosing <hannu@2ndquadrant.com>:
> On Sun, 2008-08-17 at 11:08 -0400, Tom Lane wrote:
>> Hannu Krosing <hannu@2ndQuadrant.com> writes:
>> > Actually the most "natural" syntax to me is just f(name=value) similar
>> > to how UPDATE does it. It has the added benefit of _not_ forcing us to
>> > make a operator reserved (AFAIK "=" can't be used to define new ops)
>>
>> *What* are you thinking?
>
> I think that we could achieve what Pavel was after by allowing one to
> define something similar to keyword arguments in python.
>
> maybe allow input RECORD type, which is instantiated at call time by
> giving extra arguments to function call:
>
> CREATE FUNCTION f_kw(r record) ....
>
> and then if you call it like this:
>
> SELECT ... f_kw(name='bob', age=7::int)
>
> then function gets as its input a record
> which can be accessed in pl/pgsql like
>
> r.name r.age
>
> and if terseness is really appreciated then the it could also be called
> like this
>
> SELECT ... f_kw(name, age) from people where name='bob';
>
> which is rewritten to
>
> SELECT ... f_kw(name=name, age=age) from people where name='bob';
>
>
> not sure if we should allow defining SETOF RECORD and then enable
> calling it with
>
> SELECT *
>  FROM f_kw(
>    VALUES(name='bob', age=7::int),
>    VALUES(name='bill', age=42::int
>  );
>
> or somesuch
>
> ------------------
> Hannu
>
>
>

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[GENERAL] Open source PostgreSQL type parsers in PHP?

Hello.

Are there any open source and well-tested PHP libraries to parse and build the following PostgreSQL data types?

- arrays (including N-dimensional): {a,b,c}
- hstore: "a=>b, c=>d"
- ROW: ("a","b","(""c"",""d"")")

E.g. I have some PHP variable and want to build the corresponding PostgreSQL value.
Or, I have fetched a PostgreSQL value as a string and want to parse it into a PHP variable.
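
For reference, a quick SQL sketch of how those text representations
look when produced by the server (hstore requires contrib/hstore):

SELECT ARRAY['a','b','c'];            -- {a,b,c}
SELECT 'a=>b, c=>d'::hstore;          -- "a"=>"b", "c"=>"d"
SELECT ROW('a','b', ROW('c','d'));    -- (a,b,"(c,d)")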

Re: [HACKERS] [PgFoundry] Unsigned Data Types

On Sat, Aug 16, 2008 at 10:53 AM, Decibel! <decibel@decibel.org> wrote:
On Aug 15, 2008, at 1:00 AM, Ryan Bradetich wrote:
Here is the first pass at the unsigned data type I have been working on.

I am planning on adding these to the September 2008 commitfest wiki page.
The unsigned data type is not targeted for core, but for the uint PgFoundry project.


Is the intention for the types to go into pg_catalog? It'd be nice if you could specify what schema they should be installed in. An uninstall would also be good.

The pg_catalog schema made sense to me at first (especially for my application), but on reflection I believe you are right. I will remove the references to the pg_catalog schema and allow the user to add the unsigned data type to any schema. Good catch on the uninstall script. I should have written this as well. I will post an update to the wiki later tonight.
 

Thanks for doing this, I've wished we had uint types in the past, and I'm sure I will again in the future!

I am glad it is useful.  I needed it for my current project, and I was hoping others could use it as well.

Thanks,

- Ryan


[GENERAL] Re: How to execute 'set session role' from plpgsql function?

Thanks. Your suggestion helped me to find a solution!

DECLARE
...
query text;
BEGIN
....
query := 'SET SESSION ROLE ' || wishedrole;
EXECUTE query;
......


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [DOCS] TODO build rule

On Sat, Aug 16, 2008 at 03:39:59PM -0400, Bruce Momjian wrote:
> Peter Eisentraut wrote:
> > On Saturday 16 August 2008 03:22:09 Bruce Momjian wrote:
> > > Peter Eisentraut wrote:
> > > These two lines in pgsql/src/tools/TODO2html should be adding
> > > "*"s which is then interpreted as "<em>":
> >
> > So there are two inconsistent sets of instructions for creating
> > TODO.html, one in doc/src/FAQ/README and one in
> > src/tools/TODO2html. Which one is being used?
> >
> > I want to convert the real one to a makefile, btw.
>
> Oh, I forgot that README talked about making TODO.html. I have
> updated the README to suggest using pgsql/src/tools/TODO2html.

I thought this was going onto the wiki so Bruce doesn't have to be the
sole maintainer :)

Cheers,
David.
--
David Fetter <david@fetter.org> http://fetter.org/
Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter
Skype: davidfetter XMPP: david.fetter@gmail.com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate

--
Sent via pgsql-docs mailing list (pgsql-docs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-docs

Re: [GENERAL] [Q] DNS(bind) ER model

Mark wrote:
> On Thu, 2008-08-14 at 16:20 -0700, Roderick A. Anderson wrote:
>> Anyone aware of an ER model for holding name server records?
>>
>> Working on the zone file data and I am getting close but keep running
>> into the differences between MX records (with a priority) and the others
>> that can hold either a domain/sub-domain/host name or an IP address
>> depending on whether is an A, TXT, PTR, etc. or a CNAME.
>
> Don't add a column for the prio of the MX record. A lot of designs do
> this but IMHO it's crap. If you do it for MX records you also have to do
> it for SRV records and who knows what other (future) records.

I was working towards that direction. I really hated the idea of a
sparse table and even a sparse column.

>
> We (@work) use an in house designed database that use the best technique
> I've ever seen.
>
> What it does is use a table to list all the valid RR types(1) along with
> an ID and regular expressions that describe what the name and rdata
> should look like.

Interesting idea. A project I worked on awhile ago did something
similar. Actually stored some Perl code.

> In the table that holds the dns records the type of the record is
> foreign key referencing the type table. And insert/update triggers are
> used to check that records match the patterns in the types table.

Better and better.

> With this technique supporting a new record type is as easy as inserting
> a new row in the types table. And it also guarantees that all records in
> the database are at least syntactically correct. (Assuming your patterns
> are correct of course.)


Thanks,
Rod
--
>> Much of the database will be populated and changed automagically so the
>> controller for the application will do the right thing but humans will
>> get involved every so often. I hope I can get the database to make the
>> right thing easy and the wrong thing "impossible" for them.
>>
>> Any suggestions?
>
> HTH.
>
> Cheers,
> Mark.
>
>


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [GENERAL] How to execute 'set session role' from plpgsql function?

Just a suggestion:

EXECUTE 'SET SESSION ROLE wishedrole';

won't help?


2008/8/17 Oleg Vasylenko <pulp@mail.ru>
Hi,everybody!

I wish to have a function with the code below, but the compiler generates
a syntax error at the line "SET SESSION ROLE wishedrole;".

How to pass the wishedrole value to the structure?

CREATE OR REPLACE FUNCTION f_switch_role(INOUT text,INOUT boolean) AS
 $BODY$
 DECLARE
  wishedrole ALIAS FOR $1;
  resetrole ALIAS FOR $2;
 BEGIN
  if resetrole=true then
           RESET ROLE;
           RETURN;
  end if;

>>>>ERROR OCCURS AT THE NEXT LINE <<<<<<
  SET SESSION ROLE wishedrole;
  RETURN;

 END;
 $BODY$
 LANGUAGE 'plpgsql' VOLATILE

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [HACKERS] proposal sql: labeled function params

Hannu

it's not possible in plpgsql, because we are not able iterate via record.

Pavel

2008/8/17 Hannu Krosing <hannu@2ndquadrant.com>:
> On Sun, 2008-08-17 at 11:08 -0400, Tom Lane wrote:
>> Hannu Krosing <hannu@2ndQuadrant.com> writes:
>> > Actually the most "natural" syntax to me is just f(name=value) similar
>> > to how UPDATE does it. It has the added benefit of _not_ forcing us to
>> > make a operator reserved (AFAIK "=" can't be used to define new ops)
>>
>> *What* are you thinking?
>
> I think that we could achieve what Pavel was after by allowing one to
> define something similar to keyword arguments in python.
>
> maybe allow input RECORD type, which is instantiated at call time by
> giving extra arguments to function call:
>
> CREATE FUNCTION f_kw(r record) ....
>
> and then if you call it like this:
>
> SELECT ... f_kw(name='bob', age=7::int)
>
> then function gets as its input a record
> which can be accessed in pl/pgsql like
>
> r.name r.age
>
> and if terseness is really appreciated then the it could also be called
> like this
>
> SELECT ... f_kw(name, age) from people where name='bob';
>
> which is rewritten to
>
> SELECT ... f_kw(name=name, age=age) from people where name='bob';
>
>
> not sure if we should allow defining SETOF RECORD and then enable
> calling it with
>
> SELECT *
> FROM f_kw(
> VALUES(name='bob', age=7::int),
> VALUES(name='bill', age=42::int
> );
>
> or somesuch
>
> ------------------
> Hannu
>
>
>

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] proposal sql: labeled function params

On Sun, 2008-08-17 at 11:08 -0400, Tom Lane wrote:
> Hannu Krosing <hannu@2ndQuadrant.com> writes:
> > Actually the most "natural" syntax to me is just f(name=value) similar
> > to how UPDATE does it. It has the added benefit of _not_ forcing us to
> > make a operator reserved (AFAIK "=" can't be used to define new ops)
>
> *What* are you thinking?

I think that we could achieve what Pavel was after by allowing one to
define something similar to keyword arguments in python.

maybe allow input RECORD type, which is instantiated at call time by
giving extra arguments to function call:

CREATE FUNCTION f_kw(r record) ....

and then if you call it like this:

SELECT ... f_kw(name='bob', age=7::int)

then function gets as its input a record
which can be accessed in pl/pgsql like

r.name r.age

and if terseness is really appreciated then the it could also be called
like this

SELECT ... f_kw(name, age) from people where name='bob';

which is rewritten to

SELECT ... f_kw(name=name, age=age) from people where name='bob';


not sure if we should allow defining SETOF RECORD and then enable
calling it with

SELECT *
FROM f_kw(
VALUES(name='bob', age=7::int),
VALUES(name='bill', age=42::int
);

or somesuch

------------------
Hannu

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] proposal sql: labeled function params

2008/8/17 Hannu Krosing <hannu@2ndquadrant.com>:
> On Sun, 2008-08-17 at 08:06 +0200, Pavel Stehule wrote:
>> 2008/8/16 Decibel! <decibel@decibel.org>:
>> > On Aug 15, 2008, at 1:20 PM, Hannu Krosing wrote:
>> >>>
>> >>> "value AS name", on the other hand, accomplishes the same in a more
>> >>> SQL-looking fashion with no new reserved word (since AS is already
>> >>> fully reserved).
>> >>
>> >> would it be more natural / SQL-like to use "value AS name" or "name AS
>> >> value" ?
>> >
>> >
>> > IMHO, *natural* would be name *something* value, because that's how every
>> > other language I've seen does it.
>> >
>> > SQL-like would be value AS name, but I'm not a fan of putting the value
>> > before the name. And I think value AS name will just lead to a ton of
>> > confusion.
>> >
>> > Since I think it'd be very unusual to do a => (b => c), I'd vote that we
>> > just go with =>. Anyone trying to do a => b => c should immediately question
>> > if that would work.
>>
>> I'll look on this syntax - what is really means for implementation. I
>> thing, mostly of us prefer this or similar syntax.
>
> Actually the most "natural" syntax to me is just f(name=value) similar
> to how UPDATE does it. It has the added benefit of _not_ forcing us to
> make a operator reserved (AFAIK "=" can't be used to define new ops)
>
> And I still don't think we need two kinds of names ("argument name" and
> "label"). I'd rather see us have the syntax for this be similar to
> python's keyword arguments, even though I'm not entirely opposed to
> automatically generating the name= part if it comes from existing name
> (variable, function argument or column name).
>

I wouldn't mix together two features - argument name (keyword
argument) and labels. Its two different features.

Regards
Pavel Stehule

> ---------------
> Hannu
>
>
>

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

RE: [pgsql-es-ayuda] Postgres Vs. MySQL



Perhaps this is your problem:

When you uninstalled PostgreSQL, the limited user "postgres" was left behind. What you have to do is the following:

1. Right-click on My Computer.
2. Click Manage.
3. Click the Users section.
4. On the right-hand side you can see the users that exist on your OS, and among them the postgres user.
5. Select the postgres user and delete it.
6. Try to install PostgreSQL again.

Let us know on the list how it went.

I hope this helps!

ing. José Fermín Francisco Ferreras
San Francisco de Macorís, Rep. Dom.


Date: Sat, 16 Aug 2008 17:23:15 -0400
From: kerljose@gmail.com
To: l-unexlug@unplug.org.ve; pgsql-es-ayuda@postgresql.org
Subject: Re: [pgsql-es-ayuda] Postgres Vs. MySQL
CC: rfs1986@gmail.com

Good afternoon, group. I had to uninstall the PostgreSQL I had installed on Windows in order to help a friend, and when I went to reinstall it I ran into some problems, like this one: "When trying to reinstall PostgreSQL, a window appears indicating the following:
Internal account lookup failure: No mapping has been done between account names and security identifiers
".

I have searched forums and the Internet many times and found nothing. I hope you can help me; otherwise I will have to reformat the machine in order to reinstall.



Re: [PERFORM] Optimizing a VIEW

Decibel! wrote:
> On Aug 15, 2008, at 1:36 PM, Madison Kelly wrote:
>> The 'cust_id' references the customer that the given data belongs to.
>> The reason for this "data bucket" (does this structure have a proper
>> name?) is that the data I need to store on a given customer is quite
>> variable and outside of my control. As it is, there are about 400
>> different variable/value pairs I need to store per customer.
>
>
> It's called Entity-Attribute-Value, and its performance is pretty much
> guaranteed to suck for any kind of a large dataset. The problem is that
> you're storing a MASSIVE amount of extra information for every single
> value. Consider:
>
> If each data point was just a field in a table, then even if we left
> cd_value as text, each data point would consume 4 bytes* + 1 byte per
> character (I'm assuming you don't need extra UTF8 chars or anything). Of
> course if you know you're only storing numbers or the like then you can
> make that even more efficient.
>
> * In 8.3, the text field overhead could be as low as 1 byte if the field
> is small enough.
>
> OTOH, your table is going to use 32+24 bytes per row just for the per-row
> overhead, ints and timestamps. Each text field will have 1 or 4 bytes in
> overhead, then you have to store the actual data. Realistically, you're
> looking at 60+ bytes per data point, as opposed to maybe 15, or even
> down to 4 if you know you're storing an int.
>
> Now figure out what that turns into if you have 100 data points per
> minute. It doesn't take very long until you have a huge pile of data
> you're trying to deal with. (As an aside, I once consulted with a
> company that wanted to do this... they wanted to store about 400 data
> points from about 1000 devices on a 5 minute interval. That worked out
> to something like 5GB per day, just for the EAV table. Just wasn't going
> to scale...)
>
> So, back to your situation... there's several things you can do that
> will greatly improve things.
>
> Identify data points that are very common and don't use EAV to store
> them. Instead, store them as regular fields in a table (and don't use
> text if at all possible).
>
> You need to trim down your EAV table. Throw out the added/modified info;
> there's almost certainly no reason to store that *per data point*. Get
> rid of cd_id; there should be a natural PK you can use, and you
> certainly don't want anything else referring to this table (which is a
> big reason to use a surrogate key).
>
> cd_variable and cd_tag need to be ints that point at other tables. For
> that matter, do you really need to tag each *data point*? Probably not...
>
> Finally, if you have a defined set of points that you need to report on,
> create a materialized view that has that information.
>
> BTW, it would probably be better to store data either in the main table,
> or the history table, but not both places.

This is a very long and thoughtful reply, thank you very kindly.

Truth be told, I sort of expected this would be what I had to do. I
think I asked more in the hope that there might be some "magic" I
didn't know about, but I see now that's not the case. :)

As my data points grow to 500,000+, the time it took to return these
results grew to well over 10 minutes on a decent server and the DB size
was growing rapidly, as you spoke of.

So I did just as you suggested: I took the variable names I knew about
and created a table for them (these are the ones being updated most
often, hourly per customer), made each column an 'int' or 'real' where
possible, and ditched the tracking of the adding/modifying user and
time stamp. I had added those out of habit more than anything; this
data will always come from a system app anyway.

Given that my DB is in development and how very long and intensive it
would have been to pull out the existing data, I have started over and
am now gathering new data. In a week or so I should have the same amount
of data as I had before and I will be able to do a closer comparison test.

However, I already suspect the growth of the database will be
substantially slower and the queries will return substantially faster.

Thank you again!

Madi
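
As a rough illustration of the restructuring suggested above (all table
and column names here are invented for the example, not taken from the
actual schema), the difference is between a generic EAV bucket and a
typed table for the known, frequently updated data points:

-- Generic EAV "data bucket": one wide, text-typed row per data point.
CREATE TABLE customer_data (
    cust_id     int  NOT NULL,
    cd_variable text NOT NULL,  -- better as an int referencing a variables table
    cd_value    text NOT NULL
);

-- Typed table for the common data points: one row per sample, each
-- point a narrow int/real column instead of a text pair.
CREATE TABLE customer_metrics (
    cust_id   int       NOT NULL,
    sample_ts timestamp NOT NULL,
    cpu_load  real,
    disk_used int,
    mem_used  int,
    PRIMARY KEY (cust_id, sample_ts)
);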

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

Re: [HACKERS] Mini improvement: statement_cost_limit


This entire thing is not about cartesian products at all.
It is about kicking out "expensive" queries before they even start to
eat up tons of CPU.
Imagine a user asking to "give me all phone calls in the US within the
past 10 years": you could kill that query instantly because you know
it would take ages.
In addition, you know that in an OLTP context anything which is
expected to take longer than X cannot be useful anyway.
This has nothing to do with cartesian products or other bad things you
can do in SQL.
It is just a simple, heuristic check.

many thanks,

hans

> My point is that people should _know_ they are using a cartesian
> product, and a warning would do that for users who have no need for a
> cartesian product and want to be warned about a possible error.
>
> --
> Bruce Momjian <bruce@momjian.us> http://momjian.us
> EnterpriseDB http://enterprisedb.com
>
> + If your life is a hard drive, Christ can be your backup. +


--
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt
Web: www.postgresql-support.de
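
For context, a sketch of what is being proposed versus what already
exists (statement_cost_limit is only a proposed parameter in this
thread, not an existing GUC; statement_timeout does exist, but it only
cancels a query after it has already been running):

SET statement_timeout = '30s';       -- existing: abort after 30 seconds of runtime
SET statement_cost_limit = 1000000;  -- proposed (hypothetical): reject the statement
                                     -- up front if the planner's estimated cost
                                     -- exceeds the limit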


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] proposal sql: labeled function params

Hannu Krosing <hannu@2ndQuadrant.com> writes:
> Actually the most "natural" syntax to me is just f(name=value) similar
> to how UPDATE does it. It has the added benefit of _not_ forcing us to
> make an operator reserved (AFAIK "=" can't be used to define new ops)

*What* are you thinking?

regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [BUGS] BUG #4362: Casts from base types to composite types don't work.

"Andrej Podzimek" <andrej@podzimek.org> writes:
> select cast (text 'blabla' as pxgt_xid);
> ERROR: malformed record literal: "blabla"
> DETAIL: Missing left parenthesis.

Works for me, after fixing the obvious typo in your CREATE CAST command:

regression=# CREATE TYPE pxgt_xid AS (xid bigint, xname text);
CREATE TYPE
regression=# CREATE FUNCTION pxgt_xid(text) RETURNS pxgt_xid AS
$BODY$ select cast ( null as bigint ), $1; $BODY$
LANGUAGE 'sql' IMMUTABLE;
CREATE FUNCTION
regression=# CREATE CAST (text AS pxgt_xid) WITH FUNCTION pxgt_xid(text) AS IMPLICIT;
CREATE CAST
regression=# select cast (text 'blabla' as pxgt_xid);
pxgt_xid
-----------
(,blabla)
(1 row)


regards, tom lane

--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs

Re: [GENERAL] [Q] DNS(bind) ER model

On Thu, 2008-08-14 at 16:20 -0700, Roderick A. Anderson wrote:
> Anyone aware of an ER model for holding name server records?
>
> Working on the zone file data and I am getting close but keep running
> into the differences between MX records (with a priority) and the others
> that can hold either a domain/sub-domain/host name or an IP address
> depending on whether it is an A, TXT, PTR, etc. or a CNAME.

Don't add a column for the prio of the MX record. A lot of designs do
this but IMHO it's crap. If you do it for MX records you also have to do
it for SRV records and who knows what other (future) records.

We (at work) use an in-house designed database that uses the best
technique I've ever seen.

What it does is use a table to list all the valid RR types(1) along with
an ID and regular expressions that describe what the name and rdata
should look like.

In the table that holds the DNS records, the type of the record is a
foreign key referencing the type table, and insert/update triggers are
used to check that records match the patterns in the types table.

With this technique, supporting a new record type is as easy as
inserting a new row into the types table. It also guarantees that all
records in the database are at least syntactically correct (assuming
your patterns are correct, of course).

> Much of the database will be populated and changed automagically so the
> controller for the application will do the right thing but humans will
> get involved every so often. I hope I can get the database to make the
> right thing easy and the wrong thing "impossible" for them.
>
> Any suggestions?

HTH.

Cheers,
Mark.
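
A minimal sketch of the technique described above (all names, columns
and patterns are invented for illustration, not the actual in-house
schema):

-- One row per supported RR type, with regexes describing valid records.
CREATE TABLE rr_type (
    id            int  PRIMARY KEY,
    name          text NOT NULL UNIQUE,  -- 'A', 'MX', 'CNAME', ...
    name_pattern  text NOT NULL,         -- regex the record name must match
    rdata_pattern text NOT NULL          -- regex the rdata must match
);

CREATE TABLE dns_record (
    zone    text NOT NULL,
    name    text NOT NULL,
    type_id int  NOT NULL REFERENCES rr_type(id),
    rdata   text NOT NULL
);

-- Reject records whose name or rdata doesn't match the type's patterns.
CREATE OR REPLACE FUNCTION check_dns_record() RETURNS trigger AS $$
DECLARE
    t rr_type%ROWTYPE;
BEGIN
    SELECT * INTO t FROM rr_type WHERE id = NEW.type_id;
    IF NEW.name !~ t.name_pattern OR NEW.rdata !~ t.rdata_pattern THEN
        RAISE EXCEPTION 'record does not match the pattern for RR type %', t.name;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER dns_record_check
    BEFORE INSERT OR UPDATE ON dns_record
    FOR EACH ROW EXECUTE PROCEDURE check_dns_record();

Adding support for a new record type then only requires inserting a row
into rr_type.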

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

[GENERAL] How to execute 'set session role' from plpgsql function?

Hi, everybody!

I want to write a function with the code below, but the compiler
generates a syntax error at the line "SET SESSION ROLE wishedrole;".

How can I pass the wishedrole value into that statement?

CREATE OR REPLACE FUNCTION f_switch_role(INOUT text,INOUT boolean) AS
$BODY$
DECLARE
wishedrole ALIAS FOR $1;
resetrole ALIAS FOR $2;
BEGIN
if resetrole=true then
RESET ROLE;
RETURN;
end if;

>>>> ERROR OCCURS AT THE NEXT LINE <<<<
SET SESSION ROLE wishedrole;
RETURN;

END;
$BODY$
LANGUAGE 'plpgsql' VOLATILE
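
SET cannot take its value from a plpgsql variable, so the usual
workaround is to build the command as a string and run it with EXECUTE.
A minimal sketch (simplified here to plain IN parameters and a void
return; adapt the signature as needed):

CREATE OR REPLACE FUNCTION f_switch_role(wishedrole text, resetrole boolean)
RETURNS void AS
$BODY$
BEGIN
    IF resetrole THEN
        RESET ROLE;
        RETURN;
    END IF;
    -- SET does not accept a variable, so build the command dynamically;
    -- quote_ident() protects against unusual role names.
    EXECUTE 'SET SESSION ROLE ' || quote_ident(wishedrole);
END;
$BODY$
LANGUAGE plpgsql VOLATILE;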

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [GENERAL] selecting data from subquery in same order

You may use something like this in a stored function (idx() comes from
contrib/intarray, and the enclosing function must be declared RETURNS
SETOF users):

DECLARE
    a INTEGER[];
BEGIN
    a := '{2341548, 2325251, 2333130, 2015421, 2073536, 2252374, 2273219, 2350850, 2367318, 2032977, 2032849}';
    -- A bare SELECT is not allowed in plpgsql; hand the rows back instead.
    RETURN QUERY SELECT * FROM users WHERE id = ANY(a) ORDER BY idx(a, id);
END;

Or in plain SQL:

    SELECT * FROM users
    WHERE id = ANY('{2341548, 2325251, 2333130, 2015421, 2073536, 2252374, 2273219, 2350850, 2367318, 2032977, 2032849}'::int[])
    ORDER BY idx('{2341548, 2325251, 2333130, 2015421, 2073536, 2252374, 2273219, 2350850, 2367318, 2032977, 2032849}', id);

Note that this is fast only when the array holds few elements (around
20); do not use it for large arrays!
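
An alternative that avoids contrib/intarray is to join against a VALUES
list that carries an explicit position (a sketch; the id list is
shortened here for readability):

SELECT u.*
FROM users u
JOIN (VALUES (2341548, 1), (2325251, 2), (2333130, 3)) AS v(id, ord)
  ON u.id = v.id
ORDER BY v.ord;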



On Sun, Aug 17, 2008 at 4:11 AM, mark <markkicks@gmail.com> wrote:
Hi,
if I execute this statement:

select * from users where id in (2341548, 2325251, 2333130, 2015421,
2073536, 2252374, 2273219, 2350850, 2367318, 2032977, 2032849)

the order of the rows obtained is random.

Is there any way I can get the rows in the same order as the ids in the
list? Or is there a different statement I can use?
Thanks!

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [HACKERS] proposal sql: labeled function params

On Sun, 2008-08-17 at 08:06 +0200, Pavel Stehule wrote:
> 2008/8/16 Decibel! <decibel@decibel.org>:
> > On Aug 15, 2008, at 1:20 PM, Hannu Krosing wrote:
> >>>
> >>> "value AS name", on the other hand, accomplishes the same in a more
> >>> SQL-looking fashion with no new reserved word (since AS is already
> >>> fully reserved).
> >>
> >> would it be more natural / SQL-like to use "value AS name" or "name AS
> >> value" ?
> >
> >
> > IMHO, *natural* would be name *something* value, because that's how every
> > other language I've seen does it.
> >
> > SQL-like would be value AS name, but I'm not a fan of putting the value
> > before the name. And I think value AS name will just lead to a ton of
> > confusion.
> >
> > Since I think it'd be very unusual to do a => (b => c), I'd vote that we
> > just go with =>. Anyone trying to do a => b => c should immediately question
> > if that would work.
>
> I'll look at this syntax and at what it really means for the implementation. I
> think most of us prefer this or a similar syntax.

Actually the most "natural" syntax to me is just f(name=value) similar
to how UPDATE does it. It has the added benefit of _not_ forcing us to
make an operator reserved (AFAIK "=" can't be used to define new ops)

And I still don't think we need two kinds of names ("argument name" and
"label"). I'd rather see us have the syntax for this be similar to
python's keyword arguments, even though I'm not entirely opposed to
automatically generating the name= part if it comes from existing name
(variable, function argument or column name).

---------------
Hannu
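
To make the competing spellings concrete, here is how a call to a
hypothetical two-parameter function f(a int, b int) would look under
each proposal (none of these is implemented; they are just the syntaxes
under discussion):

SELECT f(1 AS a, 2 AS b);   -- "value AS name"
SELECT f(a => 1, b => 2);   -- "name => value"
SELECT f(a = 1, b = 2);     -- "name = value", as in UPDATE SET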

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [PERFORM] Filesystem benchmarking for pg 8.3.3 server

"Gregory Stark" <stark@enterprisedb.com> writes:

> <david@lang.hm> writes:
>
>>> If you are completely over-writing an entire stripe, there's no reason to
>>> read the existing data; you would just calculate the parity information from
>>> the new data. Any good controller should take that approach.
>>
>> in theory yes, in practice the OS writes usually aren't that large and aligned,
>> and as a result most raid controllers (and software) don't have the
>> special-case code to deal with it.
>
> I'm pretty sure all half-decent controllers and software do actually. This is
> one major reason that large (hopefully battery backed) caches help RAID-5
> disproportionately. The larger the cache the more likely it'll be able to wait
> until the entire RAID stripe is replaced, avoiding having to read in the
> old parity.

Or, now that I think about it, until it can replace two or more blocks
covered by the same set of parity bits; then it only has to recalculate
the parity bits once for all those blocks instead of for every single
block write.
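
For reference, the arithmetic behind this is plain XOR parity: for a
stripe with data blocks D1..Dn the parity block is

    P = D1 xor D2 xor ... xor Dn

so a full-stripe write can compute P from the new data alone, while
rewriting a single block Di requires reading the old Di and the old P:

    P_new = P_old xor D_old xor D_new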


--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's PostGIS support!

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

Re: [PERFORM] Filesystem benchmarking for pg 8.3.3 server

<david@lang.hm> writes:

>> If you are completely over-writing an entire stripe, there's no reason to
>> read the existing data; you would just calculate the parity information from
>> the new data. Any good controller should take that approach.
>
> in theory yes, in practice the OS writes usually aren't that large and aligned,
> and as a result most raid controllers (and software) don't have the
> special-case code to deal with it.

I'm pretty sure all half-decent controllers and software do actually. This is
one major reason that large (hopefully battery backed) caches help RAID-5
disproportionately. The larger the cache the more likely it'll be able to wait
until the entire RAID stripe is replaced, avoiding having to read in the
old parity.


--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's 24x7 Postgres support!

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

Re: [GENERAL] \copy, transactions and permissions

On Wed, 13 Aug 2008 16:32:18 -0500
ries van Twisk <pg@rvt.dds.nl> wrote:

> On Aug 13, 2008, at 4:25 PM, Ivan Sergio Borgonovo wrote:
> > I need to write an import function with enough isolation from the
> > apache daemon.
> > The code has no input other than CSV files and a signal about when to
> > start the import.
> > The SQL code that will be executed will be static.
> > I may end up writing a mini-daemon that just starts a SQL script,
> > or just polls from cron and feeds psql.
> >
> > If anyone has a better (lazier, cleaner) approach it will be very
> > welcome.
>
> Lazier would be using JasperETL or any other ETL tool

*I did a simple test and \copy works as expected inside
transactions.*

But I wasn't able to split a \copy command across several lines to
improve readability and add comments.

Such ETL tools didn't seem suited for my current needs.

I downloaded a 190MB app with quite a few dependencies, tried to run
it, and gave up the first time; unpacking took too long. I glanced at
the screenshots and got the impression that, at least for me, these
tools have a steeper learning curve than refreshing my awk knowledge.

Out of curiosity I googled for "etl postgresql" and found Kettle.
This one was a smaller download and I was able to run it immediately.
Still, it looked more like a tool suited to managers who have to
produce reports than to programmers.

They are surely more powerful than a 10 line awk script but I didn't
see a quick way to script them.

e.g. if I had to import 10 tables I'd have to repeat the same
click-drag-fill-connect sequence 10 times.

I've used MS DTS in SQL Server 2000 and found it has a gentler learning
curve, even though it doesn't seem as flexible as JasperETL or Kettle,
and I ran into the same problem of not being able to script it.

Kettle exports an XML file that could be taken as a template for some
scripting work, but I didn't figure out how to execute the resulting
plan outside the GUI.

I'm wondering whether I didn't give them enough time, or whether they
are simply aimed at use cases that rarely cross my way.

I haven't used dblink in PostgreSQL yet, but I did find those ETL
tools' ability to connect to different data sources (including Access
and Excel) interesting.
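
For reference, a minimal psql sketch of the pattern discussed above
(table and file names are invented; note that a \copy command has to
stay on a single line):

BEGIN;
\copy import_items FROM 'items.csv' WITH CSV
-- further \copy commands or ordinary SQL can follow in the same transaction
COMMIT;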


--
Ivan Sergio Borgonovo
http://www.webthatworks.it


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [GENERAL] Windows Installation Problem

Stefan Berglund wrote on 16.08.2008 08:30:
> [...]
>
> First I install 8.3.1 using this command line:
> [...]
> Next I do (after a reboot because the service name, service user, super
> user, and super password are not known until after the reboot) an
> install of version 8.3.3 using the following command line:

Maybe I'm missing something, but why don't you install 8.3.3 directly?

Regards
Thomas


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [HACKERS] Replay attack of query cancel

Tom Lane wrote:
> Alvaro Herrera <alvherre@commandprompt.com> writes:
>> Andrew Gierth wrote:
>>> 2. The server accepts either the old-style or the secure cancel
>>> request from the client, but doesn't allow old-style requests
>>> once a valid secure request has been seen.
>
>> Hmm, I think there should be a way to turn off acceptance of old-style
>> without necessarily requiring a new-style request. Otherwise, how are
>> you protected from DoS if you have never sent a cancel request at all?
>
> Assuming you were using SSL, it's hard to see how an attacker is going
> to get your cancel key without having seen a cancel request.

Not only that, but he'll have to see an *old-style* cancel request,
since the new style doesn't contain the key.

And if you're *not* using SSL, the attacker can just sniff the key off
the initial packet instead.


> However, I dislike Andrew's proposal above even without that issue,
> because it means *still more* changeable state that has to be magically
> shared between postmaster and backends. If we want to have a way for
> people to disable insecure cancels, we should just have a postmaster
> configuration parameter that does it.

Agreed. Also, your security policy should not depend on what your
client happens to do; it should be enforceable.


//Magnus


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers