Sunday, September 14, 2008

Re: [GENERAL] Statement level trigger clarification

Chris Velevitch wrote:
> I'm new to triggers and I'm having difficulty understanding how
> statement-level BEFORE UPDATE triggers work.
>
> I have a function that sets new.last_modified := current_timestamp;
>
> If I were to define a trigger as:-
>
> CREATE TRIGGER my_trigger
> BEFORE INSERT OR UPDATE
> ON my_table
> FOR EACH STATEMENT
> EXECUTE PROCEDURE my_function();
>
> and my update statement were to update more than one row, would I be
> correct in understanding that every row the update statement touches
> will have the exact same value for last_modified?

No, this is not going to work at all: NEW and OLD are not available in
statement-level triggers, so your function will fail.
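
For what the original poster seems to want, the usual approach is a
row-level trigger. A sketch (untested, reusing the names from the
question) might look like:

```sql
-- Row-level version: NEW is available here, and a BEFORE ... FOR EACH ROW
-- function must return the (possibly modified) row.
CREATE OR REPLACE FUNCTION my_function() RETURNS trigger AS $$
BEGIN
    NEW.last_modified := current_timestamp;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER my_trigger
BEFORE INSERT OR UPDATE ON my_table
FOR EACH ROW
EXECUTE PROCEDURE my_function();
```

Note that current_timestamp is fixed for the duration of the
transaction, so every row touched by a single UPDATE statement does end
up with the same last_modified value.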


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [GENERAL] How to check if an array is empty

2008/9/11 Tobias Anstett <tobias.anstett@iaas.uni-stuttgart.de>:
> Hi,
>
> in my special case I'd like to check if an XML[] is empty. Because postgres
> doesn't implement any equality operators for XML, this is not a 'simple' task.
>
> My current solution is:
>
> SELECT * FROM sometable WHERE XMLSERIALIZE(CONTENT ((XPATH('/a:bar',
> somexmlcolumn, ARRAY[ARRAY['a', 'http://foo']]))[1]) AS text) <> ''
>
> But there must be a better way - maybe I missed something important - I also
> tried queries like the following:
>
> SELECT * FROM sometable WHERE (XPATH('/a:bar', somexmlcolumn,
> ARRAY[ARRAY['a', 'http://foo']])) <> ('{}'::XML[])
>
> Any ideas and feedback are welcome.
>
> Cheers, Tobias

What about array_upper():

select array_upper('{}'::XML[], 1)

array_upper() returns NULL when a one-dimensional array is empty.
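
Applied to the original query, the check might look like this (a sketch,
untested; the table, column, and namespace names are taken from the
question):

```sql
-- Keep rows whose xpath() result is non-empty: array_upper(arr, 1) is
-- NULL for an empty array, so test IS NOT NULL.
SELECT *
FROM sometable
WHERE array_upper(
        xpath('/a:bar', somexmlcolumn, ARRAY[ARRAY['a', 'http://foo']]),
        1) IS NOT NULL;
```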

--
Hitoshi Harada


[GENERAL] Statement level trigger clarification

I'm new to triggers and I'm having difficulty understanding how
statement-level BEFORE UPDATE triggers work.

I have a function that sets new.last_modified := current_timestamp;

If I were to define a trigger as:-

CREATE TRIGGER my_trigger
BEFORE INSERT OR UPDATE
ON my_table
FOR EACH STATEMENT
EXECUTE PROCEDURE my_function();

and my update statement were to update more than one row, would I be
correct in understanding that every row the update statement touches
will have the exact same value for last_modified?


Chris
--
Chris Velevitch
Manager - Adobe Platform Users Group, Sydney
m: 0415 469 095
www.apugs.org.au

Adobe Platform Users Group, Sydney
September meeting: It's Going To Be Brilliant
Date: Mon 29th September 6pm for 6:30 start
Details and RSVP on http://apugs2008september.eventbrite.com


Re: [GENERAL] [pgadmin-support] PostgreSQL

Tino Wildenhain wrote:

>> [1] Will PostgreSQL perform better on Linux or Windows OS
>
> The performance is better on Linux, not least because PostgreSQL has
> been developed and optimized there for much longer than it has been
> natively available on Windows. System management is also much easier
> on Unix-like systems.

There are also a lot more PostgreSQL developers and users interested in
UNIX systems. While there appear to be lots of PostgreSQL users on
Windows now, they don't seem to be frequent contributors or particularly
active on the mailing lists.

That's going to reduce the rate of PostgreSQL's improvement on Windows,
especially given how generally unpleasant the platform is to develop on*.

>> [2] What is the best RAID configuration/number of drives you recommend
>
> You should ask the vendor of your web booking engine; it depends
> greatly on the nature of the transactions. Also have a look at the
> archives of the pgsql-general list, which is also much more appropriate
> than this list.

The appropriate setup also depends on your controller's performance
characteristics, your particular workload, etc. If things are busy
enough you might even want to look at using tablespaces to split busy
tables onto separate volumes.
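
As a sketch of the tablespace idea (all paths and object names here are
hypothetical):

```sql
-- Hypothetical path; the directory must already exist and be owned by
-- the postgres OS user before CREATE TABLESPACE is run.
CREATE TABLESPACE fastdisk LOCATION '/mnt/fastarray/pgdata';

-- Move a busy table and its index onto the separate volume.
ALTER TABLE busy_table SET TABLESPACE fastdisk;
ALTER INDEX busy_table_pkey SET TABLESPACE fastdisk;
```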

* Except Visual Studio's debugger, which I use a lot when I encounter
wacky problems in C++ code. gdb is OK for C, but awful for C++. It often
seems to be necessary to have two systems - a Linux one for valgrind and
everything else, plus a Windows one for VC++'s debugger.

--
Craig Ringer


Re: [GENERAL] plpgsql return select from multiple tables

Artis Caune wrote:

> 'IF FOUND THEN' or 'IF variable = X THEN' features are only in plpgsql, I think.

You can frequently achieve similar effects with COALESCE and/or CASE
expressions in normal SQL. This can be a LOT faster.
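
For example (a hypothetical sketch; the table and column names are made
up), a plpgsql IF NOT FOUND fallback can often collapse into a single
expression:

```sql
-- Instead of plpgsql along the lines of:
--   SELECT val INTO v FROM t WHERE id = 1;
--   IF NOT FOUND THEN v := 0; END IF;
-- COALESCE supplies the default when no row matches:
SELECT COALESCE((SELECT val FROM t WHERE id = 1), 0) AS val;

-- Likewise, CASE handles 'IF variable = X THEN' branching inline:
SELECT CASE WHEN status = 'open' THEN 'active' ELSE 'closed' END AS state
FROM tickets;
```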

--
Craig Ringer


Re: [GENERAL] Out of memory on SELECT (from sort?) in 8.3

"Matt Magoffin" <postgresql.org@msqr.us> writes:
> I have a SELECT query that causes an out-of-memory error on my production
> Postgres 8.3 server.

8.3.which, and what exactly is the query and its EXPLAIN plan?

> I believe the culprit is from the sort in the query,

No, the problem seems to be here

> ExecutorState: 841031232 total in 51159 blocks; 1712232 free (56
> chunks); 839319000 used

and it's impossible to guess what that's about without a lot more
details than you provided. The sort's only eating 28MB:

> TupleSort: 28303408 total in 13 blocks; 5346944 free (14 chunks);
> 22956464 used

regards, tom lane


[GENERAL] Out of memory on SELECT (from sort?) in 8.3

I have a SELECT query that causes an out-of-memory error on my production
Postgres 8.3 server. I believe the culprit is the sort in the query, but
it's running out of memory on queries that return only about 300,000
rows, which I didn't think was very large.

work_mem is set to 128MB, but I thought the server would switch to an
on-disk sort method if it couldn't perform the sort in memory. Is this
not the case?

In the log file, I see the following information:

2008-08-16 02:02:43 CDT lms_nna ERROR: out of memory
2008-08-16 02:02:43 CDT lms_nna DETAIL: Failed on request of size 16384.
TopMemoryContext: 105376 total in 12 blocks; 9288 free (10 chunks); 96088
used
LibxmlContext: 8380416 total in 10 blocks; 3828016 free (0 chunks);
4552400 used
TopTransactionContext: 8192 total in 1 blocks; 7648 free (0 chunks); 544
used
PL/PgSQL function context: 8192 total in 1 blocks; 5512 free (4 chunks);
2680 used
Type information cache: 24576 total in 2 blocks; 11888 free (5 chunks);
12688 used
PL/PgSQL function context: 8192 total in 1 blocks; 4648 free (7 chunks);
3544 used
CFuncHash: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 used
Rendezvous variable hash: 8192 total in 1 blocks; 1680 free (0 chunks);
6512 used
PLpgSQL function cache: 24224 total in 2 blocks; 3744 free (0 chunks);
20480 used
Record information cache: 24576 total in 2 blocks; 15984 free (5
chunks); 8592 used
Prepared Queries: 24576 total in 2 blocks; 15984 free (5 chunks); 8592 used
Operator class cache: 8192 total in 1 blocks; 1680 free (0 chunks); 6512
used
Operator lookup cache: 24576 total in 2 blocks; 11888 free (5 chunks);
12688 used
MessageContext: 8192 total in 1 blocks; 6976 free (1 chunks); 1216 used
smgr relation table: 24576 total in 2 blocks; 13904 free (4 chunks);
10672 used
TransactionAbortContext: 32768 total in 1 blocks; 32736 free (0 chunks);
32 used
Portal hash: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 used
PortalMemory: 8192 total in 1 blocks; 7616 free (0 chunks); 576 used
PortalHeapMemory: 2048 total in 1 blocks; 424 free (0 chunks); 1624 used
ExecutorState: 841031232 total in 51159 blocks; 1712232 free (56
chunks); 839319000 used
TIDBitmap: 2088960 total in 8 blocks; 243200 free (25 chunks);
1845760 used
TupleSort: 28303408 total in 13 blocks; 5346944 free (14 chunks);
22956464 used
ExprContext: 8192 total in 1 blocks; 8096 free (0 chunks); 96 used
ExprContext: 8192 total in 1 blocks; 8096 free (0 chunks); 96 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
PortalHeapMemory: 1024 total in 1 blocks; 944 free (0 chunks); 80 used
Relcache by OID: 24576 total in 2 blocks; 13872 free (3 chunks); 10704 used
CacheMemoryContext: 817392 total in 20 blocks; 23488 free (1 chunks);
793904 used
CachedPlan: 15360 total in 4 blocks; 1528 free (0 chunks); 13832 used
CachedPlanSource: 15360 total in 4 blocks; 1440 free (0 chunks); 13920
used
unnamed prepared statement: 8192 total in 1 blocks; 3856 free (2
chunks); 4336 used
CachedPlan: 1024 total in 1 blocks; 312 free (0 chunks); 712 used
CachedPlanSource: 1024 total in 1 blocks; 56 free (0 chunks); 968 used
SPI Plan: 1024 total in 1 blocks; 832 free (0 chunks); 192 used
CachedPlan: 1024 total in 1 blocks; 336 free (0 chunks); 688 used
CachedPlanSource: 1024 total in 1 blocks; 80 free (0 chunks); 944 used
SPI Plan: 1024 total in 1 blocks; 808 free (0 chunks); 216 used
CachedPlan: 3072 total in 2 blocks; 488 free (0 chunks); 2584 used
CachedPlanSource: 1024 total in 1 blocks; 24 free (0 chunks); 1000 used
SPI Plan: 1024 total in 1 blocks; 808 free (0 chunks); 216 used
CachedPlan: 1024 total in 1 blocks; 816 free (0 chunks); 208 used
CachedPlanSource: 1024 total in 1 blocks; 656 free (0 chunks); 368 used
pg_toast_18005_index: 2048 total in 1 blocks; 608 free (0 chunks);
1440 used
reporting_modified_idx: 2048 total in 1 blocks; 704 free (0 chunks);
1344 used
reporting_last_processed_date_idx: 2048 total in 1 blocks; 752 free (0
chunks); 1296 used
reporting_date_idx: 2048 total in 1 blocks; 704 free (0 chunks); 1344
used
lead_created_idx: 2048 total in 1 blocks; 752 free (0 chunks); 1296 used
lead_processing_step_idx: 2048 total in 1 blocks; 752 free (0 chunks);
1296 used
lead_destination_idx: 2048 total in 1 blocks; 752 free (0 chunks);
1296 used
lead_modified_idx: 2048 total in 1 blocks; 752 free (0 chunks); 1296 used
lead_source_idx: 2048 total in 1 blocks; 752 free (0 chunks); 1296 used
processing_state_idx: 2048 total in 1 blocks; 752 free (0 chunks);
1296 used
lead_pkey: 2048 total in 1 blocks; 752 free (0 chunks); 1296 used
CachedPlan: 1024 total in 1 blocks; 816 free (0 chunks); 208 used
CachedPlanSource: 1024 total in 1 blocks; 664 free (1 chunks); 360 used
CachedPlan: 1024 total in 1 blocks; 312 free (0 chunks); 712 used
CachedPlanSource: 1024 total in 1 blocks; 56 free (0 chunks); 968 used
SPI Plan: 1024 total in 1 blocks; 832 free (0 chunks); 192 used
locks_pkey: 2048 total in 1 blocks; 704 free (0 chunks); 1344 used
CachedPlan: 3072 total in 2 blocks; 648 free (0 chunks); 2424 used
CachedPlanSource: 3072 total in 2 blocks; 1248 free (1 chunks); 1824 used
SPI Plan: 1024 total in 1 blocks; 808 free (0 chunks); 216 used
CachedPlan: 1024 total in 1 blocks; 816 free (0 chunks); 208 used
CachedPlanSource: 1024 total in 1 blocks; 664 free (1 chunks); 360 used
xslt_style_sheet_pkey: 2048 total in 1 blocks; 704 free (0 chunks);
1344 used
external_system_user_name_key: 2048 total in 1 blocks; 752 free (0
chunks); 1296 used
external_system_name_key: 2048 total in 1 blocks; 752 free (0 chunks);
1296 used
external_system_pkey: 2048 total in 1 blocks; 752 free (0 chunks);
1296 used
dealer_external_system_id_idx: 2048 total in 1 blocks; 752 free (0
chunks); 1296 used
dealer_dealer_code_key: 2048 total in 1 blocks; 704 free (0 chunks);
1344 used
dealer_pkey: 2048 total in 1 blocks; 752 free (0 chunks); 1296 used
pg_index_indrelid_index: 2048 total in 1 blocks; 704 free (0 chunks);
1344 used
pg_ts_dict_oid_index: 3072 total in 2 blocks; 1744 free (3 chunks);
1328 used
pg_aggregate_fnoid_index: 3072 total in 2 blocks; 1744 free (3
chunks); 1328 used
pg_language_name_index: 3072 total in 2 blocks; 1744 free (3 chunks);
1328 used
pg_statistic_relid_att_index: 3072 total in 2 blocks; 1600 free (2
chunks); 1472 used
pg_ts_dict_dictname_index: 3072 total in 2 blocks; 1648 free (2
chunks); 1424 used
pg_namespace_nspname_index: 3072 total in 2 blocks; 1696 free (2
chunks); 1376 used
pg_opfamily_oid_index: 3072 total in 2 blocks; 1744 free (3 chunks);
1328 used
pg_opclass_oid_index: 3072 total in 2 blocks; 1696 free (2 chunks);
1376 used
pg_ts_parser_prsname_index: 3072 total in 2 blocks; 1648 free (2
chunks); 1424 used
pg_amop_fam_strat_index: 3072 total in 2 blocks; 1384 free (2 chunks);
1688 used
pg_opclass_am_name_nsp_index: 3072 total in 2 blocks; 1576 free (3
chunks); 1496 used
pg_trigger_tgrelid_tgname_index: 3072 total in 2 blocks; 1600 free (2
chunks); 1472 used
pg_cast_source_target_index: 3072 total in 2 blocks; 1600 free (2
chunks); 1472 used
pg_auth_members_role_member_index: 3072 total in 2 blocks; 1648 free
(2 chunks); 1424 used
pg_attribute_relid_attnum_index: 3072 total in 2 blocks; 1600 free (2
chunks); 1472 used
pg_ts_config_cfgname_index: 3072 total in 2 blocks; 1648 free (2
chunks); 1424 used
pg_authid_oid_index: 3072 total in 2 blocks; 1696 free (2 chunks);
1376 used
pg_ts_config_oid_index: 3072 total in 2 blocks; 1744 free (3 chunks);
1328 used
pg_conversion_default_index: 3072 total in 2 blocks; 1432 free (3
chunks); 1640 used
pg_language_oid_index: 3072 total in 2 blocks; 1696 free (2 chunks);
1376 used
pg_enum_oid_index: 3072 total in 2 blocks; 1744 free (3 chunks); 1328
used
pg_proc_proname_args_nsp_index: 3072 total in 2 blocks; 1576 free (3
chunks); 1496 used
pg_ts_parser_oid_index: 3072 total in 2 blocks; 1744 free (3 chunks);
1328 used
pg_database_oid_index: 3072 total in 2 blocks; 1696 free (2 chunks);
1376 used
pg_conversion_name_nsp_index: 3072 total in 2 blocks; 1648 free (2
chunks); 1424 used
pg_class_relname_nsp_index: 3072 total in 2 blocks; 1600 free (2
chunks); 1472 used
pg_attribute_relid_attnam_index: 3072 total in 2 blocks; 1648 free (2
chunks); 1424 used
pg_class_oid_index: 3072 total in 2 blocks; 1696 free (2 chunks); 1376
used
pg_amproc_fam_proc_index: 3072 total in 2 blocks; 1384 free (2
chunks); 1688 used
pg_operator_oprname_l_r_n_index: 3072 total in 2 blocks; 1384 free (2
chunks); 1688 used
pg_index_indexrelid_index: 3072 total in 2 blocks; 1696 free (2
chunks); 1376 used
pg_type_oid_index: 3072 total in 2 blocks; 1696 free (2 chunks); 1376
used
pg_rewrite_rel_rulename_index: 3072 total in 2 blocks; 1648 free (2
chunks); 1424 used
pg_authid_rolname_index: 3072 total in 2 blocks; 1696 free (2 chunks);
1376 used
pg_auth_members_member_role_index: 3072 total in 2 blocks; 1648 free
(2 chunks); 1424 used
pg_enum_typid_label_index: 3072 total in 2 blocks; 1648 free (2
chunks); 1424 used
pg_constraint_oid_index: 3072 total in 2 blocks; 1744 free (3 chunks);
1328 used
pg_conversion_oid_index: 3072 total in 2 blocks; 1744 free (3 chunks);
1328 used
pg_ts_template_tmplname_index: 3072 total in 2 blocks; 1648 free (2
chunks); 1424 used
pg_ts_config_map_index: 3072 total in 2 blocks; 1624 free (3 chunks);
1448 used
pg_namespace_oid_index: 3072 total in 2 blocks; 1696 free (2 chunks);
1376 used
pg_type_typname_nsp_index: 3072 total in 2 blocks; 1600 free (2
chunks); 1472 used
pg_operator_oid_index: 3072 total in 2 blocks; 1696 free (2 chunks);
1376 used
pg_amop_opr_fam_index: 3072 total in 2 blocks; 1600 free (2 chunks);
1472 used
pg_proc_oid_index: 3072 total in 2 blocks; 1696 free (2 chunks); 1376
used
pg_opfamily_am_name_nsp_index: 3072 total in 2 blocks; 1624 free (3
chunks); 1448 used
pg_ts_template_oid_index: 3072 total in 2 blocks; 1744 free (3
chunks); 1328 used
MdSmgr: 8192 total in 1 blocks; 7104 free (0 chunks); 1088 used
LOCALLOCK hash: 24576 total in 2 blocks; 15984 free (5 chunks); 8592 used
Timezones: 53584 total in 2 blocks; 3744 free (0 chunks); 49840 used
ErrorContext: 8192 total in 1 blocks; 8160 free (0 chunks); 32 used


[NOVICE] GRANT question

I am a user, len, and I own a schema, called fec. cs386 is another user.

In psql 8.3.1 (server 8.3.3) I ran a shell script that included these statements


**BEGIN EXCERPTS FROM SHELL SCRIPT

-- I seem to need to say this in every shell script, it doesn't 'stick'
SET search_path TO fec;

....

SELECT fecid, c.commid, occup, month, amount
INTO indivcl
FROM indiv i, comm c WHERE i.commid = c.commid and i.zip = '97223' and
year = 2008
and month < 5 and c.assoccand in ('P80003338', 'P80002801' ,
'S6OR00094' , 'S8OR00207')
;

...

GRANT SELECT ON candcl, commcl, indivcl, pascl TO cs386;

** END EXCERPTS FROM SHELL SCRIPT
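
Incidentally, the per-script SET search_path noted in the comment above
can be made to stick. A sketch (the database name below is hypothetical;
the role and schema names are from the question):

```sql
-- Persist the setting for user len across all future sessions:
ALTER ROLE len SET search_path TO fec;

-- Or for every user connecting to one database (hypothetical name):
-- ALTER DATABASE mydb SET search_path TO fec;
```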

When the user cs386 logs in via the GUI interface phpPgAdmin, s/he
cannot SELECT from indivcl. Then len logs in via that GUI interface,
issues the statement
GRANT SELECT ON indivcl TO cs386;
and then cs386 can access indivcl .

What is wrong? Why did the effect of the "GRANT SELECT ..." statement
in the shell script not persist? I'd appreciate any advice.

Len Shapiro

--
Sent via pgsql-novice mailing list (pgsql-novice@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-novice

Re: [PERFORM] Choosing a filesystem

On Sat, Sep 13, 2008 at 5:26 PM, <david@lang.hm> wrote:
> On Fri, 12 Sep 2008, Merlin Moncure wrote:
>>
>> While this is correct, if heavy writing is sustained, especially on
>> large databases, you will eventually outrun the write cache on the
>> controller and things will start to degrade towards the slow case. So
>> it's fairer to say that caching raid controllers burst up to several
>> thousand per second, with a sustained write rate somewhat better than
>> write-through but much worse than the burst rate.
>>
>> How fast things degrade from the burst rate depends on certain
>> factors...how big the database is relative to the o/s read cache and
>> the controller write cache, and how random the i/o generally is. One
>> thing raid controllers are great at is smoothing bursty i/o during
>> checkpoints, for example.
>>
>> Unfortunately when you outrun cache on raid controllers the behavior
>> is not always very pleasant...in at least one case I've experienced
>> (perc 5/i) when the cache fills up the card decides to clear it before
>> continuing. This means that if fsync is on, you get unpredictable
>> random freezing pauses while the cache is clearing.
>
> although for postgres the thing that you are doing the fsync on is the WAL
> log file. That is a single (usually) contiguous file, so it is very
> efficient to write large chunks of it. So while you will degrade from
> battery-only mode, the fact that the controller can flush many requests'
> worth of writes out to the WAL log at once, while you fill the cache with
> them one at a time, is still a significant win.

The heap files have to be synced as well during checkpoints, etc.

merlin

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

Re: [PATCHES] hash index improving v3

I did some testing of my own on the hash index patch, and mostly seem to
have confirmed Alex's results. I used a table like this:

create table tab (f1 serial primary key, f2 some-datatype);
create index ind on tab using hash (f2);

and populated it with 2 million rows of data; then timed queries
like this:

select * from tab a join tab b using (f2)
where f2 = (select f2 from tab c
            where c.f1 = (select int4(random() * 2e6)));

using pgbench like this:

pgbench -n -c 1 -T 60 -M prepared -f query.sql hashdb

To test "wide" indexed columns I used a text column with entries of 100
random characters (identical to Alex's test). I saw a performance gain
of about 50% with an empty kernel cache (26.9 vs 41.9 tps), dropping to
about 14% once the table and index were fully swapped in (4185 vs 4764
tps). This was in a C-locale database. Presumably the win would have
been significantly greater in a non-C-locale test, but I didn't try it.

To test "narrow" indexed columns I made f2 a bigint containing 2000000
consecutive integers. Here I saw about a 5% improvement with either
empty cache (48.1 vs 50.5 tps) or full cache (4590 vs 4800 tps).
This is not too surprising since about the same amount of I/O is needed
either way, and bigint comparison is very cheap. (This is a 64-bit
machine, and it's defaulting to pass-by-value for bigints, so value
comparisons are hardly more expensive than hashcode comparisons.)
But the patch still wins a bit by being able to do binary search within
index pages.

In both of the above cases there were only a negligible number of hash
collisions. I made up another test case, still 2 million bigints, but
with values chosen so that almost every entry had a hash collision with
another entry (a small fraction had two or even 3 collisions). This
showed about a 25% slowdown compared to CVS HEAD with empty cache
(49.9 tps down to 37.2), decreasing to 3% with full cache (4609 vs 4482
tps).

I experimented with some variant queries that did more hash index
fetches per query, and saw penalties as high as 50%. However, this is
surely by far the worst case for the patch: no savings in index size,
and a ridiculously high collision rate.

Lastly, for comparison's sake I tried the "wide column" case with a
btree instead of hash index. This gave me 31.5 tps with empty cache,
4749 tps with full cache. Note that the btree is losing significantly
to the patched hash index in empty-cache conditions --- this presumably
reflects the much larger index size causing more I/O.

I'm thinking that we should go ahead and apply the patch. AFAIR this is
the first example we've ever seen of hash beating btree for fetch
performance, and it's pretty obvious that removing the indexed value
from the index contents is the reason for the win. So I've lost
interest in the alternative that involved storing both hashcode and
indexed value. We now see a possible production use-case for hash,
namely indexing wide column values.

BTW, one thing I noticed was that the hash index build time for the
"wide column" case got a lot worse after applying the patch (from 56 to
237 sec). The reason for this turned out to be that with the smaller
predicted index size, the code decided not to use the pre-sorting method
that was recently added. Reducing effective_cache_size to less than the
index size brought the time back down, to about 54 sec. So it would
seem that effective_cache_size is too large a cutoff value. I'm
considering changing hashbuild to switch over at shared_buffers instead
of effective_cache_size --- any thoughts about that?

regards, tom lane

--
Sent via pgsql-patches mailing list (pgsql-patches@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-patches

[ANNOUNCE] == PostgreSQL Weekly News - September 14 2008 ==

== PostgreSQL Weekly News - September 14 2008 ==

CommitFest Update: thanks to more than half a dozen new reviewers
volunteering, almost all patches have been assigned reviewers. This
commitfest might actually finish in a week. Patches committed this
week include to_date format validation, GUC setting source display,
Boyer-Moore string searching, Command-line function definition
retrieval, Code coverage of regression tests, New permission:
Truncate, and Improvements in pg_bench, pg_dumpall, psql, PITR and
plan invalidation. Several patches have been sent back for more work,
including Windowing Functions, PL/Proxy (not going to make 8.4,
unfortunately), and Grant Insert on sequences.

Francisco J. Morosini of the Peruvian PostgreSQL group has posted
Podcast No. 3 with David Fetter, PostgreSQL, DBI-Link, PL/Perl,
experiences with PostgreSQL and discussions on the community and how
to welcome new people.
http://www.postgresql.org.pe/?q=node/24

PgUS is now accepting memberships and nominations to its board.
https://www.postgresql.us/node/43

The Russian PostgreSQL community calendar is at
http://www.google.com/calendar/embed?src=4af2not88un2arkgnhhho2h7lk@group.calendar.google.com&ctz=Europe/Moscow

== PostgreSQL Product News ==

Slony-I 1.2.15 released.

== PostgreSQL Jobs for September ==

http://archives.postgresql.org/pgsql-jobs/2008-09/threads.php

== PostgreSQL Local ==

The Russian PostgreSQL community will meet September 16 in Moscow.
http://postgresmen.ru/news/view/113

Selena Deckelmann and Gabrielle Roth will be giving a presentation on
filesystem performance for PostgreSQL at the Linux Plumbers'
Conference.
http://linuxplumbersconf.org/program/speakers/getspeaker.php?speaker=mwong.txt

PDXPUG will meet September 18. Tom Raney will talk about his Visual
Planner tool.
http://pugs.postgresql.org/node/468

The Prato Linux User Group is having PostgreSQL talks in September.
Schedule:
http://www.prato.linux.it/serate_a_tema_2008

PGCon Brazil 2008 will be on September 26-27 at Unicamp in Campinas.
http://pgcon.postgresql.org.br/index.en.html
Registration:
http://pgcon.postgresql.org.br/inscricoes.en.html

PgDay.fr will be October 4 in Toulouse. The Call for Papers is open:
http://www.postgresqlfr.org/?q=node/1686
Registration:
http://www.pgday.fr/doku.php/inscription

The Highload++ conference will be October 6-8 in Moscow, Russia.
Gavin Roy, Asko Oja and Maxim Boguk will talk about things PostgreSQL.
http://highload.ru

PostgreSQL Conference West 2008 will be October 10-12 at Portland
State University in Portland, Oregon.
http://www.postgresqlconference.org/
PostgreSQL Conference West (October 10-12) is now accepting registrations.
http://www.postgresqlconference.org/west08/register

PGDay.(IT|EU) 2008 will be October 17 and 18 in Prato. Registration
will open soon.
http://www.pgday.org/en/

PostgreSQL has a table at LinuxLive, Olympia, London, UK on 23-25
October, 2008. Write to Dave Page to participate.
dpage AT pgadmin DOT org

== PostgreSQL in the News ==

Planet PostgreSQL: http://www.planetpostgresql.org/

PostgreSQL Weekly News is brought to you this week by David Fetter,
Josh Berkus, Michael Brewer, Joshua Drake, Devrim GUNDUZ, Francisco J.
Morosini, and Nikolay Samokhvalov.

Submit news and announcements by Sunday at 3:00pm Pacific time.
Please send English language ones to david@fetter.org, German language
to pwn@pgug.de, Italian language to pwn@itpug.org.

== Applied Patches ==

Tom Lane committed:

- Make pg_dump --data-only try to order the table dumps so that
foreign keys' referenced tables are dumped before the referencing
tables. This avoids failures when the data is loaded with the FK
constraints already active. If no such ordering is possible because
of circular or self-referential constraints, print a NOTICE to warn
the user about it.

- Fix a couple of problems pointed out by Fujii Masao in the
2008-Apr-05 patch for pg_stop_backup. First, it is possible that
the history file name is not alphabetically later than the last WAL
file name, so we should explicitly check that both have been
archived. Second, the previous coding would wait forever if a
checkpoint had managed to remove the WAL file before we look for it.
Simon Riggs, plus some code cleanup by me.

- Improve the plan cache invalidation mechanism to make it invalidate
plans when user-defined functions used in a plan are modified. Also
invalidate plans when schemas, operators, or operator classes are
modified; but for these cases we just invalidate everything rather
than tracking exact dependencies, since these types of objects
seldom change in a production database. Tom Lane; loosely based on
a patch by Martin Pihlak.

- In pgsql/src/pl/plpgsql/src/gram.y, fix a couple of places where the
plpgsql grammar would produce an unhelpful 'syntax error' message,
rather than something that might draw one's attention to a missing
or wrong-type variable declaration. Per recent gripe.

- In pgsql/src/interfaces/libpq/fe-exec.c, avoid using sprintf() for a
simple octal conversion in PQescapeByteaInternal. Improves
performance, per suggestion from Rudolf Leitgeb (bug #4414). The
backend did this right already, but not libpq.

- Make our parsing of INTERVAL literals spec-compliant (or at least a
heck of a lot closer than it was before). To do this, tweak
coerce_type() to pass through the typmod information when invoking
interval_in() on an UNKNOWN constant; then fix DecodeInterval to pay
attention to the typmod when deciding how to interpret a units-less
integer value. I changed one or two other details as well. I
believe the code now reacts as expected by spec for all the literal
syntaxes that are specifically enumerated in the spec. There are
corner cases involving strings that don't exactly match the set of
fields called out by the typmod, for which we might want to tweak
the behavior some more; but I think this is an area of user
friendliness rather than spec compliance. There remain some
non-compliant details about the SQL syntax (as opposed to what's
inside the literal string); but at least we'll throw error rather
than silently doing the wrong thing in those cases.

- In pgsql/src/backend/utils/misc/guc.c, tweak newly added
set_config_sourcefile() so that the target record isn't left corrupt
if guc_strdup should fail.

- Adjust the parser to accept the typename syntax INTERVAL ...
SECOND(n) and the literal syntax INTERVAL 'string' ... SECOND(n), as
required by the SQL standard. Our old syntax put (n) directly after
INTERVAL, which was a mistake, but will still be accepted for
backward compatibility as well as symmetry with the TIMESTAMP cases.
Change intervaltypmodout to show it in the spec's way, too. (This
could potentially affect clients, if there are any that analyze the
typmod of an INTERVAL in any detail.) Also fix interval input to
handle 'min:sec.frac' properly; I had overlooked this case in my
previous patch. Document the use of the interval fields qualifier,
which up to now we had never mentioned in the docs. (I think the
omission was intentional because it didn't work per spec; but it
does now, or at least close enough to be credible.)

- Tighten up to_date/to_timestamp so that they are more likely to
reject erroneous input, rather than silently producing bizarre
results as formerly happened. Brendan Jurd

- Add a duration option to pgbench, so that test length can be
specified in seconds instead of by number of transactions to run.
Takahiro Itagaki

- In pgsql/src/backend/optimizer/path/indxpath.c, skip opfamily check
in eclass_matches_any_index() when the index isn't a btree. We
can't easily tell whether clauses generated from the equivalence
class could be used with such an index, so just assume that they
might be. This bit of over-optimization prevented use of non-btree
indexes for nestloop inner indexscans, in any case where the join
uses an equality operator that is also a btree operator --- which in
particular is typically true for hash indexes. Noted while trying
to test the current hash index patch.

- In pgsql/doc/src/sgml/citext.sgml, update citext's documentation to
match the recently-applied patch, per David Wheeler.

Alvaro Herrera committed:

- In pgsql/src/pl/plpgsql/src/pl_exec.c, improve plpgsql's ability to
report tuple incompatibility problems. Volkan YAZICI.

- Add "source file" and "source line" information to each GUC
variable. initdb forced due to changes in the pg_settings view.
Magnus Hagander and Alvaro Herrera.

- Initialize the minimum frozen Xid in vac_update_datfrozenxid using
GetOldestXmin() instead of RecentGlobalXmin; this is safer because
we do not depend on the latter being correctly set elsewhere, and
while it is more expensive, this code path is not
performance-critical. This is a real risk for autovacuum, because
it can execute whole cycles without doing a single vacuum, which
would mean that RecentGlobalXmin would stay at its initialization
value, FirstNormalTransactionId, causing a bogus value to be
inserted in pg_database. This bug could explain some recent reports
of failure to truncate pg_clog. At the same time, change the
initialization of RecentGlobalXmin to InvalidTransactionId, and
ensure that it's set to something else whenever it's going to be
used. Using it as FirstNormalTransactionId in HOT page pruning
could result in data loss. InitPostgres takes care of setting it to
a valid value, but the extra checks are there to prevent "special"
backends from behaving in unusual ways. Per Tom Lane's detailed
problem dissection in 29544.1221061979@sss.pgh.pa.us

Peter Eisentraut committed:

- In pgsql/doc/src/sgml/docguide.sgml, update man page build
instructions.

== Rejected Patches (for now) ==

No one was disappointed this week :-)

== Pending Patches ==

Brendan Jurd sent in another revision of his patch to add validation
to to_date().

Alvaro Herrera sent in a patch to make GUC source file and line number
visible.

Tatsuo Ishii sent in another revision of the Common Table Expressions
patch.

Tom Lane sent in a patch to make SQL interval literal syntax conform
to SQL:2008.

Heikki Linnakangas sent in another revision of his FSM patch.

Ramon Lawrence sent in two revisions of a patch to disable the
physical-tlist optimization for hash join if the number of batches is
greater than 1.

Simon Riggs sent in another revision of his recovery infrastructure
patch.

David Wheeler sent in two revisions of a patch which adds support for
casting char to the case-insensitive text type, citext.

Heikki Linnakangas sent in a set of performance tests for his FSM
patch.

KaiGai Kohei sent in another set of patches for SE-PostgreSQL.

Asif Naeem sent in a patch which correctly copies plugins to the
lib/plugins directory.


---------------------------(end of broadcast)---------------------------
To unsubscribe from this list, send an email to:

pgsql-announce-unsubscribe@postgresql.org

[SQL] a simple transform

I've thrashed at this transform for quite a while and come up empty. The
crosstab() functions, and the documented examples, all do something more
complex than I need. I can do this after the fact trivially in python with
the 'zip()' function, but I need it real-time from a view or function.

Of course I could be overlooking the obvious (it's happened before!)

I have this:

        attr1   attr2   attr3
        ------  ------  ------
cat1    0.34    0.12    0.22
cat2    0.11    0.67    0.31

and need it transposed thus:

        cat1    cat2
        -----   -----
attr1   0.34    0.11
attr2   0.12    0.67
attr3   0.22    0.31

in case it adds motivation (;^/), cat1 and cat2 are democrat & republican;
the attr's are campaign issues; the values are "aboutness" derived from a
text similarity measure

tia
Karl
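
The transform asked for here can be done without crosstab() by unpivoting the attribute columns and re-aggregating. A minimal sketch, assuming the source table is named scores and the row-label column is named cat (both names invented for illustration):

```sql
SELECT attr,
       max(CASE WHEN cat = 'cat1' THEN val END) AS cat1,
       max(CASE WHEN cat = 'cat2' THEN val END) AS cat2
FROM (SELECT cat, 'attr1' AS attr, attr1 AS val FROM scores
      UNION ALL
      SELECT cat, 'attr2', attr2 FROM scores
      UNION ALL
      SELECT cat, 'attr3', attr3 FROM scores) AS unpivoted
GROUP BY attr
ORDER BY attr;
```

Wrapping the query in CREATE VIEW would give the real-time access from a view that the question asks for.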


--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql

Re: [HACKERS] [Review] Tests citext casts by David Wheeler.

On Sep 12, 2008, at 12:49, Alvaro Herrera wrote:

>> Looks like the IO conversions handle char and "char", so the attached
>> patch just updates the regression test.
>
> There are unresolved conflicts in the patch ...

Bah! Sorry. Let me try that again.

Best,

David

Re: [pgeu-general] LinuxLive UK

On Fri, Sep 05, 2008 at 11:11:51AM +0100, Dave Page wrote:
> Cool thanks - I think we will have enough hands in this case - though
> I'd certainly like to see more.

Hey, stop complaining! Over here in .nl we hardly ever get even one
person to commit to man a booth somewhere!

:)

Gr,

Koen

--
K.F.J. Martens, Sonologic, http://www.sonologic.nl/
Networking, hosting, embedded systems, unix, artificial intelligence.
Public PGP key: http://www.metro.cx/pubkey-gmc.asc
Wondering about the funny attachment your mail program
can't read? Visit http://www.openpgp.org/

Re: [pgsql-www] Mailserver/Majordomo problem

Andreas 'ads' Scherbaum wrote:
>
> Hello,
>
> On Fri, 12 Sep 2008 23:41:02 -0300 Marc G. Fournier wrote:
>
> > I don't know of any problems ... what URL?
>
> As I said: I tried to subscribe to the committers list and clicked the
> "subscribe" button. After several minutes I got the 500 error.
>
> http://mail.postgresql.org/mj/mj_wwwusr/domain=postgresql.org?func=lists-long-full&extra=pgsql-committers

It works for me.


--
Alvaro Herrera http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.


Re: [pgus-general] PgUS Memberships and Board Nominations Now Open

The OpenID login seems not to work: once you log in via OpenID, you get
returned to the site and an "Invalid CAPTCHA token." error pops up,
despite there not being a captcha on the OpenID login page. Has anyone
found a workaround?

While I'm here, a few other things:

> https://www.postgresql.us/join

"501c3" on that page should be written as "501(c)(3)".

The main page should change "is non profit" to "is a non profit
organization".

The main page invites people to view the bylaws, but a "permission denied"
error appears when you try to do so. If a login is required to view the
bylaws (which seems a bad idea), the pages should state so.

The banner at the top of the site still reads "The world's most advanced
open source database" which doesn't make sense as the left-hand side of
the banner is the organization, not the database.


--
Greg Sabino Mullane greg@endpoint.com
End Point Corporation

Re: [GENERAL] Autocommit, isolation level, and vacuum behavior

On Sep 13, 2008, at 4:39 AM, Tomasz Ostrowski wrote:

> On 2008-09-12 15:52, Jack Orenstein wrote:
>
>> Sorry, I misspoke. I have an index, but preferred doing a scan
>> without the index in this case.
>
> Why?
>
> The only reason I can think of is that you'd like to avoid disk
> seeking. But you get at most 1 row in 30 seconds, so disk latency
> (only several milliseconds) can be ignored.
>

Because other parts of our application consume results from the same
query at normal speed.

Jack


Re: [GENERAL] Restore filesystem backup

Patrik Strömstedt wrote:
> Dear all,
>
> Problem solved.

> 6) Cross your fingers... (In my case, the old data was available)

7) Immediately take a new pg_dump backup
8) Initiate a new data directory
9) Restore from brand new pg_dump backup
10) Immediately make sure you are getting real, consistent, backups :)

Sincerely,

Joshua D. Drake


Re: [GENERAL] Restore filesystem backup

Dear all,

Problem solved.

Small "howto" (if someone will end up with the same problem)...

This is all on a "local" system (not on the customers machine).

1) Download and install the "same" PostgreSQL version as the one on the
filesystem dump.
(In my case I had 8.0.1 and I downloaded and installed 8.0.15)

2) Stop the PostgreSQL service

3) Copy all files from the data directory of the filesystem dump, to the
data directory on the newly installed system.

4) Change the owner of the files/directories to the postgres user (* THIS IS
CRUCIAL *)

5) Start the PostgreSQL service

6) Cross your fingers... (In my case, the old data was available)

Patrik Strömstedt

----- Original Message -----
From: "Robert Treat" <xzilla@users.sourceforge.net>
To: <pgsql-general@postgresql.org>
Cc: "Patrik Strömstedt" <patrik@digstud.se>
Sent: Saturday, September 13, 2008 10:04 PM
Subject: Re: [GENERAL] Restore filesystem backup


On Saturday 13 September 2008 09:07:23 Patrik Strömstedt wrote:
> Hi,
>
> I have a big problem.
>
> The backup (done nightly with pg_dump) at one of our customers sites is
> broken (well, it's overwritten and is of no use anymore). What is left is
> a
> filesystem backup that includes the postgresql directories.
>
> I'm trying to restore one of the tables from this "filesystem backup"
> (employee_pass (salaries..)), that has been deleted on the live system.
>
> How (if possible), can I use this filesystem backup to restore the
> database
> into a working setup (off-site), from where I can retrieve data from the
> one table (employee_pass).
>
> This is on Windows 2000 Server, the Postgres version is 8.0.1 (I know, it's
> old...)
>

Honestly, you have a mess on your hands on a number of different levels. I'd
be tempted to just try copying the table files directly between the two
directories, if you can match up the files on disk correctly. Otherwise you
might be forced to try and get some filesystem-level tools going, but I'm not
sure how feasible that is on Windows, especially on such an old version.
Good luck.

--
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL



Re: [pgsql-www] Download links

Dave Page wrote:
> On Mon, Sep 1, 2008 at 10:13 PM, Chander Ganesan <chander@otg-nc.com> wrote:
>
>
>> That being said, the issue that I see with EDB hosting is that they now have
>> access to information about who is downloading from where, information that
>> can be used to determine where they should advertise more heavily, what
>> customers might be investigating PostgreSQL, and even geographical areas
>> they should focus their marketing and product offerings.
>>
>
> The same information that the hosts of the Yum repo, and other
> distributions have. That is why it was decided that EDB would host the
> downloads (by PG people, not EDB), despite the fact that I originally
> wanted to do it through postgresql.org
>
So I just downloaded PostgreSQL plus, and noticed that EDB has an
"information collection" page that asks for a wide range of information
from the user. While this is a "permissive" form (i.e., you need not
fill it out when it is downloaded), it is a form that collects user
information that wouldn't be available elsewhere, and provides a
significant competitive advantage to EDB... It's very targeted
information that isn't generally available elsewhere.

This is exactly the sort of thing that I'm concerned about. Now company X,
Y, Z, etc. can do the same thing, or even OTG can re-package
PostgreSQL and perform the same set of tasks.
>
>> Information that
>> isn't available to the PostgreSQL community-at-large, and information that
>> would likely be useful for lots of other PG related companies, such as OTG,
>> CMD, 2ndQuadrant and others. Is there even a policy with regard to what
>> they do with this information? Does the community provide this information
>> to the public-at-large (I suspect not, for privacy reasons)?
>>
>
> Yes, there is an unwritten policy. It basically says that if you're a
> listed corporate sponsor, you can have one-off traffic reports from
> time to time, subject to the availability of someone on the webteam to
> produce what you need. That will generally give far more info than EDB
> gets from installer downloads.
>
I'm sure I'll ask for this before too long :-) Unfortunately, it's far
less than what EDB gets through their permissive data collection mechanism.

--
Chander Ganesan
Open Technology Group, Inc.
One Copley Parkway, Suite 210
Morrisville, NC 27560
919-463-0999/877-258-8987
http://www.otg-nc.com



Re: [SQL] selecting current UTC time

davidn@xnet.co.nz wrote:
> Hi,
>
> Is it possible to select the current UTC time as type timestamp with time
> zone?
>
> select localtimestamp;
>
> 2008-09-14 21:55:24.367796
>
> select localtimestamp at time zone 'UTC';
>
> 2008-09-15 09:55:42.3478+12
> (not sure what this result is)
>
> select current_timestamp;
>
> 2008-09-14 21:55:57.844422+12
>
> select current_timestamp at time zone 'UTC';
>
> 2008-09-14 09:56:07.661157
> (is type timestamp without time zone)
>
> I know this seems like a simple problem but I have searched the archives
> and am unable to find a solution.
>
> Any ideas?

default=> select localtimestamp, current_timestamp;
timestamp | now
----------------------------+-------------------------------
2008-09-14 08:54:19.488303 | 2008-09-14 08:54:19.488303-04
(1 row)

default=> set time zone utc;
SET
default=> select localtimestamp, current_timestamp;
timestamp | now
----------------------------+-------------------------------
2008-09-14 12:54:23.768571 | 2008-09-14 12:54:23.768571+00
(1 row)
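
The underlying point can be restated as a short sketch: a value of type timestamp with time zone identifies an absolute instant, so there is no separate "UTC flavour" of it; only its textual display follows the session's TimeZone setting:

```sql
-- Already "timestamp with time zone"; the stored instant is zone-independent.
SELECT current_timestamp;

-- AT TIME ZONE 'UTC' gives the UTC wall-clock reading, but (as noted in the
-- question) the result is a plain timestamp without time zone.
SELECT current_timestamp AT TIME ZONE 'UTC';
```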



[SQL] selecting current UTC time

Hi,

Is it possible to select the current UTC time as type timestamp with time
zone?

select localtimestamp;

2008-09-14 21:55:24.367796

select localtimestamp at time zone 'UTC';

2008-09-15 09:55:42.3478+12
(not sure what this result is)

select current_timestamp;

2008-09-14 21:55:57.844422+12

select current_timestamp at time zone 'UTC';

2008-09-14 09:56:07.661157
(is type timestamp without time zone)

I know this seems like a simple problem but I have searched the archives
and am unable to find a solution.

Any ideas?

Thanks.


Re: [DOCS] Russian FAQ page charset problem

Nikolay, for historical reasons the Russian FAQ translation comes in KOI8-R
encoding, but the PostgreSQL site uses UTF-8.

2Bruce: I can convert the Russian translation into UTF-8 and correct the
HTML header. What do you think about it?

2008/9/12 Nikolay Samokhvalov <samokhvalov@gmail.com>:
> Current page with FAQ in Russian
> (http://www.postgresql.org/docs/faqs.FAQ_russian.html) has charset
> problem: the text is in koi-8r, while HTML source has this line:
> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
>
> It'd be great to fix it: either to change meta line or convert
> contents to UTF8 (I personally would choose the latter).
>
> --
> Sincerely yours,
> Nikolay Samokhvalov
> Postgresmen LLC, http://postgresmen.ru
>
> --
> Sent via pgsql-docs mailing list (pgsql-docs@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-docs
>


[pgsql-ru-general] Re: Invitation to the PostgreSQL meetups in September and October

Nikolay, will the talk materials appear anywhere on the Internet as documentation?

On September 12, 2008 at 19:54, Nikolay Samokhvalov
<ns@postgresmen.ru> wrote:
> Hello, Viktor!
>
> We invite you to some interesting Postgres meetups in September and
> October; details here: http://forum.postgresmen.ru/viewtopic.php?f=6&t=24
>
> --
> Sincerely yours,
> Nikolay Samokhvalov
> Postgresmen LLC, http://postgresmen.ru
>


Re: [pgadmin-hackers] searchbar

Hi

On Sat, Sep 13, 2008 at 11:52 AM, M@rton Akos <makos999@gmail.com> wrote:
> Hi Developers,
>
> I'm halfway through writing a search bar for pgadmin3.
> I think the search engine should live in a self-contained floating
> panel, like the object browser.

It should probably be part of the object browser, as that is what you
are searching. Perhaps an expandable panel at the top or bottom.

> I've been writing a method to find object(s) in the tree and then
> collect them into another tree in the searchbar to show the result.
> The user could choose an object and, via the searchbar, select the item
> in the object browser.
> It's useful when the tree in the object browser is too long and I
> forget the names of tables.
> Please give me your opinions and tips about the code!

The first thing that springs to mind is that you cannot just search
the object browser, because a large percentage of it won't necessarily
be populated when the search is done (and the user does not
necessarily know what is and isn't populated at any given time). That
actually makes this quite difficult to do - searching every database
on every server for an object (of potentially unknown type) would be
extremely expensive once you get past just a couple of databases in
total. That cost could be reduced by requiring that a type is
specified, and further by restricting searches to a specific database.

I think we'd need to see a mockup screenshot of the UI to see how that
might pan out, as well as a rough outline of how the code will
actually work.

Sounds like an interesting project though.

--
Dave Page
EnterpriseDB UK: http://www.enterprisedb.com
