Sunday, June 29, 2008

Re: [PERFORM] Out of memory for Select query.

On Jun 29, 2008, at 10:20 PM, Nimesh Satam wrote:

> All,
>
> While running a Select query we get the below error:
>
> ERROR: out of memory
> DETAIL: Failed on request of size 192.
>
> Postgres Conf details:
> shared_buffers = 256000
> work_mem =150000
> max_stack_depth = 16384
> max_fsm_pages = 400000
> version: 8.1.3
>
> We are using 8gb of Primary memory for the server which is used as a
> dedicated database machine.
>
> The data log shows the below message after getting the Out of memory
> error. Also attached the explain for the query. Can someone let us
> know if we have some wrong parameter setup or any solution to the
> problem?
>
> Regards,
> Nimesh.
>


Hi Nimesh,

I'd try decreasing work_mem (try something smaller like 16384 and work
up if you'd like). Since you have lots of hashes being built for this
query, you may simply be running into a limit on process size,
depending on your platform. Also look at "ulimit -a" as the postgres
user to make sure you aren't running into any administrative limits.
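
For example, a minimal sketch (on 8.1 work_mem is given in kilobytes,
and the exact value is something you'd tune):

  -- lower work_mem for this session only, then retry the failing query
  SET work_mem = 16384;   -- 16MB per sort/hash operation
  -- ... re-run the SELECT here ...
  RESET work_mem;         -- back to the server default

Keep in mind that each sort or hash step in a plan can claim up to
work_mem, so a plan with half a dozen hash joins can use several times
that figure.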

Cheers,

Rusty
--
Rusty Conover
InfoGears Inc.
http://www.infogears.com



Re: [HACKERS] Join Removal/ Vertical Partitioning

On Fri, 2008-06-27 at 17:50 -0400, Tom Lane wrote:
> Simon Riggs <simon@2ndquadrant.com> writes:
> > On Thu, 2008-06-26 at 13:42 -0400, Tom Lane wrote:
> >> It might be possible to treat "ignore the RHS" as a join strategy and
> >> try to apply it while forming join relations, which would be late enough
> >> to have all the needed info available.
>
> > Oh, actually have a join node that is a no-op, with a path cost of zero?
>
> Not even that: just return the best path(s) for the LHS as the paths for
> the joinrel.

Much neater. Cool.

--
Simon Riggs

www.2ndQuadrant.com

PostgreSQL Training, Services and Support



[GENERAL] re-using cluster

Hello,

One of my hard disks failed recently, so I had to reinstall my system, but my data was on another disk that did
not fail.

So I want to use that data. I tried initdb -D /storage/pgCluster but I get a "directory not empty" message; of course, I
want to use that existing cluster.

How can I use that data?

Thanks.
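
(A minimal sketch of what usually works, assuming the reinstalled
PostgreSQL is the same major version as the one that created the old
cluster and that the postgres OS user owns the directory. initdb is
only for creating a new cluster, which is why it refuses a non-empty
directory; an existing cluster is used by simply starting the server
against it:)

  pg_ctl -D /storage/pgCluster start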


Re: [HACKERS] VirtualXactLockTableInsert

On Fri, 2008-06-27 at 17:44 +0200, Florian G. Pflug wrote:
> Simon Riggs wrote:
> > When we move from having a virtual xid to having a real xid I don't
> > see any attempt to re-arrange the lock queues. Surely if there are
> > people waiting on the virtual xid, they must be moved across to wait
> > on the actual xid? Otherwise the locking queue will not be respected
> > because we have two things on which people might queue. Anybody
> > explain that?
>
> Locks on real xids serve a different purpose than locks on virtual xids.
> Locks on real xids are used to wait for a transaction that touched a
> certain tuple (in which case it certainly must have acquired a real
> xid) to end. Locks on vxids, on the other hand, are used to wait for
> the ending of transactions which either hold a certain lock or use a
> snapshot with an xmin earlier than some point in time.
>
> indexcmds.c is the only place where VirtualXactLockTableWait() is used -
> the concurrent index creation needs to wait for all transactions to end
> which either might not know about the index (after phase 1 and 2), or
> who might still see tuples not included in the index (before marking the
> index valid).

Thanks,

So there is no attempt to migrate the vxid lock queue onto the xid lock
queue because it doesn't matter now/yet. That seems fragile, but as long
as we know about it we're OK.

--
Simon Riggs

www.2ndQuadrant.com

PostgreSQL Training, Services and Support



Re: [PERFORM] Subquery WHERE IN or WHERE EXISTS faster?

On Jun 28, 2008, at 4:07 PM, Ulrich wrote:

> Hi,
> I have added a bit of dummy Data, 100000 processors, 10000 users,
> each user got around 12 processors.
>
> I have tested both queries. First of all, I was surprised that it is
> that fast :) Here are the results:
>
>
> EXPLAIN ANALYZE SELECT speed FROM processors WHERE id IN (SELECT
> processorid FROM users_processors WHERE userid=4040) ORDER BY speed
> ASC LIMIT 10 OFFSET 1;
>
> Limit (cost=113.73..113.75 rows=7 width=5) (actual
> time=0.335..0.340 rows=10 loops=1)
> -> Sort (cost=113.73..113.75 rows=8 width=5) (actual
> time=0.332..0.333 rows=11 loops=1)
> Sort Key: processors.speed
> Sort Method: quicksort Memory: 17kB
> -> Nested Loop (cost=47.22..113.61 rows=8 width=5) (actual
> time=0.171..0.271 rows=13 loops=1)
> -> HashAggregate (cost=47.22..47.30 rows=8 width=4)
> (actual time=0.148..0.154 rows=13 loops=1)
> -> Bitmap Heap Scan on users_processors
> (cost=4.36..47.19 rows=12 width=4) (actual time=0.074..0.117 rows=13
> loops=1)
> Recheck Cond: (userid = 4040)
> -> Bitmap Index Scan on
> users_processors_userid_index (cost=0.00..4.35 rows=12 width=0)
> (actual time=0.056..0.056 rows=13 loops=1)
> Index Cond: (userid = 4040)
> -> Index Scan using processors_pkey on processors
> (cost=0.00..8.28 rows=1 width=9) (actual time=0.006..0.007 rows=1
> loops=13)
> Index Cond: (processors.id =
> users_processors.processorid)
> Total runtime: 0.471 ms
> (13 rows)
>
> ___________
>
> EXPLAIN ANALYZE SELECT speed FROM processors WHERE EXISTS (SELECT 1
> FROM users_processors WHERE userid=4040 AND
> processorid=processors.id) ORDER BY speed ASC LIMIT 10 OFFSET 1;
>
> Limit (cost=831413.86..831413.89 rows=10 width=5) (actual
> time=762.475..762.482 rows=10 loops=1)
> -> Sort (cost=831413.86..831538.86 rows=50000 width=5) (actual
> time=762.471..762.473 rows=11 loops=1)
> Sort Key: processors.speed
> Sort Method: quicksort Memory: 17kB
> -> Seq Scan on processors (cost=0.00..830299.00 rows=50000
> width=5) (actual time=313.591..762.411 rows=13 loops=1)
> Filter: (subplan)
> SubPlan
> -> Index Scan using users_processors_pkey on
> users_processors (cost=0.00..8.29 rows=1 width=0) (actual
> time=0.006..0.006 rows=0 loops=100000)
> Index Cond: ((userid = 4040) AND (processorid =
> $0))
> Total runtime: 762.579 ms
> (10 rows)
>
>
>
>
> As you can see the second query is much slower. First I thought
> "Just a difference of 0.3ms?", but then I realized that it was 762ms
> not 0.762 ;-).
> Both queries return the same result, so I will use #1 and count(*)
> takes just 0.478ms if I use query #1.
>


This is what I've found with tables ranging in the millions of rows.

Using IN is better when you've got lots of rows to check against the
IN set and the IN set may be large and possibly complicated to
retrieve (e.g. lots of joins, or expensive functions).

Postgres will normally build a hash table of the IN set and just
search that hash table. It's especially fast if the entire hash table
that is built can fit into RAM. The cpu/io cost of building the IN
set can be quite large because it needs to fetch every tuple to hash
it, but this can be faster than searching tuple by tuple through
possibly many indexes and tables like EXISTS does. I like to increase
work_mem a lot (512MB and up) if I know I'm going to be doing a lot of
matches against a large IN set of rows, because I'd prefer that hash
table never to be written to disk. (See the sketch below.)
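
For example (a sketch using the tables from this thread; the size is
something to tune, and SET LOCAL with a unit string is 8.2+ syntax):

  BEGIN;
  SET LOCAL work_mem = '512MB';  -- applies to this transaction only
  SELECT speed FROM processors
   WHERE id IN (SELECT processorid FROM users_processors);
  COMMIT;                        -- work_mem reverts automatically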

EXISTS is better when you're doing fewer matches, because it will pull
the rows out one at a time from its query, possibly using indexes. Its
main advantage is that it doesn't pull all of the tuples before it
starts processing matches.

So in summary both are good to know how to use, but choosing which one
to use can really depend on your data set and resources.

Cheers,

Rusty
--
Rusty Conover
InfoGears Inc.
http://www.infogears.com


Re: [HACKERS] VirtualXactLockTableInsert

On Fri, 2008-06-27 at 18:00 -0400, Tom Lane wrote:
> Simon Riggs <simon@2ndquadrant.com> writes:
> > In cases where we know we will assign a real xid, can we just skip the
> > assignment of the virtual xid completely?
>
> Even if we could do this I doubt it would be a good idea. It'd destroy
> the invariant that all transactions have a vxid, which at the very least
> would create naming problems.

Ahh, no, I meant go straight to assigning a real xid, to avoid the
wasted effort in inserting a vxid *and* a real xid.

--
Simon Riggs

www.2ndQuadrant.com

PostgreSQL Training, Services and Support



Re: [GENERAL] tsearch strategy for incremental search

Pierre,

You can probably use a custom configuration which uses the pg_3chars
dictionary.

Oleg
On Mon, 30 Jun 2008, Pierre Thibaudeau wrote:

> 2008/6/30 Oleg Bartunov <oleg@sai.msu.su>:
>> tsearch will have prefix search support in 8.4.
>
> Thanks Oleg! That's fantastic news!
>
> In the meantime, carrying on with my earlier idea, here's the little
> function I came up with for extracting the 3-char-lexeme tsvector, in
> case anyone's interested:
>
> CREATE OR REPLACE FUNCTION public.to_three_char_tsvector(str text)
> RETURNS tsvector AS
> $BODY$declare
> somerow record;
> shortened_text text := '';
> BEGIN
> FOR somerow IN SELECT regexp_split_to_table(trim(both '\'' from
> strip(to_tsvector('simple',str))::text),'\'') AS item LOOP
> shortened_text := shortened_text || ' ' ||
> COALESCE(substring(somerow.item for 3), '');
> END LOOP;
> RETURN strip(to_tsvector(shortened_text));
> END;$BODY$
> LANGUAGE 'plpgsql' IMMUTABLE;
>
>

Regards,
Oleg
_____________________________________________________________
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83


Re: [GENERAL] tsearch strategy for incremental search

2008/6/30 Oleg Bartunov <oleg@sai.msu.su>:
> tsearch will have prefix search support in 8.4.

Thanks Oleg! That's fantastic news!

In the meantime, carrying on with my earlier idea, here's the little
function I came up with for extracting the 3-char-lexeme tsvector, in
case anyone's interested:

CREATE OR REPLACE FUNCTION public.to_three_char_tsvector(str text)
RETURNS tsvector AS
$BODY$
DECLARE
    somerow record;
    shortened_text text := '';
BEGIN
    -- split the text form of the 'simple' tsvector on the quote
    -- characters and keep the first three characters of each lexeme
    FOR somerow IN
        SELECT regexp_split_to_table(
                   trim(both '\'' from strip(to_tsvector('simple', str))::text),
                   '\'') AS item
    LOOP
        shortened_text := shortened_text || ' ' ||
                          COALESCE(substring(somerow.item for 3), '');
    END LOOP;
    RETURN strip(to_tsvector(shortened_text));
END;
$BODY$
LANGUAGE 'plpgsql' IMMUTABLE;


Re: [GENERAL] Probably been asked a hundred times before.

I'm using Ubuntu for my development server. The live update updated
postgres either the day of or the day after 8.3.3 came out. Can't
complain about that.

Artacus


[ANNOUNCE] == PostgreSQL Weekly News - June 29 2008 ==

== PostgreSQL Weekly News - June 29 2008 ==

July's commit-fest is starting soon.

== PostgreSQL Product News ==

pgsnap 0.4 released.
http://pgsnap.projects.postgresql.org/

PL/Proxy 2.0.6 released.
http://pgfoundry.org/projects/plproxy/

pyreplica 1.0 released.
http://pgfoundry.org/projects/pyreplica/

== PostgreSQL Jobs for June ==

http://archives.postgresql.org/pgsql-jobs/2008-06/threads.php

== PostgreSQL Local ==

pgDay Portland is July 20, just before OSCON.
http://pugs.postgresql.org/node/400

PGCon Brazil 2008 will be on September 26-27 at Unicamp in Campinas.
http://pgcon.postgresql.org.br/index.en.html

PGDay.IT 2008 will be October 17 and 18 in Prato.
http://www.pgday.org/it/

== PostgreSQL in the News ==

Planet PostgreSQL: http://www.planetpostgresql.org/

General Bits, Archives and occasional new articles:
http://www.varlena.com/GeneralBits/

PostgreSQL Weekly News is brought to you this week by David Fetter

Submit news and announcements by Sunday at 3:00pm Pacific time.
Please send English language ones to david@fetter.org, German language
to pwn@pgug.de, Italian language to pwn@itpug.org.

== Applied Patches ==

Tom Lane committed:

- Fix Gen_fmgrtab.sh to not rely on hard-wired knowledge of the column
numbers in pg_proc. Also make it not emit duplicate extern
declarations, and make it a bit more bulletproof in some other small
ways. Likewise fix the equally hard-wired, and utterly
undocumented, knowledge in the MSVC build scripts. For testing
purposes and perhaps other uses in future, pull out that portion of
the MSVC scripts into a standalone perl script equivalent to
Gen_fmgrtab.sh, and make it generate actually identical output,
rather than just more-or-less-the-same output. Motivated by looking
at Pavel's variadic function patch. Whether or not that gets
accepted, we can be sure that pg_proc's column set will change again
in the future; it's time to not have to deal with this gotcha.

- Oops, make the MSVC build put fmgroids.h where it needs to be. Per
buildfarm results.

- Reduce the alignment requirement of type "name" from int to char,
and arrange to suppress zero-padding of "name" entries in indexes.
The alignment change is unlikely to save any space, but it is really
needed anyway to make the world safe for our widespread practice of
passing plain old C strings to functions that are declared as taking
Name. In the previous coding, the C compiler was entitled to assume
that a Name pointer was word-aligned; but we were failing to
guarantee that. I think the reason we'd not seen failures is that
usually the only thing that gets done with such a pointer is
strcmp(), which is hard to optimize in a way that exploits
word-alignment. Still, some enterprising compiler guy will probably
think of a way eventually, or we might change our code in a way that
exposes more-obvious optimization opportunities. The padding change
is accomplished in one-liner fashion by declaring the "name" index
opclasses to use storage type "cstring" in pg_opclass.h. Normally
btree and hash don't allow a nondefault storage type, because they
don't have any provisions for converting the input datum to another
type. However, because name and cstring are effectively the same
thing except for padding, no conversion is needed --- we only need
index_form_tuple() to treat the datum as being cstring not name, and
this is sufficient. This seems to make for about a one-third
reduction in the typical sizes of system catalog indexes that
involve "name" columns, of which we have many. These two changes
are only weakly related, but the alignment change makes me feel
safer that the padding change won't introduce problems, so I'm
committing them together.

- Modify the recently-added probe for -Wl,--as-needed some more,
because RHEL-4 vintage Linux is even more broken than we realized: a
link to libreadline will succeed, and fail only at runtime. It
seems that an AC_TRY_RUN test is the only reliable way to check
whether this is really safe. Per report from Tatsuo Ishii.

- In pgsql/doc/src/sgml/plpgsql.sgml, clarify plpgsql documentation by
not treating IF THEN ELSE IF ... as a truly distinct version of IF.
Per suggestion from Marko Kreen.

- Improve planner's estimation of the size of an append relation:
rather than taking the maximum of any child rel's width, we should
weight the widths proportionally to the number of rows expected from
each child. In hindsight this is obviously correct because row
width is really a proxy for the total physical size of the relation.
Per discussion with Scott Carey (bug #4264).

- Consider a clause to be outerjoin_delayed if it references the
nullable side of any lower outer join, even if it also references
the non-nullable side and so could not get pushed below the outer
join anyway. We need this in case the clause is an OR clause: if it
doesn't get marked outerjoin_delayed, create_or_index_quals() could
pull an indexable restriction for the nullable side out of it,
leading to wrong results as demonstrated by today's bug report from
toruvinn. (See added regression test case for an example.) In
principle this has been wrong for quite a while. In practice I
don't think any branch before 8.3 can really show the failure,
because create_or_index_quals() will only pull out indexable
conditions, and before 8.3 those were always strict. So though we
might have improperly generated null-extended rows in the outer
join, they'd get discarded from the result anyway. The gating
factor that makes the failure visible is that 8.3 considers "col IS
NULL" to be indexable. Hence I'm not going to risk back-patching
further than 8.3.

- If pnstrdup is going to be promoted to a generally available
function, it ought to conform to the rest of palloc.h in using Size
for sizes.

- In pgsql/src/backend/access/gin/ginbulk.c, remove unnecessary
coziness of GIN code with datum copying. Now that space is tracked
via GetMemoryChunkSpace, there's really no advantage to duplicating
datumCopy's innards here. This is one bit of my toast indirection
patch that should go in anyway.

Bruce Momjian committed:

- Merge duplicate upper/lower/initcap() routines in oracle_compat.c
and formatting.c to use common code; remove duplicate functions and
support routines that are no longer needed.

- Add TODO: "Consider whether duplicate keys should be sorted by
block/offset."

- Add libpq comment about how to determine the format used for passing
binary values. Add comments to libpq C function for parameter
passing.

- Add to TODO: "Implement a module capability for loading
/contrib-style extensions."

- Add to TODO: "Allow custom variables to appear in pg_settings()."

- In pgsql/doc/src/sgml/backup.sgml, backup wording improvement.
Joshua D. Drake.

- Add URL for Merge TODO.

- Add to TODO: "Allow COPY to report errors sooner."

- Mark TODO as done: "Prevent pg_dump/pg_restore from being affected
by statement_timeout."

- Remove use of postmaster.opts.default by pg_ctl.

- Use SYSTEMQUOTE as concatenation to strings, rather than %s printf
patterns, for clarity.

- Fix 'pg_ctl restart' to preserve command-line arguments.

- Add to TODO: "Improve LDAP authentication configuration options."

- Update FAQ URLs as suggested by Curtis Gallant.

- Simplify 'pg_ctl restart' detection of first argument in
postmaster.opts.

- More FAQ URL updates from Curtis Gallant.

- Add TODO about security: "Improve server security options."

- In pgsql/src/bin/pg_ctl/pg_ctl.c, fix pg_ctl bug where detection of
binary location from postmaster.opts wasn't working.

- Add MERGE TODO URL.

- In pgsql/doc/src/sgml/syntax.sgml, use SGML table to show backslash
string escapes, rather than have them appear in a paragraph. Andy
Anderson.

- Fix 'pg_ctl reload' to properly preserve postmaster command-line
arguments on restart. Patch to releases 8.0 - 8.3.X.

- Add to TODO: "Fix system views like pg_stat_all_tables to use
set-returning functions, rather than views of per-column functions."

- In pgsql/doc/src/sgml/libpq.sgml, mention actual function names in
documentation of how to pass binary values to libpq.

Michael Meskes committed:

- Synced parser for ecpg.

Teodor Sigaev committed:

- In pgsql/src/backend/utils/adt/formatting.c, fix bug where "select
lower('asd') = 'asd'" returns false with a multibyte encoding and
non-C locale. The fix is just to use the correct source length for
the char2wchar call.

Magnus Hagander committed:

- In pgsql/src/interfaces/libpq/win32.mak, fix standalone libpq build
on win32. Hiroshi Saito.

== Rejected Patches (for now) ==

Raneyt AT cecs.pdx.edu's patch implementing XML output for EXPLAIN.
No documentation.

== Pending Patches ==

Heikki Linnakangas sent in another revision of his "relation forks"
patch and another that rewrites the FSM.

David Wheeler sent in two revisions of a patch to add a
case-insensitive text type to PostgreSQL.

Thomas Lee sent in two revisions of a patch to add a new GUC called
track_activity_query_size, per the discussion at
http://archives.postgresql.org/pgsql-hackers/2008-06/msg00814.php

ITAGAKI Takahiro sent in two revisions of a patch to fix the
archive/restore docs for Windows.

ITAGAKI Takahiro sent in a patch to create an executor hook for
logging SQL statements.



[HACKERS] A new take on the foot-gun meme

[ after recovering from choking... ]

Tom "spot" Callaway presents a vivid new image in this line:

> What you're doing is analogous to using a loaded shotgun as a golf club,
> and what you're suggesting is that we take the safety off, because it
> interferes with your golf game.

https://www.redhat.com/archives/fedora-devel-list/2008-June/msg01501.html

So is that a "golf club gun"?

regards, tom lane


Re: [PERFORM] Subquery WHERE IN or WHERE EXISTS faster?

"Jaime Casanova" <jcasanov@systemguards.com.ec> writes:
> I've made some queries run faster using EXISTS instead of large IN
> clauses... actually, it was NOT EXISTS replacing a NOT IN

That's just about entirely unrelated ...

regards, tom lane


Re: [PERFORM] Subquery WHERE IN or WHERE EXISTS faster?

On Sat, Jun 28, 2008 at 10:53 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Ulrich <ulrich.mierendorff@gmx.net> writes:
>> People say that [EXISTS is faster]
>
> People who say that are not reliable authorities, at least as far as
> Postgres is concerned. But it is always a bad idea to extrapolate
> results on toy tables to large tables --- quite aside from measurement
> noise and caching issues, the planner might pick a different plan when
> faced with large tables. Load up a realistic amount of data and then
> see what you get.
>

I've made some queries run faster using EXISTS instead of large IN
clauses... actually, it was NOT EXISTS replacing a NOT IN.

While I'm not saying EXISTS is always better, I do know that in some cases it is.

--
Regards,
Jaime Casanova
PostgreSQL support and training
Guayaquil - Ecuador
Cel. (593) 87171157


[PERFORM] Out of memory for Select query.

QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
HashAggregate (cost=2552680.87..2553501.76 rows=54726 width=135)
-> Hash Join (cost=1008.35..2550628.65 rows=54726 width=135)
Hash Cond: ("outer".s_key = "inner"."key")
-> Hash Join (cost=803.07..2548507.96 rows=54726 width=108)
Hash Cond: ("outer".cp_key = "inner"."key")
-> Hash Join (cost=687.96..2547161.51 rows=54726 width=82)
Hash Cond: ("outer".ch_key = "inner"."key")
-> Hash Join (cost=601.21..2545843.43 rows=54726 width=60)
Hash Cond: ("outer".cr_key = "inner"."key")
-> Hash Join (cost=265.72..2544002.98 rows=54726 width=62)
Hash Cond: ("outer".go_key = "inner"."key")
-> Hash Join (cost=23.12..2541844.97 rows=54726 width=62)
Hash Cond: ("outer".adv_key = "inner"."key")
-> Nested Loop (cost=0.00..2541000.08 rows=54901 width=48)
-> Nested Loop (cost=0.00..4.14 rows=2 width=16)
-> Seq Scan on r_n (cost=0.00..1.06 rows=1 width=4)
Filter: (id = 607)
-> Index Scan using r_d_sqldt_idx on r_da (cost=0.00..3.05 rows=2 width=12)
Index Cond: ((sqldate >= '2008-01-01 00:00:00'::timestamp without time zone) AND (sqldate <= '2008-01-31 00:00:00'::timestamp without time zone))
-> Index Scan using r_m_nw_date_idx on r_m (cost=0.00..1259386.12 rows=740790 width=48)
Index Cond: ((r_m.nw_key = "outer"."key") AND (r_m.date_key = "outer"."key"))
-> Hash (cost=20.77..20.77 rows=939 width=22)
-> Seq Scan on r_adv (cost=0.00..20.77 rows=939 width=22)
Filter: ((name)::text <> 'SYSTEM'::text)
-> Hash (cost=218.28..218.28 rows=9728 width=8)
-> Seq Scan on rg (cost=0.00..218.28 rows=9728 width=8)
-> Hash (cost=302.39..302.39 rows=13239 width=6)
-> Seq Scan on rc (cost=0.00..302.39 rows=13239 width=6)
-> Hash (cost=77.20..77.20 rows=3820 width=30)
-> Seq Scan on r_c (cost=0.00..77.20 rows=3820 width=30)
-> Hash (cost=104.89..104.89 rows=4089 width=34)
-> Seq Scan on r_cm (cost=0.00..104.89 rows=4089 width=34)
-> Hash (cost=181.42..181.42 rows=9542 width=35)
-> Seq Scan on r_s (cost=0.00..181.42 rows=9542 width=35)
(34 rows)
All,
 
While running a Select query we get the below error:
 
ERROR:  out of memory
DETAIL:  Failed on request of size 192.
 
Postgres Conf details:
shared_buffers = 256000
work_mem =150000
max_stack_depth = 16384
max_fsm_pages = 400000
version: 8.1.3
 
We are using 8gb of Primary memory for the server which is used as a dedicated database machine.
 
The data log shows the below message after getting the out of memory error; the explain for the query is also attached. Can someone let us know if we have some wrong parameter setup, or suggest any solution to the problem?
 
Regards,
Nimesh.
 

TopMemoryContext: 57344 total in 6 blocks; 9504 free (12 chunks); 47840 used
TopTransactionContext: 8192 total in 1 blocks; 7856 free (0 chunks); 336 used
Type information cache: 8192 total in 1 blocks; 1864 free (0 chunks); 6328 used
Operator class cache: 8192 total in 1 blocks; 4936 free (0 chunks); 3256 used
MessageContext: 1040384 total in 7 blocks; 263096 free (4 chunks); 777288 used
JoinRelHashTable: 8192 total in 1 blocks; 3888 free (0 chunks); 4304 used
smgr relation table: 8192 total in 1 blocks; 1840 free (0 chunks); 6352 used
Portal hash: 8192 total in 1 blocks; 3912 free (0 chunks); 4280 used
PortalMemory: 8192 total in 1 blocks; 8040 free (0 chunks); 152 used
PortalHeapMemory: 1024 total in 1 blocks; 856 free (0 chunks); 168 used
ExecutorState: 122880 total in 4 blocks; 51840 free (6 chunks); 71040 used
HashTableContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
HashBatchContext: 2089044 total in 8 blocks; 573232 free (12 chunks); 1515812 used
HashTableContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
HashBatchContext: 2080768 total in 7 blocks; 749448 free (11 chunks); 1331320 used
HashTableContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
HashBatchContext: 245760 total in 4 blocks; 109112 free (4 chunks); 136648 used
HashTableContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
HashBatchContext: 1032192 total in 6 blocks; 504104 free (8 chunks); 528088 used
HashTableContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
HashBatchContext: 1032192 total in 6 blocks; 474456 free (8 chunks); 557736 used
HashTableContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
HashBatchContext: 2080768 total in 7 blocks; 783856 free (11 chunks); 1296912 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
.
.
.

AggContext: 941613056 total in 129 blocks; 13984 free (154 chunks); 941599072 used
TupleHashTable: 113303576 total in 24 blocks; 1347032 free (74 chunks); 111956544 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 8192 total in 1 blocks; 8176 free (0 chunks); 16 used
Relcache by OID: 8192 total in 1 blocks; 3376 free (0 chunks); 4816 used
CacheMemoryContext: 516096 total in 6 blocks; 12080 free (0 chunks); 504016 used
rg_key_idx: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
rg_idx: 1024 total in 1 blocks; 256 free (0 chunks); 768 used
rg_id_idx: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
rc_key_idx: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
r_c_idx: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
r_c_id_idx: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
r_ch_key_idx: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
r_ch_id_idx: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
r_ch_cd: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
r_cm_key_idx: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
r_c_m_id_idx: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
r_s_key_idx: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
r_p_id_idx: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
r_p_cd_idx: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
r_a_key_idx: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
r_a_v_id_idx: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
r_d_sqldt_idx: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
r_da_key_idx: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
r_nw_key_idx: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
r_n_id_idx: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
r_m_network_date_idx: 1024 total in 1 blocks; 328 free (0 chunks); 696 used
pg_index_indrelid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_attrdef_adrelid_adnum_index: 1024 total in 1 blocks; 328 free (0 chunks); 696 used
pg_type_typname_nsp_index: 1024 total in 1 blocks; 328 free (0 chunks); 696 used
pg_type_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_trigger_tgrelid_tgname_index: 1024 total in 1 blocks; 328 free (0 chunks); 696 used
pg_statistic_relid_att_index: 1024 total in 1 blocks; 328 free (0 chunks); 696 used
pg_auth_members_member_role_index: 1024 total in 1 blocks; 328 free (0 chunks); 696 used
pg_auth_members_role_member_index: 1024 total in 1 blocks; 328 free (0 chunks); 696 used
pg_rewrite_rel_rulename_index: 1024 total in 1 blocks; 328 free (0 chunks); 696 used
pg_proc_proname_args_nsp_index: 1024 total in 1 blocks; 256 free (0 chunks); 768 used
pg_proc_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_operator_oprname_l_r_n_index: 1024 total in 1 blocks; 192 free (0 chunks); 832 used
pg_operator_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_opclass_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_opclass_am_name_nsp_index: 1024 total in 1 blocks; 256 free (0 chunks); 768 used
pg_namespace_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_namespace_nspname_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_language_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_language_name_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_inherits_relid_seqno_index: 1024 total in 1 blocks; 328 free (0 chunks); 696 used
pg_index_indexrelid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_authid_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_authid_rolname_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_database_datname_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_conversion_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_conversion_name_nsp_index: 1024 total in 1 blocks; 328 free (0 chunks); 696 used
pg_conversion_default_index: 1024 total in 1 blocks; 192 free (0 chunks); 832 used
pg_class_relname_nsp_index: 1024 total in 1 blocks; 328 free (0 chunks); 696 used
pg_class_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_cast_source_target_index: 1024 total in 1 blocks; 328 free (0 chunks); 696 used
pg_attribute_relid_attnum_index: 1024 total in 1 blocks; 328 free (0 chunks); 696 used
pg_attribute_relid_attnam_index: 1024 total in 1 blocks; 328 free (0 chunks); 696 used
pg_amproc_opc_proc_index: 1024 total in 1 blocks; 256 free (0 chunks); 768 used
pg_amop_opr_opc_index: 1024 total in 1 blocks; 328 free (0 chunks); 696 used
pg_amop_opc_strat_index: 1024 total in 1 blocks; 256 free (0 chunks); 768 used
pg_aggregate_fnoid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
MdSmgr: 8192 total in 1 blocks; 5584 free (0 chunks); 2608 used
LockTable (locallock hash): 8192 total in 1 blocks; 3912 free (0 chunks); 4280 used
Timezones: 47592 total in 2 blocks; 5968 free (0 chunks); 41624 used
ErrorContext: 8192 total in 1 blocks; 8176 free (0 chunks); 16 used
2008-06-29 20:48:25 PDT [13980]: [5-1] ERROR:  out of memory
2008-06-29 20:48:25 PDT [13980]: [6-1] DETAIL:  Failed on request of size 192.


Re: Does anything dump per-database config settings? (was Re: [HACKERS] ALTER DATABASE vs pg_dump)

Alvaro Herrera <alvherre@commandprompt.com> writes:
> Robert Treat wrote:
>> Certainly not desired by a number of people I have talked to, but I don't have
>> much hope in seeing the behavior change... perhaps someday if we get around
>> to merging pg_dump and pg_dumpall....

> I have never heard anyone say the current behavior is something they desired.

So put forward a worked-out proposal for some other behavior.

My first thought is that the -c and -C options create a lot of the
issues in this area. -c in particular is evidently meant for merging a
dump into a database that already contains unrelated objects. (In fact
you could argue that the *default* behavior is meant for this, -c just
changes the result for conflicts.) It seems unlikely that having
pg_dump issue ALTER DATABASE SET commands is a good idea in all of these
scenarios.

I'm also wondering why it'd be bright to treat ALTER ... SET properties
differently from, say, database owner and encoding properties.

regards, tom lane


Re: [PERFORM] sequence scan problem

John Beaver <john.e.beaver@gmail.com> writes:
> Can anyone explain this? There should only ever be a maximum of about 50
> rows returned when the query is executed.

Is the estimate that 197899 rows of gene_prediction_view have
go_term_ref = 2 about right? If not, then we need to talk about
fixing your statistics. If it is in the right ballpark then I do
not see *any* plan for this query that runs in small time.
The only way to avoid a seqscan on functional_linkage_scores would
be to do 198K^2 index probes into it, one for each combination of
matching fs1 and fs2 rows; I can guarantee you that that's not a win.

The fact that the planner is estimating 352770803726 result rows
compared to your estimate of 50 offers some hope that it's a stats
problem, but ...
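
(A sketch of how one might check, assuming gene_prediction_view is a
plain table despite the name; pg_stats and ANALYZE are standard:)

  -- what does the planner believe about go_term_ref?
  SELECT n_distinct, most_common_vals, most_common_freqs
    FROM pg_stats
   WHERE tablename = 'gene_prediction_view' AND attname = 'go_term_ref';

  -- if the estimate is badly off, raise the sample size and re-analyze
  ALTER TABLE gene_prediction_view
    ALTER COLUMN go_term_ref SET STATISTICS 100;
  ANALYZE gene_prediction_view;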

regards, tom lane


Re: Does anything dump per-database config settings? (was Re: [HACKERS] ALTER DATABASE vs pg_dump)

Robert Treat wrote:
> On Friday 27 June 2008 12:58:41 Richard Huxton wrote:

> > > Am I doing something stupid here?
> >
> > OK - so to get the ALTER DATABASE commands I need to dump the schema for
> > the entire cluster. Is that really desired behaviour?
>
> Certainly not desired by a number of people I have talked to, but I don't have
> much hope in seeing the behavior change... perhaps someday if we get around
> to merging pg_dump and pg_dumpall....

I have never heard anyone say the current behavior is something they desired.

--
Alvaro Herrera

http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.


Re: [pgsql-es-ayuda] Return Query

On Sun, Jun 29, 2008 at 6:19 PM, Edwin Quijada
<listas_quijada@hotmail.com> wrote:
>
> I get this error when I do it this way
>
> ERROR: structure of query does not match function result type
> CONTEXT: PL/pgSQL function "f1" line 2 at RETURN QUERY
>

Note that there must be as many OUT parameters as there are columns in the SELECT.
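
(A sketch of a version that satisfies this, using the table name from
the thread; the explicit casts guard against the related
varchar-vs-text type mismatch, which raises the same error even when
the column counts match:)

  CREATE OR REPLACE FUNCTION f1(OUT id_record integer, OUT alias text)
  RETURNS SETOF record AS $$
  BEGIN
    -- exactly two columns, with types matching the OUT parameters
    RETURN QUERY SELECT u.id_record::integer, u.alias::text
                   FROM uno_email_alias u;
  END;
  $$ LANGUAGE plpgsql;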

--
Regards,
Jaime Casanova
PostgreSQL support and training
Guayaquil - Ecuador
Cel. (593) 87171157

Re: Does anything dump per-database config settings? (was Re: [HACKERS] ALTER DATABASE vs pg_dump)

On Friday 27 June 2008 12:58:41 Richard Huxton wrote:
> Richard Huxton wrote:
> > Richard Huxton wrote:
> >> At present it means you can't reliably do:
> >> DROP DATABASE foo;
> >> pg_restore --create foo.dump
> >> I'd then have to either hand edit the dumpall dump or wade through a
> >> bunch of errors checking that none of them were relevant.
> >
> > Actually, I'm not sure pg_dumpall does them either.
>
> [snip]
>
> > Am I doing something stupid here?
>
> OK - so to get the ALTER DATABASE commands I need to dump the schema for
> the entire cluster. Is that really desired behaviour?
>

Certainly not desired by a number of people I have talked to, but I don't have
much hope in seeing the behavior change... perhaps someday if we get around
to merging pg_dump and pg_dumpall....

--
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL


[GENERAL] tsearch strategy for incremental search

I am trying to implement an incremental search engine. The service
should start searching when the user has typed at least 3 characters.
I am thinking of using the following strategy:

a) Create a function string_to_three_char_tsvector(str text) that
would generate the tsvector composed of the three-letter lexemes that
begin all the admissible words within the parameter str.
b) Using this function, create an indexed tsvector column: three_char_index.
c) Given the query string query_string (assume query_string contains
at least 3 characters):
SELECT *
FROM mytable, plainto_tsquery((string_to_three_char_tsvector(query_string))::text)
AS query
WHERE three_char_index @@ query
AND text_field LIKE '%' || query_string || '%';

Once I've narrowed the field of possibilities down to the correct
3-letter lexemes, there are fewer than 100 lines to search through
with LIKE. I could even repeat the exercise with 4-letter lexemes if
these numbers were to grow or if I needed the extra boost in
performance.

So, two questions to postgres/tsearch experts:

1) Does that seem like a decent overall strategy?

2) About the function string_to_three_char_tsvector(text), I cannot
think of an elegant way of writing this. Is it possible to do better
than the following:

str => cast to tsvector => cast to text => for each lexeme-string,
take first-three-char substring => concat back together => cast to
tsvector

Is there a nice way of performing the middle operation? Like
splitting the string to an array...
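
(One possibility, as a sketch: the text form of a tsvector is a
space-separated list of quoted lexemes, so after trimming the outer
quotes you can split on quote-space-quote, truncate each piece, and
glue the result back together in plain SQL, with no PL/pgSQL loop.
Untested; regexp_split_to_table is 8.3+:)

  CREATE OR REPLACE FUNCTION string_to_three_char_tsvector(str text)
  RETURNS tsvector AS $$
    SELECT strip(to_tsvector('simple',
      array_to_string(ARRAY(
        -- one row per lexeme; keep the first three characters
        SELECT substring(t.item for 3)
        FROM regexp_split_to_table(
               trim(both '''' from strip(to_tsvector('simple', $1))::text),
               ''' ''') AS t(item)
      ), ' ')
    ));
  $$ LANGUAGE sql IMMUTABLE;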


Re: [pgsql-es-ayuda] indices

Note that in neither query are you filtering on any column, so the query sweeps the whole table (a full scan).
In this particular case creating the index doesn't help you at all.
It would be a different story if the query had something like "where ddd > something" or a similar condition (see the sketch below)....
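
(A sketch; the literal '099' is a made-up value:)

  -- an aggregate over every row cannot use the index:
  SELECT ddd, count(ddd) AS cantidad FROM tabla GROUP BY 1 ORDER BY 1;
  -- a selective filter can:
  EXPLAIN SELECT count(*) FROM tabla WHERE ddd = '099';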

Regards, Pablo


On June 29, 2008 at 21:37, Sandrigo L.Jara <sandrigo.lezcano@gmail.com> wrote:
I migrated 22 million records into a table in PostgreSQL 8.2;
ddds, numbers, etc.; the ddds are of type character varying(3).
When I execute:

-- without indexes
-- Executing query:
SELECT ddd, count(ddd) AS cantidad FROM tabla GROUP BY 1 ORDER BY 1
Total query runtime: 432421 ms.
65 rows retrieved.

-- Executing query:
CREATE INDEX tabla_idx_ddd ON tabla (ddd)
Query returned successfully with no result in 496755 ms.

again, now with the index...

-- Executing query:
SELECT ddd, count(ddd) AS cantidad FROM tabla GROUP BY 1 ORDER BY 1
Total query runtime: 404579 ms.
65 rows retrieved.

it should be much!!!! faster.. no???
I'm grateful for any suggestions.

Hugs to everyone



--
Pablo Marrero
Tel: 099 927 566

[pgsql-es-ayuda] indices

I migrated 22 million records into a table in PostgreSQL 8.2;
ddds, numbers, etc.; the ddds are of type character varying(3).
When I execute:

-- without indexes
-- Executing query:
SELECT ddd, count(ddd) AS cantidad FROM tabla GROUP BY 1 ORDER BY 1
Total query runtime: 432421 ms.
65 rows retrieved.

-- Executing query:
CREATE INDEX tabla_idx_ddd ON tabla (ddd)
Query returned successfully with no result in 496755 ms.

again, now with the index...

-- Executing query:
SELECT ddd, count(ddd) AS cantidad FROM tabla GROUP BY 1 ORDER BY 1
Total query runtime: 404579 ms.
65 rows retrieved.

it should be much!!!! faster.. no???
I'm grateful for any suggestions.

Hugs to everyone

Re: [pgsql-es-ayuda] [OT] Cubans on the list

Greetings. I am not an expert, but I can assure you that through this
channel I have managed to clear up many doubts, and one always learns a
great deal from the opinions of everyone who, in one way or another,
takes part in the forum, offering advice and supporting those who, like
us, started out one day.

Thank you very much, and of course I will keep you in mind; it goes
without saying that you can count on my help.


----- Original Message -----
From: "Gabriel Hermes Colina Zambra" <hermeszambra@yahoo.com>
To: "Reynier Perez Mira" <rperezm@uci.cu>; <pgsql-es-ayuda@postgresql.org>;
"Edwin Quijada" <listas_quijada@hotmail.com>; "Juan Carlos Badillo Goy"
<badillo@cav.desoft.cu>
Sent: Sunday, June 29, 2008 7:53 PM
Subject: Re: [pgsql-es-ayuda] [OT] Cubans on the list

Dear Juan Carlos, in PostgreSQL I am only middling, and lately I
confess I have fallen short on helping out, but I am always willing to
collaborate; in PHP I confess my ignorance, apart from two or three
small things I have implemented.

Since I come from Visual Basic, gambas is the project I adopted, and I
like it a lot, above all because it is properly object-oriented.

If I can be of any use, I am here to lend you a hand. My gtalk is
ghcolina@gmail.com

Regards,
Gabriel Colina


Re: [pgsql-es-ayuda] [OT] Cubans on the list

Dear Juan Carlos, in PostgreSQL I am only middling, and lately I confess I have fallen short on helping out, but I am always willing to collaborate; in PHP I confess my ignorance, apart from two or three small things I have implemented.

Since I come from Visual Basic, gambas is the project I adopted, and I like it a lot, above all because it is properly object-oriented.

If I can be of any use, I am here to lend you a hand. My gtalk is ghcolina@gmail.com

Regards,
Gabriel Colina


Re: [pgsql-es-ayuda] [OT] Cubans on the list

Greetings, Reynier.

I work at Desoft Ciego de Avila and I am the head of the Document
Management with Free Software project. I am due to start working with
Faculty 10, based there at the UCI, helping them get up to speed on
PostgreSQL so they can support us in developing AvilaDOC, a PHP and
PostgreSQL system. The point is that you can count on my help once I
join you there at the UCI in the coming days.

Let's hope PostgreSQL followers keep signing up.


----- Original Message -----
From: "Reynier Perez Mira" <rperezm@uci.cu>
To: <hermeszambra@yahoo.com>; <pgsql-es-ayuda@postgresql.org>; "Edwin
Quijada" <listas_quijada@hotmail.com>
Sent: Sunday, June 29, 2008 7:20 PM
Subject: RE: [pgsql-es-ayuda] [OT] Cubans on the list


Hello Gabriel and Edwin:
I don't know what this thread is about, but let's clarify a few things
Gabriel said.

I work at the UCI (University of Informatics Sciences) in Cuba, and I am
not directly in charge of the production projects but of almost all of
the technology side, which includes PostgreSQL as the DBMS. We used to
use MySQL, and a few people used PostgreSQL in isolation, but since the
USA is the USA, now that Sun has bought MySQL we cannot download
anything from Sun's servers, because they really do enforce the policy.
It has therefore been decided to migrate all our systems from MySQL, and
the few that remain today on SQL Server, to PostgreSQL.

As Gabriel says, as we learn PostgreSQL we can help those who are just
starting out, because I am living proof: thanks to the PostgreSQL list
in Spanish and everyone who helps here, I have been able to reach a
basic level of PostgreSQL, since I took the DBMS course in the second
year of my studies.

Cheers, and I await Edwin's reply.

Note: I had not seen the message before, otherwise I would have replied
immediately.

Ing. Reynier Pérez Mira
Development Support Group - IP Technical Directorate

> Get in touch with our friend Reynier; reynierpm@gmail.com is his gtalk. He
> is from the UCI (Cuban University of Informatics) and will surely be
> interested, since he is one of the people in charge of the university's
> production projects, which by the way adopted postgresql. I think before
> long they are going to contribute a great deal to us, since for them free
> software is a matter of militancy.
>
> A hug from Uruguay
> Gabriel Colina

Re: [GENERAL] libpq block allocated before my malloc handler inits?

rob wrote:
> I am trying to build a small program with libpq as the interface to a
> Postgre database. I am using the most current version. My program
> uses malloc and free hooks to manage memory without having to request
> memory from the system all the time. I expected that the init
> function (__malloc_initialize_hook) would run before anything else,
> but after opening a number of connections to the database with
> PQconnectdb, my program blows up because of a free which refers to a
> block of memory that wasn't allocated using my malloc function. My
> program runs without a hitch if I comment out the PQconnectdb function
> calls.
I've experienced a bug recently involving OpenSSL, libpq, and another
library using SSL. Do you get the same crash if you explicitly disable
SSL in the connection string? sslmode=disable.
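
(For example, a connection string of this shape; the host, database,
and user are made up:)

  host=localhost dbname=mydb user=myuser sslmode=disable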

Thanks

Russell


Re: [pgsql-es-ayuda] Postgres and JDBC

Edgar Enriquez wrote:
> The question is whether it is possible to send md5 in a connection
> string with postgresql-8.2-508.jdbc4?

I don't understand the question. If the server specifies the md5 method
in pg_hba.conf, then the password will be sent md5-hashed, regardless of
how you put it in the source code. (Unlike the password method in
pg_hba.conf, which makes the password be sent as unencrypted text.)

The documentation says:
http://www.postgresql.org/docs/8.3/static/auth-pg-hba-conf.html
md5

Require the client to supply an MD5-encrypted password for
authentication. See Section 21.2.2 for details.

password

Require the client to supply an unencrypted password for
authentication. Since the password is sent in clear text over the
network, this should not be used on untrusted networks. It also does
not usually work with threaded client applications. See Section
21.2.2 for details.


Section 21.2.2 is
http://www.postgresql.org/docs/8.3/static/auth-methods.html#AUTH-PASSWORD
and it says the following:

The password-based authentication methods are md5, crypt, and
password. These methods operate similarly except for the way
that the password is sent across the connection: respectively,
MD5-hashed, crypt-encrypted, and clear-text. A limitation is
that the crypt method does not work with passwords that have
been encrypted in pg_authid.

If you are at all concerned about password "sniffing" attacks
then md5 is preferred, with crypt to be used only if you must
support pre-7.2 clients. Plain password should be avoided
especially for connections over the open Internet (...)
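
(For example, a pg_hba.conf line of this shape, with a made-up address
range, makes the server demand an MD5-hashed password from the JDBC
client:)

  # TYPE  DATABASE  USER  CIDR-ADDRESS      METHOD
  host    all       all   192.168.0.0/24    md5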

--
Alvaro Herrera

http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

RE: [pgsql-es-ayuda] [OT] Cubans on the list

Hello Gabriel and Edwin:
I don't know what this thread is about, but let's clarify a few things Gabriel said.

I work at the UCI (University of Informatics Sciences) in Cuba, and I am not directly in charge of the production projects but of almost all of the technology side, which includes PostgreSQL as the DBMS. We used to use MySQL, and a few people used PostgreSQL in isolation, but since the USA is the USA, now that Sun has bought MySQL we cannot download anything from Sun's servers, because they really do enforce the policy. It has therefore been decided to migrate all our systems from MySQL, and the few that remain today on SQL Server, to PostgreSQL.

As Gabriel says, as we learn PostgreSQL we can help those who are just starting out, because I am living proof: thanks to the PostgreSQL list in Spanish and everyone who helps here, I have been able to reach a basic level of PostgreSQL, since I took the DBMS course in the second year of my studies.

Cheers, and I await Edwin's reply.

Note: I had not seen the message before, otherwise I would have replied immediately.

Ing. Reynier Pérez Mira
Development Support Group - IP Technical Directorate

> Get in touch with our friend Reynier; reynierpm@gmail.com is his gtalk. He
> is from the UCI (Cuban University of Informatics) and will surely be
> interested, since he is one of the people in charge of the university's
> production projects, which by the way adopted postgresql. I think before
> long they are going to contribute a great deal to us, since for them free
> software is a matter of militancy.
>
> A hug from Uruguay
> Gabriel Colina

RE: [pgsql-es-ayuda] Return Query

> Date: Sun, 29 Jun 2008 00:29:57 -0500
> From: jcasanov@systemguards.com.ec
> To: listas_quijada@hotmail.com
> Subject: Re: [pgsql-es-ayuda] Return Query
> CC: pgsql-es-ayuda@postgresql.org
>
> On Sat, Jun 28, 2008 at 12:26 PM, Edwin Quijada
> wrote:
>>>>
>>>> I get the following error:
>>>> ERROR: a column definition list is required for functions returning "record"
>>>>
>>>
>>> create or replace function f1(out id_record integer, out alias text)
>>> returns setof record as
>>> $$
>>> begin
>>> return query select id_record,alias from uno_email_alias;
>>> end;
>>> $$ language plpgsql;
>>>
>> Jaime, how do I call it?
>
> select * from f1();

I get this error when I do it this way

ERROR: structure of query does not match function result type
CONTEXT: PL/pgSQL function "f1" line 2 at RETURN QUERY

*-------------------------------------------------------*
*-Edwin Quijada
*-Developer DataBase
*-JQ Microsistemas
*-809-849-8087

* " Si deseas lograr cosas excepcionales debes de hacer cosas fuera de lo comun"
*-------------------------------------------------------*

>
> --
> Regards,
> Jaime Casanova
> PostgreSQL support and training
> Guayaquil - Ecuador
> Cel. (593) 87171157


Re: [GENERAL] libpq block allocated before my malloc handler inits?

rob wrote:

> Is there something that runs before my malloc init function in libpq?
> If there is, is there a way to make it happen afterwards? Is there a
> library interface which doesn't have hidden init functions?

I don't see anything in our code that would initialize stealthily.
Maybe it's something we're being linked against -- pthread perhaps?
OpenSSL? Maybe you'd have more luck if you noted what libpq is doing
when this invalid free is called. Perhaps call abort() and get a stack
trace from the resulting dump.

--
Alvaro Herrera

http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.


Re: [pgsql-es-ayuda] [OT] Cubans on the list

--- On Sat, 28-Jun-08, Edwin Quijada <listas_quijada@hotmail.com> wrote:

> From: Edwin Quijada <listas_quijada@hotmail.com>
> Subject: [pgsql-es-ayuda] [OT] Cubans on the list
> To: pgsql-es-ayuda@postgresql.org
> Date: Saturday, June 28, 2008, 4:08 pm
> Is there any Cuban around here?
> I have a gift for you; contact me before Monday. But only
> people living in Cuba.
>
Edwin.

Get in touch with our friend Reynier; reynierpm@gmail.com is his gtalk. He is from the UCI (Cuban University of Informatics) and will surely be interested, since he is one of the people in charge of the university's production projects, which by the way adopted postgresql. I think before long they are going to contribute a great deal to us, since for them free software is a matter of militancy.

A hug from Uruguay
Gabriel Colina



Re: [PERFORM] sequence scan problem

Oh, and the version is 8.3.3.

Jeremy Harris wrote:
> John Beaver wrote:
>> I'm having a strange problem with a query. The query is fairly
>> simple, with a few constants and two joins. All relevant columns
>> should be indexed, and I'm pretty sure there aren't any type
>> conversion issues. But the query plan includes a fairly heavy seq
>> scan. The only possible complication is that the tables involved are
>> fairly large - hundreds of millions of rows each.
>>
>> Can anyone explain this? There should only ever be a maximum of about
>> 50 rows returned when the query is executed.
>
> You didn't say when you last vacuumed?
> If there should only be 50 rows returned then the estimates from the
> planner are way out.
>
> If that doesn't help, we'll need version info, and (if you can afford
> the time) an "explain analyze"
>
> Cheers,
> Jeremy
>


Re: [BUGS] BUG #4274: uuid returns duplicate values

Hi.

I'm sorry for the delay; the bugfix did not make it into pg8.3.3.
Please try this.

http://winpg.jp/~saito/pg_work/OSSP_win32/

Regards,
Hiroshi Saito

>
>The following bug has been logged online:
>
>Bug reference: 4274
>Logged by: eric melbardis
>Email address: eric.melbardis@netkitsolutions.com
>PostgreSQL version: 8.3.3
>Operating system: windows xp sp3
>Description: uuid returns duplicate values
>Details:
>
>The uuid functions do not return unique values if used sequentially, or, I
>believe, without too much time in between invocations.
>
>The following test function returns the same value every time!
>
>------------ test function ---------------
>create or replace function test_uuid()
>returns varchar as
>$body$
>begin
> raise notice 'uuid = %', uuid_generate_v4();
> raise notice 'uuid = %', uuid_generate_v4();
> raise notice 'uuid = %', uuid_generate_v4();
> raise notice 'uuid = %', uuid_generate_v4();
> return null; -- required: the function is declared RETURNS varchar
>end;
>$body$
>language 'plpgsql';
>
>--
>Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
>To make changes to your subscription:
>http://www.postgresql.org/mailpref/pgsql-bugs
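
A quick way to reproduce the report from psql, assuming the uuid-ossp
module's uuid_generate_v4() is installed: the two columns should differ
on every row, while the affected Windows build reportedly returns
identical values.

select uuid_generate_v4() as a, uuid_generate_v4() as b
from generate_series(1, 5);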

--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs

Re: [PERFORM] sequence scan problem

Jeremy Harris wrote:
> John Beaver wrote:
>> I'm having a strange problem with a query. The query is fairly
>> simple, with a few constants and two joins. All relevant columns
>> should be indexed, and I'm pretty sure there aren't any type
>> conversion issues. But the query plan includes a fairly heavy seq
>> scan. The only possible complication is that the tables involved are
>> fairly large - hundreds of millions of rows each.
>>
>> Can anyone explain this? There should only ever be a maximum of about
>> 50 rows returned when the query is executed.
>
> You didn't say when you last vacuumed?
I ran 'vacuum analyze' on both tables directly after I finished building
them, and I haven't updated their contents since.
> If there should only be 50 rows returned then the estimates from the
> planner are way out.
>
> If that doesn't help, we'll need version info, and (if you can afford
> the time) an "explain analyze"
Sure, I'm running it now. I'll send the results when it's done, but yes,
it could take a while.
>
> Cheers,
> Jeremy
>

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

Re: [PERFORM] sequence scan problem

John Beaver wrote:
> I'm having a strange problem with a query. The query is fairly simple,
> with a few constants and two joins. All relevant columns should be
> indexed, and I'm pretty sure there aren't any type conversion issues.
> But the query plan includes a fairly heavy seq scan. The only possible
> complication is that the tables involved are fairly large - hundreds of
> millions of rows each.
>
> Can anyone explain this? There should only ever be a maximum of about 50
> rows returned when the query is executed.

You didn't say when you last vacuumed?
If there should only be 50 rows returned then the estimates from the
planner are way out.

If that doesn't help, we'll need version info, and (if you can afford
the time) an "explain analyze"

Cheers,
Jeremy

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

[PERFORM] sequence scan problem

I'm having a strange problem with a query. The query is fairly simple,
with a few constants and two joins. All relevant columns should be
indexed, and I'm pretty sure there aren't any type conversion issues.
But the query plan includes a fairly heavy seq scan. The only possible
complication is that the tables involved are fairly large - hundreds of
millions of rows each.

Can anyone explain this? There should only ever be a maximum of about 50
rows returned when the query is executed.

Query:

select fls.function_verified, fls.score, fls.go_category_group_ref,
       fs1.gene_ref, fs1.function_verified_exactly,
       fs2.gene_ref, fs2.function_verified_exactly
from functional_linkage_scores fls,
     gene_prediction_view fs1,
     gene_prediction_view fs2
where fls.gene_ref1 = fs1.gene_ref
  and fls.gene_ref2 = fs2.gene_ref
  and fs1.go_term_ref = 2
  and fs2.go_term_ref = 2

Explain on query:
Merge Join  (cost=1331863800.16..6629339921.15 rows=352770803726 width=22)
  Merge Cond: (fs2.gene_ref = fls.gene_ref2)
  ->  Index Scan using gene_prediction_view_gene_ref on gene_prediction_view fs2  (cost=0.00..6235287.98 rows=197899 width=5)
        Index Cond: (go_term_ref = 2)
  ->  Materialize  (cost=1331794730.41..1416453931.72 rows=6772736105 width=21)
        ->  Sort  (cost=1331794730.41..1348726570.67 rows=6772736105 width=21)
              Sort Key: fls.gene_ref2
              ->  Merge Join  (cost=38762951.04..146537410.33 rows=6772736105 width=21)
                    Merge Cond: (fs1.gene_ref = fls.gene_ref1)
                    ->  Index Scan using gene_prediction_view_gene_ref on gene_prediction_view fs1  (cost=0.00..6235287.98 rows=197899 width=5)
                          Index Cond: (go_term_ref = 2)
                    ->  Materialize  (cost=38713921.60..41618494.20 rows=232365808 width=20)
                          ->  Sort  (cost=38713921.60..39294836.12 rows=232365808 width=20)
                                Sort Key: fls.gene_ref1
                                ->  Seq Scan on functional_linkage_scores fls  (cost=0.00..3928457.08 rows=232365808 width=20)


\d on functional_linkage_scores (232241678 rows):
              Table "public.functional_linkage_scores"
        Column         |     Type      |                        Modifiers
-----------------------+---------------+------------------------------------------------------------------------
 id                    | integer       | not null default nextval('functional_linkage_scores_id_seq'::regclass)
 gene_ref1             | integer       | not null
 gene_ref2             | integer       | not null
 function_verified     | boolean       | not null
 score                 | numeric(12,4) | not null
 go_category_group_ref | integer       | not null
 go_term_ref           | integer       |
Indexes:
    "functional_linkage_scores_pkey" PRIMARY KEY, btree (id)
    "functional_linkage_scores_gene_ref1_key" UNIQUE, btree (gene_ref1, gene_ref2, go_category_group_ref, go_term_ref)
    "ix_functional_linkage_scores_gene_ref2" btree (gene_ref2)
Foreign-key constraints:
    "functional_linkage_scores_gene_ref1_fkey" FOREIGN KEY (gene_ref1) REFERENCES genes(id)
    "functional_linkage_scores_gene_ref2_fkey" FOREIGN KEY (gene_ref2) REFERENCES genes(id)
    "functional_linkage_scores_go_category_group_ref_fkey" FOREIGN KEY (go_category_group_ref) REFERENCES go_category_groups(id)

\d on gene_prediction_view (568654245 rows):
                          Table "public.gene_prediction_view"
              Column               |          Type          |                         Modifiers
-----------------------------------+------------------------+-------------------------------------------------------------------
 id                                | integer                | not null default nextval('gene_prediction_view_id_seq'::regclass)
 gene_ref                          | integer                | not null
 go_term_ref                       | integer                | not null
 go_description                    | character varying(200) | not null
 go_category                       | character varying(50)  | not null
 function_verified_exactly         | boolean                | not null
 function_verified_with_parent_go  | boolean                | not null
 score                             | numeric(12,4)          | not null
 prediction_method_ref             | integer                |
 functional_score_ref              | integer                |
Indexes:
    "gene_prediction_view_pkey" PRIMARY KEY, btree (id)
    "gene_prediction_view_functional_score_ref_key" UNIQUE, btree (functional_score_ref)
    "gene_prediction_view_gene_ref" UNIQUE, btree (gene_ref, go_term_ref, prediction_method_ref)
Foreign-key constraints:
    "gene_prediction_view_functional_score_ref_fkey" FOREIGN KEY (functional_score_ref) REFERENCES functional_scores(id)
    "gene_prediction_view_gene_ref_fkey" FOREIGN KEY (gene_ref) REFERENCES genes(id)
    "gene_prediction_view_go_term_ref_fkey" FOREIGN KEY (go_term_ref) REFERENCES go_terms(term)

...and just in case someone can give advice on more aggressive settings
that might help out the planner for this particular computer...
This computer: Mac Pro / 4 gigs ram / software Raid 0 across two hard
drives.
Production computer: Xeon 3ghz / 32 gigs ram / Debian
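
Given the planner's estimate of roughly 352 billion result rows against
the expected ~50, a minimal sketch of the usual first steps (only the
table and column names come from this thread; the statistics target of
100 is an illustrative value, not a recommendation):

-- Refresh the statistics the planner works from.
vacuum analyze functional_linkage_scores;
vacuum analyze gene_prediction_view;

-- Collect a finer-grained histogram for the selective filter column,
-- then re-analyze and compare the estimate against reality.
alter table gene_prediction_view alter column go_term_ref set statistics 100;
analyze gene_prediction_view;
explain analyze
select count(*) from gene_prediction_view where go_term_ref = 2;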


--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

[COMMITTERS] pgscript - pgScript: Updated documentation and TODO

Log Message:
-----------
Updated documentation and TODO

Modified Files:
--------------
pgScript/doc:
docbook.css (r1.1 -> r1.2)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgscript/pgScript/doc/docbook.css.diff?r1=1.1&r2=1.2)
INDEX.html (r1.1 -> r1.2)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgscript/pgScript/doc/INDEX.html.diff?r1=1.1&r2=1.2)
INDEX.xml (r1.1 -> r1.2)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgscript/pgScript/doc/INDEX.xml.diff?r1=1.1&r2=1.2)
pgScript:
TODO (r1.1 -> r1.2)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgscript/pgScript/TODO.diff?r1=1.1&r2=1.2)

Removed Files:
-------------
pgScript/doc:
README
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgscript/pgScript/doc/README)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

Re: [GENERAL] freebsd + postgresql 8.3.3 = 100% of cpu usage on stats collector?

On Sat, 28 Jun 2008 11:35:24 +0200
hubert depesz lubaczewski <depesz@depesz.com> wrote:

> On Sat, Jun 28, 2008 at 02:22:26AM -0300, Rodrigo Gonzalez wrote:
> > So, read man page, and at least ktrace -C and read the output at
> > ktrace.out file.
>
> OK, I've read it and didn't understand: it says how to disable tracing,
> but it doesn't say anything about enabling tracing.

Use "ktrace -p [pid]" to start tracing. Then use "ktrace -C" to stop
tracing. Trace data is dumped in binary format to the file ktrace.out
(unless you use the -f option to specify another file).

Use the kdump utility to convert the ktrace.out file to something usable.
Something like "kdump > ktrace.txt" will probably get you what you want,
assuming your ktrace file is ktrace.out.

--
Bill Moran <wmoran@collaborativefusion.com>

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

[COMMITTERS] pgsql: Remove unnecessary coziness of GIN code with datum copying.

Log Message:
-----------
Remove unnecessary coziness of GIN code with datum copying. Now that
space is tracked via GetMemoryChunkSpace, there's really no advantage
to duplicating datumCopy's innards here. This is one bit of my toast
indirection patch that should go in anyway.

Modified Files:
--------------
pgsql/src/backend/access/gin:
ginbulk.c (r1.11 -> r1.12)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/backend/access/gin/ginbulk.c?r1=1.11&r2=1.12)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

[HACKERS] WIP patch: reducing overhead for repeat de-TOASTing

Attached is a worked-out patch for the approach proposed here:
http://archives.postgresql.org/pgsql-hackers/2008-06/msg00777.php
namely, that cache management for de-TOASTed datums is handled
by TupleTableSlots.

To avoid premature detoasting of values that we might never need, the
patch introduces a concept of an "indirect TOAST pointer", which has
the same 0x80 or 0x01 header as an external TOAST pointer, but can
be told apart by having a different length byte. Within that we have
* pointer to original toasted field within the Slot's tuple
* pointer to the owning Slot
* pointer to decompressed copy, or NULL if not decompressed yet
Some fairly straightforward extensions to the TupleTableSlot code,
heaptuple.c, and tuptoaster.c make it all go.

My original thoughts had included turning FREE_IF_COPY() into a no-op,
but on investigation that seems impractical. One case that still
depends on that pfree is where we have palloc'd a 4-byte-header copy
of a short-header datum to support code that needs properly aligned
datum content. The solution adopted in the patch is to arrange for
pfree() applied to a cacheable detoasted object to be a no-op, whereas
it still works normally for non-cached detoasted objects. We do this
by inserting a dummy chunk header that points to a dummy memory context
whose pfree support method does nothing. I think this part of the patch
would be required for any toast caching method, not just this one.

What I like about this patch is that it's a fairly small-footprint
change, it doesn't add much overhead, and it covers caching of
decompression for in-line-compressed datums as well as the out-of-line
case.

One thing I really *don't* like about it is that it requires everyplace
that copies Datums to know about indirect pointers: in general, the copy
must be a copy of the original toasted Datum, not of the indirect
pointer, else we have indirect pointers that can outlive their owning
TupleTableSlot (or at least outlive its current tuple cycle). There
only seem to be about half a dozen such places in the current code,
but still it seems a rather fragile coding rule.

After playing with it for a little bit, I'm not convinced that it buys
enough performance win to be worth applying --- the restriction of cache
lifespan to one tuple cycle of a TupleTableSlot is awfully restrictive.
(For example, sorts that involve toasted sort keys continue to suck,
because the tuples being sorted aren't in Slots.) It would probably
fix the specific case that the PostGIS hackers were complaining of,
but I think we need something more.

Still, I wanted to get it into the archives because the idea of indirect
toast pointers might be useful for something else.

regards, tom lane

Re: [pgsql-es-ayuda] Postgres and jdbc

The question is whether it is possible to send md5 in a connection string with postgresql-8.2-508.jdbc4?

----- Original Message ----
From: Neil Peter Braggio <pbraggio@gmail.com>
To: lennin.caro@yahoo.com
CC: postgres list <pgsql-es-ayuda@postgresql.org>; Edgar Enriquez <edgarpostgres@yahoo.es>
Sent: Saturday, June 28, 2008, 13:28:03
Subject: Re: Postgres and jdbc

I would like to know how you _decrypt_ (decipher would be the correct
word) a character string obtained as the output of an MD5 function.

Let me put it more plainly: I would like to know how you recover the
original string from an MD5 string produced by an MD5 function.
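
For context, PostgreSQL's own md5() function behaves the same way: it
produces a fixed one-way digest with no inverse. A quick illustration
in psql:

select md5('swing');
-- always returns the same 32-character hex digest; there is no
-- function that recovers the original string 'swing' from it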

----
Neil Peter Braggio
pbraggio@gmail.com


On Sat, Jun 28, 2008 at 9:58 AM, Lennin Caro <lennin.caro@yahoo.com> wrote:
> You can keep the password in a text file; you can encrypt this
> password with md5 and decrypt it in your application when you pass it
> to getConnection.
>
> The other option is to have it entered in a dialog or passed directly
> as a parameter when the application is launched.
>
> --- On Fri, 6/27/08, Edgar Enriquez <edgarpostgres@yahoo.es> wrote:
>
> From: Edgar Enriquez <edgarpostgres@yahoo.es>
> Subject: [pgsql-es-ayuda] Postgres and jdbc
> To: "postgres list" <pgsql-es-ayuda@postgresql.org>
> Date: Friday, June 27, 2008, 1:37 PM
>
> Greetings, friends of the list. I have a problem: I don't know how
> to connect a Java application to Postgres using jdbc
> (postgresql-8.2-508.jdbc4) without having to store the user's
> password. I use this code:
>
> try {
>     Class.forName("org.postgresql.Driver");
>     con = DriverManager.getConnection(
>         "jdbc:postgresql://192.168.100.126:5432/Migration",
>         "migration", "swing");
>     System.out.println("PostgreSQL OK");
> }
> catch (Exception e) {
>     System.out.println(URL + " " + utilisateur + " " + motDePass);
>     System.out.println("Could not load the PostgreSQL driver");
>     erreur = 1;
> }
>
> The problem is that I am forced to store the username "migration" and
> the password "swing" in the application. Does anyone have a
> suggestion?
>
> Note: this is an application for migrating certain data from one
> database to another, but I need it to be transparent to the user. The
> Migration database is my control database; that is, I don't want the
> user to know that they are connecting to a database where I am keeping
> track of what they do, since I work with people who want to show that
> moving from multiple Access databases (one per client) to a single
> Postgres database is not feasible.
>
> Thanks




Re: [HACKERS] TODO item: Allow data to be pulled directly from indexes

Gregory Stark wrote:
>> (1) & (4) require an UPDATE or DELETE to twiddle the old index tuple. Tom has
>> noted (in the linked message) that this is not reliable if the index has any
>> expression-valued columns, because it is not always possible to find the old
>> index entry. For this reason, the proposed patch does not keep visibility
>> metadata for indexes on expressions. This seems like a reasonable limitation
>> --- indexed expressions are just less efficient.
>>
>
> Or if the index operators and btproc aren't nearly as immutable as they claim.
> Probably less likely than non-immutable index expressions but also possible.
>
>
Your point is well taken... I'll have to look into that more.

>> (2) & (3) can work for any index, and they are quite elegant in the way that
>> the overhead does not change with the number of indexes. The TODO also notes
>> the benefit of (2) for efficient vacuuming. Thus, I think that (2) is a great
>> idea in general, but it does not serve the intended purpose of this TODO item.
>> Once a page gets marked as requiring visibility checks, it cannot be unmarked
>> until the next VACUUM. The whole point of this feature is that we are willing
>> to be more proactive during updates in order to make index access more
>> efficient.
>>
>
> Well I think that's precisely the point. If you're trading off work done at
> update time against work done for index accesses then you're only going to win
> if the tuples are relatively static and have lots of accesses done against
> them between updates. In which case having the optimization only kick in when
> the page has been static for long enough that all the tuples are globally
visible should be good enough.
I really don't understand this point. The way I see the visibility map
working is as follows: we set a page to "requires visibility check" when
a tuple on the page is inserted, deleted, or non-HOT updated. If the
only modifications have been inserts, we can reset the status to "all
tuples visible" when these tuples become universally visible, which
matches your description. But in the presence of deletes and updates, we
can only reset the status of a page after a VACUUM (I know that dead HOT
tuples can be pruned without VACUUM, but I don't think that's the case
for indexed tuples). We can't reset the status earlier because we don't
know what indexes still have pointers to the dead tuples. So a page can
be static indefinitely (after a single modification) without ever
getting to enjoy the optimization.

This is a really important point, so please let me know if I'm missing
something.

Thanks for your response!
Karl

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [PERFORM] Sources of information about sizing of hardwares to run PostgreSQL

On Fri, 27 Jun 2008, Sérgio R F Oliveira wrote:

> I need to specify servers and storage to run PostgreSQL. Does anyone
> know any source of information (articles, presentations, books, etc.)
> which describes methods of hardware sizing for running a large
> PostgreSQL installation?

There aren't any, just a fair number of people who know how to do it and
some scattered bits of lore on the subject. The quickest way to get some
sort of estimate that is actually useful is to create a small prototype of
some tables you expect will be the larger ones for the application, load
some data into them, measure how big they are, and then extrapolate from
there. I'm dumping links and notes on the subject of measurements like
that at http://wiki.postgresql.org/wiki/Disk_Usage, which should get you
started with such a simulation.
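
For example, a minimal sketch of such a prototype (the table definition
and the 100,000-row sample are purely illustrative):

-- Prototype one of the tables expected to be large, and load sample data.
create table orders_prototype (
    id        serial primary key,
    customer  integer not null,
    placed_at timestamp not null,
    note      text
);

insert into orders_prototype (customer, placed_at, note)
select i % 10000, now(), 'sample row ' || i::text
from generate_series(1, 100000) as g(i);

analyze orders_prototype;

-- Size of the table plus its indexes and TOAST data; extrapolate
-- linearly from 100,000 rows to the expected production row count.
select pg_size_pretty(pg_total_relation_size('orders_prototype'));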

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

Re: [PERFORM] A guide/tutorial to performance monitoring and tuning

On Fri, 27 Jun 2008, Nikhil G. Daddikar wrote:

> I have been searching the net for how to tune and monitor the
> performance of my postgresql server, but without success. A lot of the
> information is vague, and more often than not the answer is "it depends".

That's because it does depend. I collect the best of the resources out
there and keep track of them at
http://wiki.postgresql.org/wiki/Performance_Optimization, so if you didn't
find that yet there are probably some good ones you missed.

Right now I'm working with a few other people to put together a more
straightforward single intro guide that should address some of the
vagueness you point out here, but that's still a few weeks away from being
ready.

Monitoring performance isn't really covered in any of this though. Right
now the best simple solution out there is probably Nagios with the
PostgreSQL plug-in.

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

Re: [GENERAL] compiling, performance of PostGreSQL 8.3 on 64-bit processors

On Thu, 26 Jun 2008, Adam Rich wrote:

> Is there any benefit to running a 32-bit OS (rhel 5 in this case) on a
> server with more than 4 GB of memory?

If you have more than 3GB of memory, you should be using a 64-bit OS.
While theoretically the 32-bit code might be smaller, which has some
advantages, in practice the 64-bit versions will be faster.

> For people with experience running postgresql on systems with 16+ GB of
> memory, what parameter settings have you found to be effective? (This
> would be a large database that's mostly read-only that we'd like to fit
> completely in memory)

Much larger values for shared_buffers and work_mem seem to be the most
effective way to use larger amounts of memory. For example, if you've got
1GB of RAM, it can be hard to allocate >15% of it to shared_buffers while
leaving enough RAM for OS-level operations, applications, etc.
But if you've got 16GB, a large read-only database might usefully set that
to 50% of RAM instead.
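
As a concrete illustration of that advice (the numbers are hypothetical,
not from this thread), a postgresql.conf for a 16GB read-mostly server
might look like:

shared_buffers = 8GB    # roughly 50% of RAM for a large read-only database
work_mem = 64MB         # per sort/hash operation; size against concurrency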

> Is it possible to backup (pg_dump) from a 32-bit OS to a 64-bit OS,
> or is a plain SQL dump necessary?

pg_dump produces a plain SQL dump; it's just a program that makes such
dumps easier to generate. You need to do this sort of dump/reload in
order to convert from a 32-bit to a 64-bit platform.
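
A minimal sketch of that conversion, assuming both servers are reachable
from one machine and the target database already exists on the 64-bit
side (hostnames and database name are hypothetical):

pg_dump -h old32bithost mydb | psql -h new64bithost mydb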

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [HACKERS] TODO item: Allow data to be pulled directly from indexes

"Karl Schnaitter" <karlsch@soe.ucsc.edu> writes:

"Karl Schnaitter" <karlsch@soe.ucsc.edu> writes:

> (1) & (4) require an UPDATE or DELETE to twiddle the old index tuple. Tom has
> noted (in the linked message) that this is not reliable if the index has any
> expression-valued columns, because it is not always possible to find the old
> index entry. For this reason, the proposed patch does not keep visibility
> metadata for indexes on expressions. This seems like a reasonable limitation
> --- indexed expressions are just less efficient.

Or if the index operators and btproc aren't nearly as immutable as they claim.
Probably less likely than non-immutable index expressions but also possible.

> I should mention there is a major flaw in the patch, because it puts pointers
> to HOT tuples in the index, in order to capture the different transaction ids
> in the chain. I think this can be fixed by only pointing to the root of the HOT
> chain, and setting xmin/xmax to the entire range of transaction ids spanned by
> the chain. I'm not sure about all the details (the ctid and some other bits
> also need to be set).

I think you can think of a HOT chain as a single tuple. The xmin of the head
is the xmin of the chain and the xmax of the tail is the xmax of the chain.
The xmin/xmax of the intermediate versions are only interesting for
determining *which* of the HOT versions to look at, but the index pointer
points to the whole chain.

> (2) & (3) can work for any index, and they are quite elegant in the way that
> the overhead does not change with the number of indexes. The TODO also notes
> the benefit of (2) for efficient vacuuming. Thus, I think that (2) is a great
> idea in general, but it does not serve the intended purpose of this TODO item.
> Once a page gets marked as requiring visibility checks, it cannot be unmarked
> until the next VACUUM. The whole point of this feature is that we are willing
> to be more proactive during updates in order to make index access more
> efficient.

Well I think that's precisely the point. If you're trading off work done at
update time against work done for index accesses then you're only going to win
if the tuples are relatively static and have lots of accesses done against
them between updates. In which case having the optimization only kick in when
the page has been static for long enough that all the tuples are globally
visible should be good enough.

The case where index visibility info might win over a visibility map might be
if the tuples are being heavily updated by long-lived transactions. In which
case they never sit globally visible for very long but having the xmin/xmax in
the index might avoid having to do a heap access for tuples which haven't been
committed yet.

As you seem to realize there has been a lot of discussion in this area
already. The visibility map looks like a much more popular direction.


--
Gregory Stark
EnterpriseDB

http://www.enterprisedb.com

Ask me about EnterpriseDB's RemoteDBA services!

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers