Monday, August 4, 2008

Re: [HACKERS] Automatic Client Failover

On Mon, 2008-08-04 at 22:56 -0400, Tom Lane wrote:
> Josh Berkus <josh@agliodbs.com> writes:
> > I think the proposal was for an extremely simple "works 75% of the time"
> > failover solution. While I can see the attraction of that, the
> > consequences of having failover *not* work are pretty severe.
>
> Exactly. The point of failover (or any other HA feature) is to get
> several nines worth of reliability. "It usually works" is simply
> not playing in the right league.

Why would you all presume that I haven't thought about the things you
mention? Where did I say "...and this would be the only feature required
for full and correct HA failover." The post is specifically about Client
Failover, as the title clearly states.

Your comments were illogical anyway: if it were so bad a technique, it would
not work for pgpool either, since pgpool is also a client. If pgpool can do
this, why can't another client? Why can't *all* clients?

With the other components correctly configured, the primary will shut down if
it is no longer the boss. The client will then be disconnected. If it
switches to its secondary connection, we can have an option to read
session_replication_role to ensure that it is set to origin. This
covers the case where the client has lost its connection to the primary,
which is still up, yet can reach the standby, which has not changed
state.

DB2, SQLServer and Oracle all provide this feature, BTW. We don't need
to follow, but we should do that consciously. I'm comfortable with us
deciding not to do it, if that is our considered judgement.
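
As a rough illustration of that check (a sketch only, assuming a server new
enough to have session_replication_role and a promoted primary that runs with
it set to 'origin'), the reconnecting client could issue something like:

-- hypothetical post-failover sanity check
SELECT current_setting('session_replication_role') = 'origin' AS on_origin;

If that returns false, the client has reached a standby that has not been
promoted and should not start writing there.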

--
Simon Riggs www.2ndQuadrant.com
PostgreSQL Training, Services and Support


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] Mini improvement: statement_cost_limit

Simon Riggs wrote:
> On Sun, 2008-08-03 at 22:09 +0200, Hans-Jürgen Schönig wrote:
>
>>> Another alternative would be to have a plugin that can examine the
>>> plan
>>> immediately after planner executes, so you can implement this
>>> yourself,
>>> plus some other possibilities.
>>>
>
>> this would be really fancy.
>> what would a plugin like that look like?
>
> Hmm...thinks: exactly like the existing planner_hook().
>
> So, rewrite this as a planner hook and submit as a contrib module.

Now that's a good idea!

I personally don't think this feature is a good idea, for all the
reasons others have mentioned, but as a pgfoundry project it can be
downloaded by those who want it, and perhaps prove its usefulness for
others as well.

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] NDirectFileRead and Write

Here is a patch to use the NDirectFileRead/Write counters to get I/O counts
in BufFile module. We can see the counters when log_statement_stats is on.

The information is different from trace_sort; trace_sort shows used blocks
in external sort, and log_statement_stats shows how many I/Os are submitted
during sorts.

I wrote:
> I'd like to use NDirectFileRead and NDirectFileWrite statistics counters
> for counting reads and writes in BufFile. They are defined, but not used
> now. BufFile is used for tuple sorting or materializing, so we could use
> NDirectFileRead/Write to retrieve how many I/Os are done in temp tablespace.

=# SET client_min_messages = log;
=# SET trace_sort = on;
=# SET log_statement_stats = on;
=# EXPLAIN ANALYZE SELECT * FROM generate_series(1, 1000000) AS i ORDER BY i;
LOG: begin tuple sort: nkeys = 1, workMem = 1024, randomAccess = f
LOG: switching to external sort with 7 tapes: CPU 0.09s/0.26u sec elapsed 0.35 sec
LOG: performsort starting: CPU 0.48s/1.68u sec elapsed 2.20 sec
LOG: finished writing final run 1 to tape 0: CPU 0.48s/1.70u sec elapsed 2.21 sec
LOG: performsort done: CPU 0.48s/1.70u sec elapsed 2.21 sec
LOG: external sort ended, 2444 disk blocks used: CPU 0.79s/2.23u sec elapsed 3.06 sec
LOG: QUERY STATISTICS
DETAIL: ! system usage stats:
! 3.078000 elapsed 2.234375 user 0.812500 system sec
! [3.328125 user 1.281250 sys total]
! buffer usage stats:
! Shared blocks: 0 read, 0 written, buffer hit rate = 0.00%
! Local blocks: 0 read, 0 written, buffer hit rate = 0.00%
! Direct blocks: 5375 read, 5374 written
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------
Sort (cost=62.33..64.83 rows=1000 width=4) (actual time=2221.485..2743.831 rows=1000000 loops=1)
Sort Key: i
Sort Method: external sort Disk: 19552kB
-> Function Scan on generate_series i (cost=0.00..12.50 rows=1000 width=4) (actual time=349.065..892.907 rows=1000000 loops=1)
Total runtime: 3087.305 ms
(5 rows)

Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center

PL/LOLCODE [was Re: [HACKERS] [PATCH] "\ef " in psql]

On Mon, Aug 04, 2008 at 10:31:10AM -0700, David Wheeler wrote:
> On Jul 31, 2008, at 00:07, Abhijit Menon-Sen wrote:
>
>> I have attached two patches:
>>
>> - funcdef.diff implements pg_get_functiondef()
>> - edit.diff implements "\ef function" in psql based on (1).
>>
>> Comments appreciated.
>
> +1
>
> I like! The ability to easily edit a function on the fly in psql
> will be very welcome to DBAs I know. And I like the
> pg_get_functiondef() function, too, as that will simplify editing
> existing functions in other admin apps, like pgAdmin.
>
> I'm starting to get really excited for 8.4. I can haz cheezburger?

You do understand you've just kicked off a discussion of shipping
PL/LOLCODE by default.

> Oops, I mean, when does it ship? ;-P

Christmas ;)

Cheers,
David.
--
David Fetter <david@fetter.org> http://fetter.org/
Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter
Skype: davidfetter XMPP: david.fetter@gmail.com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[GENERAL] Vacuum Vs Vacuum Full

Hi,

I've been trying to get to the bottom of the differences between a vacuum and a vacuum full. It seems to me that the difference is that a vacuum full also recovers disk space (and locks things, making it less than useful on production servers). But I believe that both will fix the transaction ID warning (example message below).

"WARNING: database "mydb" must be vacuumed within 177009986 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in "mydb"."
Which is the reason I ask the question: is a full vacuum useful for anything other than reclaiming disk space?

On a side note, we doubled our page slots, but they ran out much faster (of course) than we thought. Is there a good SQL statement that can tell you what your current transaction ID is?

Thanks in advance.

Cheers,
Rob
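
As for the side question, the wraparound warning is driven by how old each
database's datfrozenxid is, so the per-database transaction age is usually the
number to watch. A minimal sketch (standard catalog columns, nothing specific
to this setup):

-- transactions elapsed since each database was last frozen
SELECT datname, age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY xid_age DESC;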



Re: [PERFORM] file system and raid performance

I recently ran some tests on Ubuntu Hardy Server (Linux) comparing JFS, XFS,
and ZFS+FUSE. It was all 32-bit and on old hardware, plus I only used
bonnie++, so the numbers are really only useful for my hardware.

What parameters were used to create the XFS partition in these tests? And,
what options were used to mount the file system? Was the kernel 32-bit or
64-bit? Given what I've seen with some of the XFS options (like lazy-count),
I am wondering about the options used in these tests.

Thanks,
Greg


--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

Re: [HACKERS] DROP DATABASE always seeing database in use

Jens-Wolfhard Schicke <drahflow@gmx.de> writes:
> Tom Lane wrote:
>> ERROR: database "%s" is being accessed by other users
>> DETAIL: There are %d session(s) and %d prepared transaction(s) using the database.
>>
>> I'm aware that this phrasing might not translate very nicely ... anyone
>> have a suggestion for better wording?

> I can only estimate translation effort into German, but how about:

> DETAIL: Active users of the database: %d session(s), %d prepared transaction(s)

Hmmm ... what I ended up committing was code that special-cased the
common cases where you only have one or the other, ie

/*
* We don't worry about singular versus plural here, since the English
* rules for that don't translate very well. But we can at least avoid
* the case of zero items.
*/
if (notherbackends > 0 && npreparedxacts > 0)
errdetail("There are %d other session(s) and %d prepared transaction(s) using the database.",
notherbackends, npreparedxacts);
else if (notherbackends > 0)
errdetail("There are %d other session(s) using the database.",
notherbackends);
else
errdetail("There are %d prepared transaction(s) using the database.",
npreparedxacts);

Your proposal seems fine for the first case but a bit stilted for the
other two. Or maybe that's just me.

Of course, we don't *have* to do it as above at all, if "0 prepared
transactions" doesn't bother people.

Ideas anybody?

regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [GENERAL] bytea encode performance issues

Results below:

> ... but given that, I wonder whether the cost isn't from fetching
> the toasted messageblk data, and nothing directly to do with either
> the encode() call or the ~~ test. It would be interesting to compare
> the results of
>
> explain analyze select encode(messageblk, 'escape') ~~ '%Yossi%'
> from dbmail_messageblks where is_header = 0;
>
"Seq Scan on dbmail_messageblks (cost=0.00..38449.06 rows=162096
width=756) (actual time=0.071..492776.008 rows=166748 loops=1)"
" Filter: (is_header = 0)"
"Total runtime: 492988.410 ms"


> explain analyze select encode(messageblk, 'escape')
> from dbmail_messageblks where is_header = 0;
>
"Seq Scan on dbmail_messageblks (cost=0.00..38043.81 rows=162096
width=756) (actual time=16.008..306408.633 rows=166750 loops=1)"
" Filter: (is_header = 0)"
"Total runtime: 306585.369 ms"

> explain analyze select messageblk = 'X'
> from dbmail_messageblks where is_header = 0;
>
"Seq Scan on dbmail_messageblks (cost=0.00..38043.81 rows=162096
width=756) (actual time=18.169..251212.223 rows=166754 loops=1)"
" Filter: (is_header = 0)"
"Total runtime: 251384.900 ms"

> explain analyze select length(messageblk)
> from dbmail_messageblks where is_header = 0;
>
"Seq Scan on dbmail_messageblks (cost=0.00..38043.81 rows=162096
width=756) (actual time=20.436..2585.098 rows=166757 loops=1)"
" Filter: (is_header = 0)"
"Total runtime: 2673.840 ms"


> (length is chosen with malice aforethought: unlike the other cases,
> it doesn't require detoasting a toasted input)
>
> regards, tom lane

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [JDBC] macaddr data type and prepared statements

On Mon, 4 Aug 2008, Steve Foster wrote:

> I'm trying to bulk load some MAC addresses using a prepared statement. But I
> keep on getting an error about incorrect datatype (complains that I'm trying
> to insert "character varying"). Bellow is an example of the code that I'm
> using:
>
> stmt.setString(3, line[2]);
>

Don't use setString for non-string types. With a recent JDBC driver you
should use setObject(3, line[2], Types.OTHER);

Kris Jurka


--
Sent via pgsql-jdbc mailing list (pgsql-jdbc@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-jdbc

[COMMITTERS] pgsql: Fix some message style guideline violations in pg_regress, as

Log Message:
-----------
Fix some message style guideline violations in pg_regress, as well as
some failures to expose messages for translation.

Modified Files:
--------------
pgsql/src/test/regress:
pg_regress.c (r1.46 -> r1.47)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/test/regress/pg_regress.c?r1=1.46&r2=1.47)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

Re: [PERFORM] file system and raid performance

On Mon, 4 Aug 2008, Mark Wong wrote:

> Hi all,
>
> We've thrown together some results from simple i/o tests on Linux
> comparing various file systems, hardware and software raid with a
> little bit of volume management:
>
> http://wiki.postgresql.org/wiki/HP_ProLiant_DL380_G5_Tuning_Guide
>
> What I'd like to ask of the folks on the list is how relevant is this
> information in helping make decisions such as "What file system should
> I use?" "What performance can I expect from this RAID configuration?"
> I know these kind of tests won't help answer questions like "Which
> file system is most reliable?" but we would like to be as helpful as
> we can.
>
> Any suggestions/comments/criticisms for what would be more relevant or
> interesting also appreciated. We've started with Linux but we'd also
> like to hit some other OS's. I'm assuming FreeBSD would be the other
> popular choice for the DL-380 that we're using.
>
> I hope this is helpful.

it's definitely timely for me (we were having a spirited 'discussion' on
this topic at work today ;-)

what happened with XFS?

you show it as not completing half the tests in the single-disk table and
it's completely missing from the other ones.

what OS/kernel were you running?

if it was linux, which software raid did you try (md or dm)? did you use
lvm or raw partitions?

David Lang

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

[PERFORM] file system and raid performance

Hi all,

We've thrown together some results from simple i/o tests on Linux
comparing various file systems, hardware and software raid with a
little bit of volume management:

http://wiki.postgresql.org/wiki/HP_ProLiant_DL380_G5_Tuning_Guide

What I'd like to ask of the folks on the list is how relevant is this
information in helping make decisions such as "What file system should
I use?" "What performance can I expect from this RAID configuration?"
I know these kind of tests won't help answer questions like "Which
file system is most reliable?" but we would like to be as helpful as
we can.

Any suggestions/comments/criticisms for what would be more relevant or
interesting also appreciated. We've started with Linux but we'd also
like to hit some other OS's. I'm assuming FreeBSD would be the other
popular choice for the DL-380 that we're using.

I hope this is helpful.

Regards,
Mark

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

[HACKERS] Reliability of CURRVAL in a RULE

Is the use of CURRVAL in this example reliable in heavy use?

CREATE TABLE users (
id SERIAL NOT NULL,
email VARCHAR(24) DEFAULT NULL,
PRIMARY KEY (id)
);
CREATE TABLE users_with_email (
id INTEGER NOT NULL
);
CREATE RULE add_email AS ON INSERT TO users WHERE (NEW.email IS NULL)
DO INSERT INTO users_with_email (id) VALUES (CURRVAL('users_id_seq'));

I tried...

CREATE RULE add_email AS ON INSERT TO users WHERE (NEW.email IS NULL)
DO INSERT INTO users_with_email (id) VALUES (NEW.id);

which was incrementing the sequence twice. Should I be using a trigger
instead? This rule seems quite simple and easy enough... if reliable. -
Nick
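
For reference, a minimal sketch of the trigger alternative (untested; the
function and trigger names are made up). Because an AFTER INSERT trigger sees
the row after its defaults were evaluated, NEW.id carries the value the serial
column actually received, so the sequence is not advanced a second time:

CREATE OR REPLACE FUNCTION add_email_trg() RETURNS trigger AS $$
BEGIN
    IF NEW.email IS NULL THEN
        INSERT INTO users_with_email (id) VALUES (NEW.id);
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER add_email_after_insert
    AFTER INSERT ON users
    FOR EACH ROW EXECUTE PROCEDURE add_email_trg();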

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [GENERAL] eliminating records not in (select id ... so SLOW?

On Fri, 01 Aug 2008 10:33:59 -0400
Tom Lane <tgl@sss.pgh.pa.us> wrote:

> Ivan Sergio Borgonovo <mail@webthatworks.it> writes:
> > Well I reached 3Gb of work_mem and still I got:
>
> > "Seq Scan on catalog_categoryitem (cost=31747.84..4019284477.13
> > rows=475532 width=6)"
> > " Filter: (NOT (subplan))"
> > " SubPlan"
> > " -> Materialize (cost=31747.84..38509.51 rows=676167
> > width=8)" " -> Seq Scan on catalog_items
> > (cost=0.00..31071.67 rows=676167 width=8)"
>
> Huh. The only way I can see for that to happen is if the datatypes
> involved aren't hashable. What's the datatypes of the two columns
> being compared, anyway?

I changed both columns to bigint.
I added 2 indexes on the ItemID column of both tables and increased
work_mem to 3Gb [sic].
The query got executed in ~1300ms... but explain gave the same
output as the one above.

The problem is solved... but curious minds want to know.
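
For comparison, a hypothetical anti-join form of the same operation (assuming
it was a DELETE; the table and column names are guessed from the plan above).
With the new index on catalog_items, the correlated probe is cheap, instead of
re-scanning the materialized subplan output for every outer row:

DELETE FROM catalog_categoryitem
WHERE NOT EXISTS (
    SELECT 1
    FROM catalog_items
    WHERE catalog_items.itemid = catalog_categoryitem.itemid
);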

--
Ivan Sergio Borgonovo
http://www.webthatworks.it


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [pgadmin-hackers] pgScript patch

On Mon, Aug 4, 2008 at 11:03 AM, Dave Page <dpage@pgadmin.org> wrote:

> I'll leave it at that for now, and look forward to the next patch :-)

Forgot to mention - we'll also need a doc patch when this is applied.
Probably some tweaks to the query tool page, and a reformatted version of
your syntax reference.

--
Dave Page
EnterpriseDB UK: http://www.enterprisedb.com

--
Sent via pgadmin-hackers mailing list (pgadmin-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgadmin-hackers

[GENERAL] Fwd: Returning Cursor

Hello,

I am a developer working on Postgres. I just wrote a function which will return a refcursor, as shown below.



CREATE OR REPLACE FUNCTION reffunc(refcursor)
  RETURNS refcursor AS
$BODY$
BEGIN
    OPEN $1 FOR SELECT * FROM SAM1;
    RETURN $1;
END;
$BODY$
  LANGUAGE 'plpgsql' VOLATILE
  COST 100;


I have problems accessing this function from my middle tier, i.e. VC++.

I wrote a VC statement to retrieve values from this refcursor using a record set. I can't access any of the values that the select statement in the function should retrieve. When we executed the above function from VC, we only got the cursor name. We've been trying to access the values for the past week. Can you please help me with sample code showing how to get the values into a recordset using this refcursor? Please do reply; this is very urgent.

Thanks and regards
Ravi Kiran L
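
For what it's worth, a minimal sketch of how such a cursor is normally
consumed. The cursor exists only inside the transaction that created it, which
is why a plain call returns nothing but the cursor's name; the rows come from a
separate FETCH in the same transaction (the same sequence of statements can be
sent from any client API):

BEGIN;
SELECT reffunc('mycur');   -- opens the cursor and returns its name
FETCH ALL FROM mycur;      -- this result set contains the rows of SAM1
COMMIT;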


[COMMITTERS] pgbouncer - pgbouncer: mention log msg cleanup

Log Message:
-----------
mention log msg cleanup

Modified Files:
--------------
pgbouncer:
NEWS (r1.26 -> r1.27)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgbouncer/pgbouncer/NEWS.diff?r1=1.26&r2=1.27)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

Re: [pgsql-es-ayuda] conectar desde Java

Edgar Enriquez wrote:
>
>
> I also need to build a Java application that connects to a PostgreSQL
> database. Right now everything I have is on JDBC, but to set up a server I
> was told I need to install Glassfish and create the connections there. Then
> there is talk of JPA, Hibernate and TopLink (which apparently all do the
> same thing), yet in the end the connection is still made by the PostgreSQL
> JDBC driver. Does anyone know what the difference is? It also seems that
> Glassfish handles concurrency (something traditionally done in Postgres).
>
> Greetings to all and thanks for your replies
>
> ----- Original message ----
> From: Marco Castillo <mabcastillo@gmail.com>
> To: "pgsql-es-ayuda@postgresql.org" <pgsql-es-ayuda@postgresql.org>
> Sent: Friday, August 1, 2008 21:04:54
> Subject: Re: [pgsql-es-ayuda] conectar desde Java
>
> Well, the idea of this list is to learn and to help each other (my personal
> perception). There are several of us here who work with Java and
> PostgreSQL. Ask your questions here and we'll give you a hand.
>
> Regards
>
> Marco
>
> 2008/8/1 Fabio Arias <fharias@gmail.com <mailto:fharias@gmail.com>>
>
> Whatever you need regarding java+postgresql, write to me and I will gladly
> help you.
>
> Bye
>
> On August 1, 2008 at 9:50, Gabriel
> Ferro <gabrielrferro@yahoo.com.ar
> <mailto:gabrielrferro@yahoo.com.ar>> wrote:
>
> OK, many thanks to everyone. I managed to do it, although it is hard for
> me, considering that I know nothing about Java and I am from the old school
> where objects and classes did not exist.
> Does anyone know a good Spanish-language java+postgres list?
>
> --
> Fabio Hernando Arias Vera
> Cel. 314 411 7776
Glassfish is useful if you are building a web application; for a desktop
application you use the JDBC driver directly. I wrote myself a class that
handles the database and I go through it to reach any database (ODBC or JDBC).
Regards, Fernando
--
TIP 8: explain analyze is your friend

[COMMITTERS] pgbouncer - pgbouncer: v1.2.1

Log Message:
-----------
v1.2.1

Modified Files:
--------------
pgbouncer:
NEWS (r1.25 -> r1.26)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgbouncer/pgbouncer/NEWS.diff?r1=1.25&r2=1.26)
configure.ac (r1.40 -> r1.41)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgbouncer/pgbouncer/configure.ac.diff?r1=1.40&r2=1.41)
pgbouncer/debian:
changelog (r1.14 -> r1.15)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgbouncer/pgbouncer/debian/changelog.diff?r1=1.14&r2=1.15)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

[COMMITTERS] pgbouncer - pgbouncer: wording cleanup for drop_on_error

Log Message:
-----------
wording cleanup for drop_on_error

Modified Files:
--------------
pgbouncer/doc:
config.txt (r1.11 -> r1.12)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgbouncer/pgbouncer/doc/config.txt.diff?r1=1.11&r2=1.12)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

Re: [SQL] return setof record - strange behavior

Thanks for your answer. It's curious that the SQL function works as
expected - but it requires the OUT params.

regards
mk


2008/8/4 Pawel Socha <pawel.socha@gmail.com>:
>
>
> 2008/8/4 Marcin Krawczyk <jankes.mk@gmail.com>
>>
>> Hi everybody. Can anyone enlighten me what's wrong with this function :
>>
>> CREATE OR REPLACE FUNCTION month_year(mon integer, intv integer, OUT
>> ro integer, OUT mi integer)
>> RETURNS SETOF record AS
>> $BODY$
>> DECLARE
>> w record;
>> cy integer := EXTRACT (YEAR FROM current_date);
>>
>> BEGIN
>>
>> FOR w IN
>> SELECT (CASE WHEN m > 12 THEN cy + 1 ELSE cy END)::integer, (CASE
>> WHEN m > 12 THEN m - 12 ELSE m END)::integer
>> FROM generate_series(mon + 1, mon + intv) AS m
>> LOOP
>> RETURN next;
>> END LOOP;
>>
>> END;
>>
>> $BODY$
>> LANGUAGE 'plpgsql' VOLATILE;
>>
>>
>> SELECT * FROM month_year(10, 5);
>>
>> Why does it return empty SET ? The amount of rows is correct though ....
>> I'm running 8.1.4
>>
>> regards
>> mk
>>
>> --
>> Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
>> To make changes to your subscription:
>> http://www.postgresql.org/mailpref/pgsql-sql
>
> Hi
>
> merlin=# CREATE OR REPLACE FUNCTION month_year(mon integer, intv integer)
> RETURNS SETOF record AS
> $BODY$
> DECLARE
> w record;
> cy integer := EXTRACT (YEAR FROM current_date);
> BEGIN
> FOR w IN
> SELECT (CASE WHEN m > 12 THEN cy + 1 ELSE cy END)::integer, (CASE
> WHEN m > 12 THEN m - 12 ELSE m END)::integer
> FROM generate_series(mon + 1, mon + intv) AS m
> LOOP
> RETURN next w;
> END LOOP;
> END;
> $BODY$
> LANGUAGE 'plpgsql' VOLATILE;
>
> and
>
> merlin=# SELECT * FROM month_year(10, 5) as (x integer, y integer);
> x | y
> ------+----
> 2008 | 11
> 2008 | 12
> 2009 | 1
> 2009 | 2
> 2009 | 3
> (5 rows)
>
>
> without output params
>
>
> --
> --
> Best regards
>
> Pawel Socha
> pawel.socha@gmail.com
>
> programmer/administrator
>
> perl -le 's**02).4^&-%2,).^9%4^!./4(%2^3,!#+7!2%^53%2&**y%& -;^[%"`-{
> a%%s%%$_%ee'
>

--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql

Re: [PERFORM] SSD Performance Article

Scott Marlowe wrote:
> On Thu, Jul 31, 2008 at 11:45 AM, Matthew T. O'Connor <matthew@zeut.net> wrote:
>
>> Interesting read...
>>
>> http://www.linux.com/feature/142658
>>
>
> Wish he had used a dataset larger than 1G...
>
>
Wish he had performed a test with the index on a dedicated SATA.

HH

--
H. Hall
ReedyRiver Group LLC
http://www.reedyriver.com


--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

[COMMITTERS] pgbouncer - pgbouncer: cancel shutdown on resume otherwise admin gets

Log Message:
-----------
cancel shutdown on resume

otherwise admin gets bad surprise on next pause

Modified Files:
--------------
pgbouncer/src:
admin.c (r1.37 -> r1.38)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgbouncer/pgbouncer/src/admin.c.diff?r1=1.37&r2=1.38)
main.c (r1.45 -> r1.46)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgbouncer/pgbouncer/src/main.c.diff?r1=1.45&r2=1.46)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

[BUGS] BUG #4339: The postgreSQL service stops abnormally

The following bug has been logged online:

Bug reference: 4339
Logged by: Bhaskar Sirohi
Email address: bhaskar.sirohi@druvaa.com
PostgreSQL version: 8.3.3
Operating system: Windows 2003 Server
Description: The postgreSQL service stops abnormally
Details:

Hi All,

The PostgreSQL service stops abnormally, and I can't restart it until I enter the
password for the \postgre login account. Once I do that, everything is fine
again.

Below are the snaps of pg_logs

2008-07-29 09:14:46 EDT LOG: database system was interrupted; last known up
at 2008-07-28 23:13:20 EDT
2008-07-29 09:14:46 EDT LOG: database system was not properly shut down;
automatic recovery in progress
2008-07-29 09:14:46 EDT LOG: record with zero length at 2/D0E47B88
2008-07-29 09:14:46 EDT LOG: redo is not required
2008-07-29 09:14:46 EDT LOG: loaded library
"$libdir/plugins/plugin_debugger.dll"
2008-07-29 09:14:46 EDT FATAL: the database system is starting up
2008-07-29 09:14:46 EDT LOG: database system is ready to accept
connections
2008-07-29 09:14:46 EDT LOG: autovacuum launcher started
2008-07-29 09:14:47 EDT LOG: loaded library
"$libdir/plugins/plugin_debugger.dll"
2008-07-29 09:15:29 EDT LOG: loaded library
"$libdir/plugins/plugin_debugger.dll"
2008-07-29 16:26:19 EDT LOG: loaded library
"$libdir/plugins/plugin_debugger.dll"
2008-07-29 16:41:03 EDT LOG: loaded library
"$libdir/plugins/plugin_debugger.dll"
2008-07-29 16:50:57 EDT LOG: loaded library
"$libdir/plugins/plugin_debugger.dll"
2008-07-29 16:51:27 EDT LOG: loaded library
"$libdir/plugins/plugin_debugger.dll"
2008-07-29 17:30:13 EDT LOG: loaded library
"$libdir/plugins/plugin_debugger.dll"


2008-07-30 03:03:44 EDT LOG: loaded library
"$libdir/plugins/plugin_debugger.dll"
2008-07-30 05:35:15 EDT LOG: loaded library
"$libdir/plugins/plugin_debugger.dll"
2008-07-30 10:27:35 EDT LOG: loaded library
"$libdir/plugins/plugin_debugger.dll"
2008-07-30 15:05:01 EDT LOG: checkpoints are occurring too frequently (28
seconds apart)
2008-07-30 15:05:01 EDT HINT: Consider increasing the configuration
parameter "checkpoint_segments".
2008-07-30 15:13:34 EDT LOG: checkpoints are occurring too frequently (29
seconds apart)
2008-07-30 15:13:34 EDT HINT: Consider increasing the configuration
parameter "checkpoint_segments".
2008-07-30 15:18:50 EDT LOG: checkpoints are occurring too frequently (28
seconds apart)
2008-07-30 15:18:50 EDT HINT: Consider increasing the configuration
parameter "checkpoint_segments".
2008-07-30 15:19:21 EDT LOG: received fast shutdown request
2008-07-30 15:19:21 EDT LOG: aborting any active transactions
2008-07-30 15:19:21 EDT ERROR: canceling statement due to user request
2008-07-30 15:19:21 EDT STATEMENT: COMMIT
2008-07-30 15:19:21 EDT ERROR: canceling statement due to user request
2008-07-30 15:19:21 EDT STATEMENT: ROLLBACK
2008-07-30 15:19:21 EDT ERROR: current transaction is aborted, commands
ignored until end of transaction block
2008-07-30 15:19:21 EDT STATEMENT: SELECT type, cino, ctime FROM folder
WHERE ino = 2 AND name = 'Michael H. Modee' AND dtime = 0
2008-07-30 15:19:21 EDT ERROR: current transaction is aborted, commands
ignored until end of transaction block
2008-07-30 15:19:21 EDT STATEMENT: SELECT type, cino, ctime FROM folder
WHERE ino = 2 AND name = 'Michael H. Modee' AND dtime = 0
2008-07-30 15:19:21 EDT ERROR: canceling autovacuum task
2008-07-30 15:19:21 EDT CONTEXT: automatic analyze of table
"notebookbkp.public.bmap"
2008-07-30 15:19:21 EDT FATAL: terminating connection due to administrator
command
2008-07-30 15:19:21 EDT FATAL: terminating connection due to administrator
command
2008-07-30 15:19:21 EDT LOG: autovacuum launcher shutting down
2008-07-30 15:19:24 EDT LOG: shutting down
2008-07-30 15:19:24 EDT LOG: database system is shut down

--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs

Re: [pgadmin-hackers] Support for integrated tsearch configuration

On Fri, Aug 1, 2008 at 11:05 AM, Guillaume Lelarge
<guillaume@lelarge.info> wrote:
>
> I worked three days on it. The fourth was more about testing it on different
> platforms (GTK, Windows, Mac). Now, It's working. I don't attach the patch
> because it's really big, but here is a URL to get it compressed:

Cool :-). The usual list of random thoughts....

- Should we call objects 'FTS xxxx'? All the 'Text Search xxxx' labels
look a little long.

- There are some tokens to add to the ctlSQLBox list - at least
GETTOKEN, LEXTYPES, HEADLINE, INIT, LEXIZE

- There is a little inconsistency in the RE-SQL formatting - for a
template for example we have:

CREATE TEXT SEARCH TEMPLATE fred (
INIT = dsimple_init,
LEXIZE = dsimple_lexize);

and for a dictionary:

CREATE TEXT SEARCH DICTIONARY fred (
TEMPLATE = "simple"
);

Note the ); position.

- I got a crash when trying to create a config with no tokens.

0 pgAdmin3-Debug 0x00021b63
wxArrayString::GetCount() const + 9 (arrstr.h:144)
1 pgAdmin3-Debug 0x000fed06
dlgTextSearchConfiguration::GetSql() + 1634
(dlgTextSearchConfiguration.cpp:346)
2 pgAdmin3-Debug 0x000cbf7f
dlgProperty::OnOK(wxCommandEvent&) + 335 (dlgProperty.cpp:759)
...

- The Dictionaries textbox is oddly sized on the Tokens tab of the
Configuration.

- The dialogue boxes default to different sizes. They should all be
consistently sized.

- Don't forget to add new headers to precomp.h.

I only gave the code a cursory glance - you've got lots of pgAdmin
experience now so I trust that it's all as clean as the bits I looked
at :-)

Overall, looks pretty good :-)

--
Dave Page
EnterpriseDB UK: http://www.enterprisedb.com

--
Sent via pgadmin-hackers mailing list (pgadmin-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgadmin-hackers

[COMMITTERS] pgbouncer - pgbouncer: exit immediately on SIGINT if suspend was in

Log Message:
-----------
exit immediately on SIGINT if suspend was in progress

Modified Files:
--------------
pgbouncer/src:
main.c (r1.44 -> r1.45)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgbouncer/pgbouncer/src/main.c.diff?r1=1.44&r2=1.45)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

Re: [SQL] return setof record - strange behavior



2008/8/4 Marcin Krawczyk <jankes.mk@gmail.com>
Hi everybody. Can anyone enlighten me what's wrong with this function :

CREATE OR REPLACE FUNCTION month_year(mon integer, intv integer, OUT
ro integer, OUT mi integer)
 RETURNS SETOF record AS
$BODY$
DECLARE
w       record;
cy      integer := EXTRACT (YEAR FROM current_date);

BEGIN

FOR w IN
       SELECT (CASE WHEN  m > 12 THEN cy + 1 ELSE cy END)::integer, (CASE
WHEN  m > 12 THEN m - 12 ELSE m END)::integer
       FROM generate_series(mon + 1, mon + intv) AS m
LOOP
       RETURN next;
END LOOP;

END;

$BODY$
 LANGUAGE 'plpgsql' VOLATILE;


SELECT * FROM month_year(10, 5);

Why does it return empty SET ? The amount of rows is correct though ....
I'm running 8.1.4

regards
mk

--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql

Hi

merlin=# CREATE OR REPLACE FUNCTION month_year(mon integer, intv integer)
 RETURNS SETOF record AS
$BODY$
DECLARE
w       record;
cy      integer := EXTRACT (YEAR FROM current_date);
BEGIN
FOR w IN
       SELECT (CASE WHEN  m > 12 THEN cy + 1 ELSE cy END)::integer, (CASE
WHEN  m > 12 THEN m - 12 ELSE m END)::integer
       FROM generate_series(mon + 1, mon + intv) AS m
LOOP
       RETURN next w;
END LOOP;
END;
$BODY$
 LANGUAGE 'plpgsql' VOLATILE;

and

merlin=# SELECT * FROM month_year(10, 5) as (x integer, y integer);
  x   | y
------+----
 2008 | 11
 2008 | 12
 2009 |  1
 2009 |  2
 2009 |  3
(5 rows)


without output params


--
--
Best regards

Pawel Socha
pawel.socha@gmail.com

programmer/administrator

perl -le 's**02).4^&-%2,).^9%4^!./4(%2^3,!#+7!2%^53%2&**y%& -;^[%"`-{ a%%s%%$_%ee'

[HACKERS] DROP DATABASE always seeing database in use

It seems there's something wrong with CheckOtherDBBackends() but I haven't
exactly figured out what. There are no other sessions but drop database keeps
saying "regression" is being accessed by other users. I do see Autovacuum
touching tables in regression but CheckOtherDBBackends() is supposed to send
it a sigkill if it finds it and it doesn't seem to be doing so.

I've been hacking on unrelated stuff in this database and have caused multiple
core dumps and autovacuum is finding orphaned temp tables. It's possible some
state is corrupted in some way here but I don't see what.


postgres=# select * from pg_stat_activity;
datid | datname | procpid | usesysid | usename | current_query | waiting | xact_start | query_start | backend_start | client_addr | client_port
-------+----------+---------+----------+---------+---------------------------------+---------+-------------------------------+-------------------------------+-------------------------------+-------------+-------------
11505 | postgres | 5616 | 10 | stark | select * from pg_stat_activity; | f | 2008-08-04 11:46:05.438479+01 | 2008-08-04 11:46:05.438956+01 | 2008-08-04 11:45:19.827702+01 | | -1
(1 row)

postgres=# commit;
COMMIT

postgres=# drop database regression;
ERROR: 55006: database "regression" is being accessed by other users
LOCATION: dropdb, dbcommands.c:678


select * from pg_stat_activity;
postgres=# datid | datname | procpid | usesysid | usename | current_query | waiting | xact_start | query_start | backend_start | client_addr | client_port
-------+----------+---------+----------+---------+---------------------------------+---------+-------------------------------+-------------------------------+-------------------------------+-------------+-------------
11505 | postgres | 5616 | 10 | stark | select * from pg_stat_activity; | f | 2008-08-04 11:46:45.619642+01 | 2008-08-04 11:46:45.620115+01 | 2008-08-04 11:45:19.827702+01 | | -1
(1 row)
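
A hedged aside: besides live backends, prepared transactions also keep a
database "in use", so when this comes up it may be worth checking both before
suspecting CheckOtherDBBackends(). Something like:

SELECT procpid, datname, current_query
FROM pg_stat_activity
WHERE datname = 'regression';

SELECT gid, database, prepared
FROM pg_prepared_xacts
WHERE database = 'regression';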


--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's On-Demand Production Tuning

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[COMMITTERS] pgbouncer - pgbouncer: suspend_socket_list can drop sockets, so needs

Log Message:
-----------
suspend_socket_list can drop sockets, so needs _safe

Modified Files:
--------------
pgbouncer/src:
janitor.c (r1.29 -> r1.30)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgbouncer/pgbouncer/src/janitor.c.diff?r1=1.29&r2=1.30)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

Re: [HACKERS] Location for pgstat.stat

Index: backend/postmaster/pgstat.c
===================================================================
RCS file: /cvsroot/pgsql/src/backend/postmaster/pgstat.c,v
retrieving revision 1.176
diff -c -r1.176 pgstat.c
*** backend/postmaster/pgstat.c 30 Jun 2008 10:58:47 -0000 1.176
--- backend/postmaster/pgstat.c 4 Aug 2008 09:39:23 -0000
***************
*** 67,74 ****
* Paths for the statistics files (relative to installation's $PGDATA).
* ----------
*/
! #define PGSTAT_STAT_FILENAME "global/pgstat.stat"
! #define PGSTAT_STAT_TMPFILE "global/pgstat.tmp"

/* ----------
* Timer definitions.
--- 67,76 ----
* Paths for the statistics files (relative to installation's $PGDATA).
* ----------
*/
! #define PGSTAT_STAT_PERMANENT_FILENAME "global/pgstat.stat"
! #define PGSTAT_STAT_PERMANENT_TMPFILE "global/pgstat.tmp"
! #define PGSTAT_STAT_FILENAME "pgstat_tmp/pgstat.stat"
! #define PGSTAT_STAT_TMPFILE "pgstat_tmp/pgstat.tmp"

/* ----------
* Timer definitions.
***************
*** 218,225 ****
static void pgstat_beshutdown_hook(int code, Datum arg);

static PgStat_StatDBEntry *pgstat_get_db_entry(Oid databaseid, bool create);
! static void pgstat_write_statsfile(void);
! static HTAB *pgstat_read_statsfile(Oid onlydb);
static void backend_read_statsfile(void);
static void pgstat_read_current_status(void);

--- 220,227 ----
static void pgstat_beshutdown_hook(int code, Datum arg);

static PgStat_StatDBEntry *pgstat_get_db_entry(Oid databaseid, bool create);
! static void pgstat_write_statsfile(bool permanent);
! static HTAB *pgstat_read_statsfile(Oid onlydb, bool permanent);
static void backend_read_statsfile(void);
static void pgstat_read_current_status(void);

***************
*** 509,514 ****
--- 511,517 ----
pgstat_reset_all(void)
{
unlink(PGSTAT_STAT_FILENAME);
+ unlink(PGSTAT_STAT_PERMANENT_FILENAME);
}

#ifdef EXEC_BACKEND
***************
*** 2595,2601 ****
* zero.
*/
pgStatRunningInCollector = true;
! pgStatDBHash = pgstat_read_statsfile(InvalidOid);

/*
* Setup the descriptor set for select(2). Since only one bit in the set
--- 2598,2604 ----
* zero.
*/
pgStatRunningInCollector = true;
! pgStatDBHash = pgstat_read_statsfile(InvalidOid, true);

/*
* Setup the descriptor set for select(2). Since only one bit in the set
***************
*** 2635,2641 ****
if (!PostmasterIsAlive(true))
break;

! pgstat_write_statsfile();
need_statwrite = false;
need_timer = true;
}
--- 2638,2644 ----
if (!PostmasterIsAlive(true))
break;

! pgstat_write_statsfile(false);
need_statwrite = false;
need_timer = true;
}
***************
*** 2803,2809 ****
/*
* Save the final stats to reuse at next startup.
*/
! pgstat_write_statsfile();

exit(0);
}
--- 2806,2812 ----
/*
* Save the final stats to reuse at next startup.
*/
! pgstat_write_statsfile(true);

exit(0);
}
***************
*** 2891,2897 ****
* ----------
*/
static void
! pgstat_write_statsfile(void)
{
HASH_SEQ_STATUS hstat;
HASH_SEQ_STATUS tstat;
--- 2894,2900 ----
* ----------
*/
static void
! pgstat_write_statsfile(bool permanent)
{
HASH_SEQ_STATUS hstat;
HASH_SEQ_STATUS tstat;
***************
*** 2901,2917 ****
PgStat_StatFuncEntry *funcentry;
FILE *fpout;
int32 format_id;

/*
* Open the statistics temp file to write out the current values.
*/
! fpout = fopen(PGSTAT_STAT_TMPFILE, PG_BINARY_W);
if (fpout == NULL)
{
ereport(LOG,
(errcode_for_file_access(),
errmsg("could not open temporary statistics file \"%s\": %m",
! PGSTAT_STAT_TMPFILE)));
return;
}

--- 2904,2922 ----
PgStat_StatFuncEntry *funcentry;
FILE *fpout;
int32 format_id;
+ const char *tmpfile = permanent?PGSTAT_STAT_PERMANENT_TMPFILE:PGSTAT_STAT_TMPFILE;
+ const char *statfile = permanent?PGSTAT_STAT_PERMANENT_FILENAME:PGSTAT_STAT_FILENAME;

/*
* Open the statistics temp file to write out the current values.
*/
! fpout = fopen(tmpfile, PG_BINARY_W);
if (fpout == NULL)
{
ereport(LOG,
(errcode_for_file_access(),
errmsg("could not open temporary statistics file \"%s\": %m",
! tmpfile)));
return;
}

***************
*** 2978,3002 ****
ereport(LOG,
(errcode_for_file_access(),
errmsg("could not write temporary statistics file \"%s\": %m",
! PGSTAT_STAT_TMPFILE)));
fclose(fpout);
! unlink(PGSTAT_STAT_TMPFILE);
}
else if (fclose(fpout) < 0)
{
ereport(LOG,
(errcode_for_file_access(),
errmsg("could not close temporary statistics file \"%s\": %m",
! PGSTAT_STAT_TMPFILE)));
! unlink(PGSTAT_STAT_TMPFILE);
}
! else if (rename(PGSTAT_STAT_TMPFILE, PGSTAT_STAT_FILENAME) < 0)
{
ereport(LOG,
(errcode_for_file_access(),
errmsg("could not rename temporary statistics file \"%s\" to \"%s\": %m",
! PGSTAT_STAT_TMPFILE, PGSTAT_STAT_FILENAME)));
! unlink(PGSTAT_STAT_TMPFILE);
}
}

--- 2983,3007 ----
ereport(LOG,
(errcode_for_file_access(),
errmsg("could not write temporary statistics file \"%s\": %m",
! tmpfile)));
fclose(fpout);
! unlink(tmpfile);
}
else if (fclose(fpout) < 0)
{
ereport(LOG,
(errcode_for_file_access(),
errmsg("could not close temporary statistics file \"%s\": %m",
! tmpfile)));
! unlink(tmpfile);
}
! else if (rename(tmpfile, statfile) < 0)
{
ereport(LOG,
(errcode_for_file_access(),
errmsg("could not rename temporary statistics file \"%s\" to \"%s\": %m",
! tmpfile, statfile)));
! unlink(tmpfile);
}
}

***************
*** 3006,3015 ****
*
* Reads in an existing statistics collector file and initializes the
* databases' hash table (whose entries point to the tables' hash tables).
* ----------
*/
static HTAB *
! pgstat_read_statsfile(Oid onlydb)
{
PgStat_StatDBEntry *dbentry;
PgStat_StatDBEntry dbbuf;
--- 3011,3025 ----
*
* Reads in an existing statistics collector file and initializes the
* databases' hash table (whose entries point to the tables' hash tables).
+ *
+ * If reading from the permanent file (which happens during collector
+ * startup, but never from backends), the file is removed once it's been
+ * successfully read. The temporary file is also removed at this time,
+ * to make sure backends don't read data from previous runs.
* ----------
*/
static HTAB *
! pgstat_read_statsfile(Oid onlydb, bool permanent)
{
PgStat_StatDBEntry *dbentry;
PgStat_StatDBEntry dbbuf;
***************
*** 3024,3029 ****
--- 3034,3040 ----
FILE *fpin;
int32 format_id;
bool found;
+ const char *statfile = permanent?PGSTAT_STAT_PERMANENT_FILENAME:PGSTAT_STAT_FILENAME;

/*
* The tables will live in pgStatLocalContext.
***************
*** 3052,3058 ****
* return zero for anything and the collector simply starts from scratch
* with empty counters.
*/
! if ((fpin = AllocateFile(PGSTAT_STAT_FILENAME, PG_BINARY_R)) == NULL)
return dbhash;

/*
--- 3063,3069 ----
* return zero for anything and the collector simply starts from scratch
* with empty counters.
*/
! if ((fpin = AllocateFile(statfile, PG_BINARY_R)) == NULL)
return dbhash;

/*
***************
*** 3241,3246 ****
--- 3252,3263 ----
done:
FreeFile(fpin);

+ if (permanent)
+ {
+ unlink(PGSTAT_STAT_PERMANENT_FILENAME);
+ unlink(PGSTAT_STAT_FILENAME);
+ }
+
return dbhash;
}

***************
*** 3259,3267 ****

/* Autovacuum launcher wants stats about all databases */
if (IsAutoVacuumLauncherProcess())
! pgStatDBHash = pgstat_read_statsfile(InvalidOid);
else
! pgStatDBHash = pgstat_read_statsfile(MyDatabaseId);
}


--- 3276,3284 ----

/* Autovacuum launcher wants stats about all databases */
if (IsAutoVacuumLauncherProcess())
! pgStatDBHash = pgstat_read_statsfile(InvalidOid, false);
else
! pgStatDBHash = pgstat_read_statsfile(MyDatabaseId, false);
}


Index: bin/initdb/initdb.c
===================================================================
RCS file: /cvsroot/pgsql/src/bin/initdb/initdb.c,v
retrieving revision 1.158
diff -c -r1.158 initdb.c
*** bin/initdb/initdb.c 19 Jul 2008 04:01:29 -0000 1.158
--- bin/initdb/initdb.c 4 Aug 2008 09:39:23 -0000
***************
*** 2461,2467 ****
"pg_multixact/offsets",
"base",
"base/1",
! "pg_tblspc"
};

progname = get_progname(argv[0]);
--- 2461,2468 ----
"pg_multixact/offsets",
"base",
"base/1",
! "pg_tblspc",
! "pgstat_tmp"
};

progname = get_progname(argv[0]);
Tom Lane wrote:
> Magnus Hagander <magnus@hagander.net> writes:
>> Tom Lane wrote:
>>> It doesn't seem to me that it'd be hard to support two locations for the
>>> stats file --- it'd just take another parameter to the read and write
>>> routines. pgstat.c already knows the difference between a normal write
>>> and a shutdown write ...
>
>> Right. Should it be removed from the permanent location when the server
>> starts?
>
> Yes, I would say so. There are two possible exit paths: normal shutdown
> (where we'd write a new file) and crash. In a crash we'd wish to delete
> the file anyway for fear that it's corrupted.
>
> Startup: read permanent file, then delete it.
>
> Post-crash: remove any permanent file (same as now)
>
> Shutdown: write permanent file.
>
> Normal stats collector write: write temp file.
>
> Backend stats fetch: read temp file.

Attached is a patch that implements this. I went with the option of just
storing it in a temporary directory that can be symlinked, and not
bothering with a GUC for it. Comments? (documentation updates are also
needed, but I'll wait with those until I hear patch comments :-P)


//Magnus

[SQL] return setof record - strange behavior

The function behaves as expected when written in plain SQL; only the plpgsql
function has the above-mentioned problem.

regards
mk

--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql

Re: [pgadmin-hackers] pgScript patch

Hi Mickael

On Sun, Jul 27, 2008 at 3:34 PM, Mickael Deloison <mdeloison@gmail.com> wrote:
> Hi pgadmin hackers,
>
> pgScript can now be integrated into pgAdmin3. I have made a patch on
> revision 7394 of pgAdmin. This patch is big therefore I do not post it
> in this email, it is instead available on the following server:
> http://pgscript.projects.postgresql.org/pgadmin

Cool. It is indeed a huge patch, so I'll have to leave it to your
mentor to undertake a more extensive code review, but here are a few
points I noticed whilst testing on Mac:

- Please include "Copyright (C) 2002 - 2008, The pgAdmin Development
Team" in the copyright notices at the top of each source file. I'm
happy for you to include your own copyright there also, but it makes
things much easier from a legal POV if you list us as well.

- The build failed initially with:

./pgscript/statements/pgsStmtList.cpp: In member function 'virtual
void pgsStmtList::eval(pgsVarMap&) const':
./pgscript/statements/pgsStmtList.cpp:56: error: cannot use typeid
with -fno-rtti
./pgscript/statements/pgsStmtList.cpp:56: error: cannot use typeid
with -fno-rtti
./pgscript/statements/pgsStmtList.cpp:57: error: cannot use typeid
with -fno-rtti
./pgscript/statements/pgsStmtList.cpp:57: error: cannot use typeid
with -fno-rtti
./pgscript/statements/pgsStmtList.cpp: In member function 'virtual
void pgsStmtList::eval(pgsVarMap&) const':
./pgscript/statements/pgsStmtList.cpp:56: error: cannot use typeid
with -fno-rtti
./pgscript/statements/pgsStmtList.cpp:56: error: cannot use typeid
with -fno-rtti
./pgscript/statements/pgsStmtList.cpp:57: error: cannot use typeid
with -fno-rtti
./pgscript/statements/pgsStmtList.cpp:57: error: cannot use typeid
with -fno-rtti
lipo: can't figure out the architecture type of:
/var/folders/uk/ukdzizfJHxe07gKAk8a+NE+++TI/-Tmp-//ccIbnqxz.out
make[2]: *** [pgsStmtList.o] Error 1
make[1]: *** [all-recursive] Error 1
make: *** [all] Error 2

After removing -fno-rtti from acinclude.m4:

- I see the following warnings:

./pgadmin/include/frm/frmQuery.h: In constructor
'frmQuery::frmQuery(frmMain*, const wxString&, pgConn*, const
wxString&, const wxString&)':
../pgadmin/include/frm/frmQuery.h:75: warning: 'frmQuery::pgscript'
will be initialized after
../pgadmin/include/frm/frmQuery.h:71: warning: 'wxTimer frmQuery::timer'
./frm/frmQuery.cpp:130: warning: when initialized here
../pgadmin/include/frm/frmQuery.h: In constructor
'frmQuery::frmQuery(frmMain*, const wxString&, pgConn*, const
wxString&, const wxString&)':
../pgadmin/include/frm/frmQuery.h:75: warning: 'frmQuery::pgscript'
will be initialized after
../pgadmin/include/frm/frmQuery.h:71: warning: 'wxTimer frmQuery::timer'
./frm/frmQuery.cpp:130: warning: when initialized here

- The following script crashes (yes, I realise it's missing a cast)

declare @i, @t;

set @i = 0;

while @i < 20
begin
set @t = 'aa' + @i;
create table @t (id serial primary key, data text);

set @i = @i + 1;
end

- The corrected script gives no feedback that it's finished, other
than re-enabling buttons. I would expect to see the appropriate
notices from the server about each table that is created, and the
status message on the status bar should change.

- The following script (with missing increment of @i) gave appropriate
errors when run the first time, but ran silently the second:

declare @i, @t;

set @i = 0;

while @i < 20
begin
set @t = 'aa' + cast(@i as string);
create table @t (id serial primary key, data text);
end

- Cancelling that script the first time round is awkward (we should
offer a cancel option on the error dialogue). Using the Stop button
seems to crash.

I'll leave it at that for now, and look forward to the next patch :-)

--
Dave Page
EnterpriseDB UK: http://www.enterprisedb.com

--
Sent via pgadmin-hackers mailing list (pgadmin-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgadmin-hackers

Re: [HACKERS] Mini improvement: statement_cost_limit

"Josh Berkus" <josh@agliodbs.com> writes:

> Tom,
>
>> Wasn't this exact proposal discussed and rejected awhile back?
>
> We rejected Greenplum's much more invasive resource manager, because it
> created a large performance penalty on small queries whether or not it was
> turned on. However, I don't remember any rejection of an idea as simple
> as a cost limit rejection.

The idea's certainly come up before. It probably received the usual
non-committal cold shoulder rather than an outright "rejection".

> This would, IMHO, be very useful for production instances of PostgreSQL.
> The penalty for mis-rejection of a poorly costed query is much lower than
> the penalty for having a bad query eat all your CPU.

Well that's going to depend on the application.... But I suppose there's
nothing wrong with having options which aren't always a good idea to use. The
real question I guess is whether there's ever a situation where it would be a
good idea to use this. I'm not 100% sure.

What I would probably use myself is an option to print a warning before
starting the query. That would be handy for interactive sessions so you would
be able to hit C-c instead of waiting for several minutes and then wondering
whether you got the query wrong.

I wonder if it would be useful to have a flag on some GUC options to make them
not globally settable. That is, for example, you could set enable_seqscan in
an individual session but not in postgres.conf. Or perhaps again just print a
warning that it's not recommended as a global configuration.

--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's Slony Replication support!

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[SQL] return setof record - strange behavior

Hi everybody. Can anyone enlighten me what's wrong with this function :

CREATE OR REPLACE FUNCTION month_year(mon integer, intv integer, OUT
ro integer, OUT mi integer)
RETURNS SETOF record AS
$BODY$
DECLARE
w record;
cy integer := EXTRACT (YEAR FROM current_date);

BEGIN

FOR w IN
SELECT (CASE WHEN m > 12 THEN cy + 1 ELSE cy END)::integer, (CASE
WHEN m > 12 THEN m - 12 ELSE m END)::integer
FROM generate_series(mon + 1, mon + intv) AS m
LOOP
RETURN next;
END LOOP;

END;

$BODY$
LANGUAGE 'plpgsql' VOLATILE;


SELECT * FROM month_year(10, 5);

Why does it return empty SET ? The amount of rows is correct though ....
I'm running 8.1.4

regards
mk
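
One possible explanation, with a sketch that has not been tested on 8.1.4:
RETURN NEXT without an expression returns the current values of the OUT
parameters, and the loop above never assigns ro and mi, so every emitted row
ends up null. Assigning the computed values to the OUT variables before
RETURN NEXT should give the expected output, and no column definition list is
needed in the call:

CREATE OR REPLACE FUNCTION month_year(mon integer, intv integer, OUT
ro integer, OUT mi integer)
RETURNS SETOF record AS
$BODY$
DECLARE
cy integer := EXTRACT (YEAR FROM current_date);
BEGIN
FOR m IN mon + 1 .. mon + intv LOOP
    ro := CASE WHEN m > 12 THEN cy + 1 ELSE cy END;
    mi := CASE WHEN m > 12 THEN m - 12 ELSE m END;
    RETURN NEXT;
END LOOP;
END;
$BODY$
LANGUAGE 'plpgsql' VOLATILE;

SELECT * FROM month_year(10, 5);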

--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql

[GENERAL] Efficient data structures and UI for product matrix

Hi!

We wish to provide our users with a simple-to-use web-based processor-selection tool, where a user could select a couple of attribute values and be presented with a list of matching processors. The basis of the required data would be provided by our editors as Excel documents of the following structure:

attribute_1 attribute_2 ...
processor_a some_value some_value ...
processor_b some_value some_value
...

This data would be normalized to the following structure on import:

CREATE TABLE processors
(
id serial NOT NULL,
processor_name text NOT NULL,
CONSTRAINT "processors_pkey" PRIMARY KEY (id)
)WITHOUT OIDS;

CREATE TABLE attributes
(
id serial NOT NULL,
attribute_name text NOT NULL,
CONSTRAINT "attributes_pkey" PRIMARY KEY (id)
)WITHOUT OIDS;

CREATE TABLE processor_attributes
(
processor_id integer NOT NULL,
attribute_id integer NOT NULL,
value_id integer NOT NULL,
CONSTRAINT "pk_processor_attributes" PRIMARY KEY (processor_id, attribute_id, value_id),
CONSTRAINT "fk_processor_id" FOREIGN KEY (processor_id) REFERENCES processors(id) ON UPDATE CASCADE ON DELETE CASCADE,
CONSTRAINT "fk_attribute_id" FOREIGN KEY (attribute_id) REFERENCES attributes(id) ON UPDATE CASCADE ON DELETE CASCADE,
CONSTRAINT "fk_value_id" FOREIGN KEY (value_id) REFERENCES attribute_values(id)
)WITHOUT OIDS;

CREATE TABLE attribute_values
(
id serial NOT NULL,
value text,
attribute_id integer NOT NULL,
CONSTRAINT "attribute_values_pkey" PRIMARY KEY (id),
CONSTRAINT "fk_attribute_id" FOREIGN KEY (attribute_id) REFERENCES attributes(id) ON UPDATE CASCADE ON DELETE CASCADE
)WITHOUT OIDS;

The (web-based) UI should provide a dropdown field for each attribute (none selected by default) and a pageable table with the matching results underneath. The user should be kept from having to find out that there's no match for a selected combination of attribute values, so after each selection the as-yet-unselected dropdown lists must be filtered to show only the still-available attribute values - we intend to use some AJAX functions here. It'd be nice if the UI could be made fully dynamic, that is to say, it should reflect any changes to the number and names of attributes or their available values without any change to the application's code; the latter is in fact a must-have, whereas the number and names of attributes would not change quite as frequently, so moderate changes to the code would be alright there.

Now, has anyone done anything similar recently and could provide some insight? I'd be particularly interested in any solutions involving some sort of de-normalization, views, procedures and suchlike to speed up the dropdown-update process, especially as the number of attributes and the number of legal values per attribute grows. Does anybody know of an example application for this type of problem that we could look to for inspiration?

Kind regards

Markus


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [planet] Adding blog to planetpostgresql.org

Hi Bernd,

Sorry for the delay -- you just landed Planet :)

Cheers, Devrim

On Mon, 2008-07-21 at 13:00 +0200, Bernd Helmle wrote:
> Devrim,
>
> I would like to repeat my request for adding my blog located at
>
> http://psoos.blogspot.com/search/label/PostgreSQL
>
> to planetpostgresql.org. Let me know if there's something missing.
>
> Thanks.
>
--
Devrim GÜNDÜZ
devrim~gunduz.org, devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr
http://www.gunduz.org

Re: [Fwd: [planet] Add blog feed to planetpostgresql.org]

> Can you help add my blog feed from http://blogs.sun.com/robertlor/ to
> planetpostgresql.org?

Hi Robert,

For some reason, I did not get the e-mail above. Magnus forwarded it to
me -- and you are on Planet now.

I also added you to planet-subscribers@lists.planetpostgresql.org

Cheers,

--
Devrim GÜNDÜZ , RHCE
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Managed Services, Shared and Dedicated Hosting
Co-Authors: plPHP, ODBCng - http://www.commandprompt.com/

Re: [HACKERS] unnecessary code in _bt_split

Tom Lane wrote:
> Zdenek Kotala <Zdenek.Kotala@Sun.COM> writes:
>> I found that _bt_split function calls PageGetTempPage, but next call is
>> _bt_page_init which clear all contents anyway. Is there any reason to call
>> PageGetTempPage instead of palloc?
>
> Not violating a perfectly good abstraction?

OK. Abstraction is nice, but what I see in PageGetTempPage is code that
tries to do everything, yet its usability is close to zero. It is used in
only two places, and in each for a different purpose: _bt_split() only
needs an empty temp page allocated, whereas gistplacetopage() is the one
that actually wants the special section copied over.


In my opinion it would be better to have three functions:

PageCreateTempPage - only allocate memory and call PageInit
PageCloneSpecial - copy the special section from the source page
PageRestoreTempPage - no change.


Zdenek

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [GENERAL] bytea encode performance issues

On 2008-08-03 12:12, Sim Zacks wrote:

> SELECT m.message_idnr,k.messageblk
> FROM dbmail_messageblks k
> JOIN dbmail_physmessage p ON k.physmessage_id = p.id
> JOIN dbmail_messages m ON p.id = m.physmessage_id
> WHERE
> mailbox_idnr = 8
> AND status IN (0,1 )
> AND k.is_header = '0'
> GROUP BY m.message_idnr,k.messageblk
> HAVING ENCODE(k.messageblk::bytea,'escape') LIKE '%John%'

What is this encode() for? I think it is not needed and kills
performance, as it needs to copy every message body in memory, possibly
several times.

Why not just "HAVING k.messageblk LIKE '%John%'"?


Try this:

=> \timing

=> create temporary table test as
select
decode(
repeat(
'lorem ipsum dolor sit amet '
||s::text||E'\n'
,1000
),
'escape'
) as a
from generate_series(1,10000) as s;
SELECT
Time: 10063.807 ms

=> select count(*) from test where a like '%John%';
count
-------
0
(1 row)

Time: 1280.973 ms

=> select count(*) from test where encode(a,'escape') like '%John%';
count
-------
0
(1 row)

Time: 5690.097 ms


Without encode() the search is about 5 times faster, and for bigger bytea
values the difference gets even bigger.


Even better:

=> select count(*) from test where position('John' in a) != 0;
count
-------
0
(1 row)

Time: 1098.768 ms
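
Applied to the original dbmail query, that would look roughly like this
(a sketch only: the filter is moved from HAVING to WHERE and position()
is used directly on the bytea column):

SELECT m.message_idnr, k.messageblk
FROM dbmail_messageblks k
JOIN dbmail_physmessage p ON k.physmessage_id = p.id
JOIN dbmail_messages m ON p.id = m.physmessage_id
WHERE mailbox_idnr = 8
  AND status IN (0, 1)
  AND k.is_header = '0'
  AND position('John' IN k.messageblk) <> 0
GROUP BY m.message_idnr, k.messageblk;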

Regards
Tometzky
--
...although Eating Honey was a very good thing to do, there was a
moment just before you began to eat it which was better than when you
were...
Winnie the Pooh

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

[GENERAL] postgres-r patch: autoconf/make problem

hi,

I am trying to compile the postgres-r patch, but ran into problems. It is probably just a simple lack of understanding of the make system on my part. Any help is appreciated.

I got the CVS head for postgres on Jul-31 and applied the Jul-31 patch from here: http://www.postgres-r.org/downloads/. The patch applies fine; no problems. After running autoconf, "./configure --enable-replication" gives me the following warning once the configure script has finished:

$ ./configure --enable-replication
...
configure: WARNING: option ignored: --enable-replication
$

Subsequent compilation by simply typing "make" seems *not* to compile anything in "src/backend/replication" (the compilation as such goes through). When going directly to that directory and typing "make", a few compilation errors appear for the file "local.c"; I attached the output at the end of this e-mail. They might simply be caused by some compilation flags not being set correctly due to the earlier problem.

I'm sure it's just a simple problem of me not specifying some command-line option (the compilation host is RHEL5). So, in hope of a simple answer: am I missing some compilation options?

Markus

PS: the configure output:
$ ./configure --enable-replication
checking build system type... i686-pc-linux-gnu
checking host system type... i686-pc-linux-gnu
checking which template to use... linux
checking whether to build with 64-bit integer date/time support... yes
checking whether NLS is wanted... no
checking for default port number... 5432
checking for block size... 8kB
checking for segment size... 1GB
checking for WAL block size... 8kB
checking for WAL segment size... 16MB
checking for gcc... gcc
checking for C compiler default output file name... a.out
checking whether the C compiler works... yes
checking whether we are cross compiling... no
checking for suffix of executables...
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking if gcc supports -Wdeclaration-after-statement... yes
checking if gcc supports -Wendif-labels... yes
checking if gcc supports -fno-strict-aliasing... yes
checking if gcc supports -fwrapv... yes
checking whether the C compiler still works... yes
checking how to run the C preprocessor... gcc -E
checking allow thread-safe client libraries... no
checking whether to build with Tcl... no
checking whether to build Perl modules... no
checking whether to build Python modules... no
checking whether to build with GSSAPI support... no
checking whether to build with Kerberos 5 support... no
checking whether to build with PAM support... no
checking whether to build with LDAP support... no
checking whether to build with Bonjour support... no
checking whether to build with OpenSSL support... no
checking for grep that handles long lines and -e... /bin/grep
checking for egrep... /bin/grep -E
checking for ld used by GCC... /usr/bin/ld
checking if the linker (/usr/bin/ld) is GNU ld... yes
checking for ranlib... ranlib
checking for strip... strip
checking whether it is possible to strip libraries... yes
checking for tar... /bin/tar
checking whether ln -s works... yes
checking for gawk... gawk
checking for bison... bison -y
configure: using bison (GNU Bison) 2.3
checking for flex... /usr/bin/flex
configure: using /usr/bin/flex version 2.5.4
checking for perl... /usr/bin/perl
checking for main in -lm... yes
checking for library containing setproctitle... no
checking for library containing dlopen... -ldl
checking for library containing socket... none required
checking for library containing shl_load... no
checking for library containing getopt_long... none required
checking for library containing crypt... -lcrypt
checking for library containing fdatasync... none required
checking for library containing shmget... none required
checking for -lreadline... yes (-lreadline -ltermcap)
checking for inflate in -lz... yes
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking crypt.h usability... yes
checking crypt.h presence... yes
checking for crypt.h... yes
checking dld.h usability... no
checking dld.h presence... no
checking for dld.h... no
checking fp_class.h usability... no
checking fp_class.h presence... no
checking for fp_class.h... no
checking getopt.h usability... yes
checking getopt.h presence... yes
checking for getopt.h... yes
checking ieeefp.h usability... no
checking ieeefp.h presence... no
checking for ieeefp.h... no
checking langinfo.h usability... yes
checking langinfo.h presence... yes
checking for langinfo.h... yes
checking poll.h usability... yes
checking poll.h presence... yes
checking for poll.h... yes
checking pwd.h usability... yes
checking pwd.h presence... yes
checking for pwd.h... yes
checking sys/ipc.h usability... yes
checking sys/ipc.h presence... yes
checking for sys/ipc.h... yes
checking sys/poll.h usability... yes
checking sys/poll.h presence... yes
checking for sys/poll.h... yes
checking sys/pstat.h usability... no
checking sys/pstat.h presence... no
checking for sys/pstat.h... no
checking sys/resource.h usability... yes
checking sys/resource.h presence... yes
checking for sys/resource.h... yes
checking sys/select.h usability... yes
checking sys/select.h presence... yes
checking for sys/select.h... yes
checking sys/sem.h usability... yes
checking sys/sem.h presence... yes
checking for sys/sem.h... yes
checking sys/socket.h usability... yes
checking sys/socket.h presence... yes
checking for sys/socket.h... yes
checking sys/shm.h usability... yes
checking sys/shm.h presence... yes
checking for sys/shm.h... yes
checking sys/tas.h usability... no
checking sys/tas.h presence... no
checking for sys/tas.h... no
checking sys/time.h usability... yes
checking sys/time.h presence... yes
checking for sys/time.h... yes
checking sys/un.h usability... yes
checking sys/un.h presence... yes
checking for sys/un.h... yes
checking termios.h usability... yes
checking termios.h presence... yes
checking for termios.h... yes
checking utime.h usability... yes
checking utime.h presence... yes
checking for utime.h... yes
checking wchar.h usability... yes
checking wchar.h presence... yes
checking for wchar.h... yes
checking wctype.h usability... yes
checking wctype.h presence... yes
checking for wctype.h... yes
checking kernel/OS.h usability... no
checking kernel/OS.h presence... no
checking for kernel/OS.h... no
checking kernel/image.h usability... no
checking kernel/image.h presence... no
checking for kernel/image.h... no
checking SupportDefs.h usability... no
checking SupportDefs.h presence... no
checking for SupportDefs.h... no
checking netinet/in.h usability... yes
checking netinet/in.h presence... yes
checking for netinet/in.h... yes
checking for netinet/tcp.h... yes
checking readline/readline.h usability... yes
checking readline/readline.h presence... yes
checking for readline/readline.h... yes
checking readline/history.h usability... yes
checking readline/history.h presence... yes
checking for readline/history.h... yes
checking zlib.h usability... yes
checking zlib.h presence... yes
checking for zlib.h... yes
checking whether byte ordering is bigendian... no
checking for an ANSI C-conforming const... yes
checking for inline... inline
checking for preprocessor stringizing operator... yes
checking for signed types... yes
checking for working volatile... yes
checking for __func__... yes
checking whether struct tm is in sys/time.h or time.h... time.h
checking for struct tm.tm_zone... yes
checking for tzname... yes
checking for union semun... no
checking for struct sockaddr_un... yes
checking for struct sockaddr_storage... yes
checking for struct sockaddr_storage.ss_family... yes
checking for struct sockaddr_storage.__ss_family... no
checking for struct sockaddr_storage.ss_len... no
checking for struct sockaddr_storage.__ss_len... no
checking for struct sockaddr.sa_len... no
checking for struct addrinfo... yes
checking for struct cmsgcred... no
checking for struct fcred... no
checking for struct sockcred... no
checking for struct option... yes
checking for z_streamp... yes
checking for int timezone... yes
checking types of arguments for accept()... int, int, struct sockaddr *, size_t *
checking whether gettimeofday takes only one argument... no
checking for cbrt... yes
checking for dlopen... yes
checking for fcvt... yes
checking for fdatasync... yes
checking for getpeereid... no
checking for getrlimit... yes
checking for memmove... yes
checking for poll... yes
checking for pstat... no
checking for readlink... yes
checking for setproctitle... no
checking for setsid... yes
checking for sigprocmask... yes
checking for symlink... yes
checking for sysconf... yes
checking for towlower... yes
checking for utime... yes
checking for utimes... yes
checking for waitpid... yes
checking for wcstombs... yes
checking whether fdatasync is declared... yes
checking whether posix_fadvise is declared... yes
checking whether strlcat is declared... no
checking whether strlcpy is declared... no
checking whether F_FULLFSYNC is declared... no
checking for struct sockaddr_in6... yes
checking for PS_STRINGS... no
checking for snprintf... yes
checking for vsnprintf... yes
checking whether snprintf is declared... yes
checking whether vsnprintf is declared... yes
checking for isinf... yes
checking for crypt... yes
checking for getopt... yes
checking for getrusage... yes
checking for inet_aton... yes
checking for random... yes
checking for rint... yes
checking for srandom... yes
checking for strdup... yes
checking for strerror... yes
checking for strlcat... no
checking for strlcpy... no
checking for strtol... yes
checking for strtoul... yes
checking for unsetenv... yes
checking for getaddrinfo... yes
checking for getopt_long... yes
checking for rl_completion_append_character... yes
checking for rl_completion_matches... yes
checking for rl_filename_completion_function... yes
checking for replace_history_entry... yes
checking for sigsetjmp... yes
checking whether sys_siglist is declared... yes
checking for syslog... yes
checking syslog.h usability... yes
checking syslog.h presence... yes
checking for syslog.h... yes
checking for optreset... no
checking for strtoll... yes
checking for strtoull... yes
checking for atexit... yes
checking for fseeko... yes
checking for _LARGEFILE_SOURCE value needed for large files... no
checking test program... ok
checking whether long int is 64 bits... no
checking whether long long int is 64 bits... yes
checking snprintf format for long long int... %lld
checking for unsigned long... yes
checking size of unsigned long... 4
checking for size_t... yes
checking size of size_t... 4
checking whether to build with float4 passed by value... yes
checking whether to build with float8 passed by value... no
checking for short... yes
checking alignment of short... 2
checking for int... yes
checking alignment of int... 4
checking for long... yes
checking alignment of long... 4
checking for long long int... yes
checking alignment of long long int... 4
checking for double... yes
checking alignment of double... 4
checking for int8... no
checking for uint8... no
checking for int64... no
checking for uint64... no
checking for sig_atomic_t... yes
checking for POSIX signal interface... yes
checking for special C compiler options needed for large files... no
checking for _FILE_OFFSET_BITS value needed for large files... 64
checking for off_t... yes
checking size of off_t... 8
checking for working memcmp... yes
checking for onsgmls... onsgmls
checking for openjade... openjade
checking for DocBook V4.2... yes
checking for DocBook stylesheets... /usr/share/sgml/docbook/dsssl-stylesheets
checking for collateindex.pl... /usr/bin/collateindex.pl
checking for sgmlspl... sgmlspl
checking if gcc supports -Wl,--as-needed... no
configure: using CFLAGS=-O2 -Wall -Wmissing-prototypes -Wpointer-arith -Winline -Wdeclaration-after-statement -Wendif-labels -fno-strict-aliasing -fwrapv
configure: using CPPFLAGS= -D_GNU_SOURCE
configure: using LDFLAGS=
configure: creating ./config.status
config.status: creating GNUmakefile
config.status: creating src/Makefile.global
config.status: creating src/include/pg_config.h
config.status: creating src/interfaces/ecpg/include/ecpg_config.h
config.status: linking ./src/backend/port/tas/dummy.s to src/backend/port/tas.s
config.status: linking ./src/backend/port/dynloader/linux.c to src/backend/port/dynloader.c
config.status: linking ./src/backend/port/sysv_sema.c to src/backend/port/pg_sema.c
config.status: linking ./src/backend/port/sysv_shmem.c to src/backend/port/pg_shmem.c
config.status: linking ./src/backend/port/dynloader/linux.h to src/include/dynloader.h
config.status: linking ./src/include/port/linux.h to src/include/pg_config_os.h
config.status: linking ./src/makefiles/Makefile.linux to src/Makefile.port
configure: WARNING: option ignored: --enable-replication
$

PS2:
$ pwd
/home/ml/pg/pgsql/src/backend/replication
$ make
gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Winline -Wdeclaration-after-statement -Wendif-labels -fno-strict-aliasing -fwrapv -I../../../src/include -D_GNU_SOURCE -c -o gc_utils.o gc_utils.c
gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Winline -Wdeclaration-after-statement -Wendif-labels -fno-strict-aliasing -fwrapv -I../../../src/include -D_GNU_SOURCE -c -o gc_egcs.o gc_egcs.c
gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Winline -Wdeclaration-after-statement -Wendif-labels -fno-strict-aliasing -fwrapv -I../../../src/include -D_GNU_SOURCE -c -o gc_ensemble.o gc_ensemble.c
gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Winline -Wdeclaration-after-statement -Wendif-labels -fno-strict-aliasing -fwrapv -I../../../src/include -D_GNU_SOURCE -c -o gc_spread.o gc_spread.c
gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Winline -Wdeclaration-after-statement -Wendif-labels -fno-strict-aliasing -fwrapv -I../../../src/include -D_GNU_SOURCE -c -o cset.o cset.c
gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Winline -Wdeclaration-after-statement -Wendif-labels -fno-strict-aliasing -fwrapv -I../../../src/include -D_GNU_SOURCE -c -o local.o local.c
local.c: In function 'send_backend_ready_msg':
local.c:66: error: 'ReplicationManagerPid' undeclared (first use in this function)
local.c:66: error: (Each undeclared identifier is reported only once
local.c:66: error: for each function it appears in.)
local.c: In function 'send_cset':
local.c:101: error: 'ReplicationManagerPid' undeclared (first use in this function)
local.c: In function 'StartupReplication':
local.c:198: error: 'ReplicationManagerPid' undeclared (first use in this function)
local.c: In function 'replication_request_sequence_increment':
local.c:281: error: 'ReplicationManagerPid' undeclared (first use in this function)
local.c: In function 'cset_replicate':
local.c:409: error: 'PGPROC' has no member named 'abortFlag'
local.c:416: error: 'PGPROC' has no member named 'abortFlag'
local.c:444: error: 'PGPROC' has no member named 'abortFlag'
local.c:448: error: 'PGPROC' has no member named 'abortFlag'
local.c:482: error: 'PGPROC' has no member named 'abortFlag'
make: *** [local.o] Error 1
$

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

[COMMITTERS] pgbouncer - pgbouncer: proper log message for console client cancel

Log Message:
-----------
proper log message for console client cancel

Modified Files:
--------------
pgbouncer/src:
objects.c (r1.48 -> r1.49)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgbouncer/pgbouncer/src/objects.c.diff?r1=1.48&r2=1.49)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

[COMMITTERS] pgbouncer - pgbouncer: cleaner socket_row()

Log Message:
-----------
cleaner socket_row()

Modified Files:
--------------
pgbouncer/src:
admin.c (r1.36 -> r1.37)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgbouncer/pgbouncer/src/admin.c.diff?r1=1.36&r2=1.37)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

[COMMITTERS] pgtcl - libpgtcl: Add clock_to_precise_sql_time

Log Message:
-----------
Add clock_to_precise_sql_time

Modified Files:
--------------
libpgtcl/playpen/pghelpers:
postgres-helpers.README (r1.1 -> r1.2)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgtcl/libpgtcl/playpen/pghelpers/postgres-helpers.README.diff?r1=1.1&r2=1.2)
postgres-helpers.tcl (r1.1 -> r1.2)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgtcl/libpgtcl/playpen/pghelpers/postgres-helpers.tcl.diff?r1=1.1&r2=1.2)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

Re: [pgadmin-hackers] First public pre-alpha release of GQB (Graphical Query Builder) for pgAdmin

Hi Luis

On Sun, Jul 27, 2008 at 6:13 AM, Luis Ochoa <ziul1979@gmail.com> wrote:
>
> Where is the patch located?
> http://svn.assembla.com/svn/vsqlbuilder/Jul/27/prealpha-test-gqb-july-27.patch

I'm just going to list all the issues I found here so you can work
through them easily. I'm testing on a Mac today, and feeling
particularly pedantic :-p.

- The patch failed to apply to frmQuery.h. During manual application, I
found that you have app headers in between wx headers (wx headers
should always come first), and a wx header quoted with " " instead of
< >.

- Some compilation warnings:

./frm/frmQuery.cpp: In constructor 'frmQuery::frmQuery(frmMain*, const
wxString&, pgConn*, const wxString&, const wxString&)':
./frm/frmQuery.cpp:368: warning: unused variable 'view'
./frm/frmQuery.cpp: In constructor 'frmQuery::frmQuery(frmMain*, const
wxString&, pgConn*, const wxString&, const wxString&)':
./frm/frmQuery.cpp:368: warning: unused variable 'view'
./frm/frmQuery.cpp: In member function 'void
frmQuery::OnChangeConnection(wxCommandEvent&)':
./frm/frmQuery.cpp:881: warning: cannot pass objects of non-POD type
'class wxString' through '...'; call will abort at runtime
./frm/frmQuery.cpp: In member function 'void
frmQuery::OnTest3(wxNotebookEvent&)':
./frm/frmQuery.cpp:1027: warning: cannot pass objects of non-POD type
'class wxString' through '...'; call will abort at runtime
./frm/frmQuery.cpp: In member function 'void
frmQuery::OnChangeConnection(wxCommandEvent&)':
./frm/frmQuery.cpp:881: warning: cannot pass objects of non-POD type
'class wxString' through '...'; call will abort at runtime
./frm/frmQuery.cpp: In member function 'void
frmQuery::OnTest3(wxNotebookEvent&)':
./frm/frmQuery.cpp:1027: warning: cannot pass objects of non-POD type
'class wxString' through '...'; call will abort at runtime

(The last four *will* cause crashes - normally you just add .c_str()
to any wxString arguments passed to variadic functions).

<hits brick wall>

Following that, I ran into the same errors as Guillaume. Please supply
an updated patch that uses only wxWidgets controls (using GTK classes
definitely isn't going to work on the Mac).

--
Dave Page
EnterpriseDB UK: http://www.enterprisedb.com

--
Sent via pgadmin-hackers mailing list (pgadmin-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgadmin-hackers