Friday, August 1, 2008

Re: [HACKERS] Re: [Pljava-dev] Should creating a new base type require superuser status?

Tom Lane wrote:
> Thomas Hallgren <thomas@tada.se> writes:
>
>> Tom, could you please elaborate where you see a security hole?
>>
>
> The problem that we've seen in the past shows up when the user lies in
> the CREATE TYPE command, specifying type representation properties that
> are different from what the underlying functions expect. In particular,
> if it's possible to pass a pass-by-value integer to a function
> that's expecting a pass-by-reference datum, you can misuse the function
> to access backend memory.
>
>
This is a non-issue in PL/Java. An integer parameter is never passed by
reference and there's no way the PL/Java user can get direct access to
backend memory.

> I gather from looking at the example that Kris referenced that there's
> some interface code in between the SQL function call and the user's Java
> code, and that that interface code is itself looking at the declared
> properties of the SQL type to decide what to do. So to the extent that
> that code is (a) bulletproof against inconsistencies and (b) not
> subvertible by the PL/Java user, it might be that there's no hole in
> practice. But assumption (b) seems pretty fragile to me.
>
>
I think that assumption is without ground. Java doesn't permit you to
access memory unless you use classes (the java.nio stuff) that are
explicitly designed to do that, and you need native code to set such
things up. A PL/Java user cannot do that unless he is able to link
other shared objects or DLLs into the backend process.

Based on that, I claim that your statement about a "security hole a mile
wide" is incorrect. PL/Java is not subject to issues relating to misuse
of backend memory.

Regards,
Thomas Hallgren


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [GENERAL] How to remove duplicate lines but save one of the lines?

A B wrote:
> I have a table with rows like this
> A 1
> A 1
> B 3
> B 3
> C 44
> C 44
> and so on.
>
> and I want it to be
> A 1
> B 3
> C 44
>
> so how can I remove the all the duplicate lines but one?
>
>
Try this. Suppose, for example, that your table has this structure:

create table yourtable(campo1 char, num integer);

select * from yourtable;

sicodelico=# select * from yourtable ;
campo1 | num
--------+-----
A | 1
A | 1
B | 3
B | 3
C | 44
C | 44
(6 filas)

sicodelico=#


1) create temp sequence foo_id_seq start with 1;

2) alter table yourtable add column id integer;

3) update yourtable set id = nextval('foo_id_seq');

look at this:

sicodelico=# select * from yourtable ;
campo1 | num | id
--------+-----+----
A | 1 | 1
A | 1 | 2
B | 3 | 3
B | 3 | 4
C | 44 | 5
C | 44 | 6
(6 filas)


4) delete from yourtable where campo1 in (select y.campo1 from yourtable
y where yourtable.id > y.id);

sicodelico=# select * from yourtable;
campo1 | num | id
--------+-----+----
A | 1 | 1
B | 3 | 3
C | 44 | 5
(3 filas)

5) alter table yourtable drop column id;

sicodelico=# select * from yourtable;
campo1 | num
--------+-----
A | 1
B | 3
C | 44
(3 filas)


have a lot of fun :)
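
For what it's worth, a shorter alternative is to rebuild the table from its
distinct rows (a sketch that assumes yourtable really has only these two
columns and that nothing references it):

BEGIN;
CREATE TEMP TABLE yourtable_dedup AS
    SELECT DISTINCT campo1, num FROM yourtable;
DELETE FROM yourtable;
INSERT INTO yourtable SELECT campo1, num FROM yourtable_dedup;
DROP TABLE yourtable_dedup;
COMMIT;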

--
Regards,

Julio Cesar Sánchez González.

--
Ahora me he convertido en la muerte, destructora de mundos.
Soy la Muerte que se lleva todo, la fuente de las cosas que vendran.

www.sistemasyconectividad.com.mx http://darkavngr.blogspot.com/


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

[GENERAL] why so many errors when I load the data into a database from a script generated by pg_dump

hi, all:
I have a database that stores the HTML source of web pages.
I want to move the data to another database, so I pg_dump the data to a
file:
/usr/local/pgsql/bin/pg_dump htmldb -Upostgres -p 5433 > /tmp/dump.sql
Now I load the data into the new database:
/usr/local/pgsql/bin/psql -d newbd -Upostgres -p 5432 -f /tmp/dump.sql

but I get some errors about missing column data, for example:
----
psql:/export/chry.sql:1965134: ERROR: missing data for column "content"
CONTEXT: COPY htmlcontent, line 312807: "1207327 <!DOCTYPE HTML PUBLIC
"-//W3C//DTD
HTML 4.0 Transitional//EN">\n<html>\n<head>\n<meta http-e..."
----
I don't know why I can store this data in one database but can't dump it
and reload it into another one.

How can I fix it?

my postgresql version: 8.3.3
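
(Not an answer to the COPY error itself, but one thing that may be worth
trying is pg_dump's custom format together with pg_restore, so the data is
not re-read as a psql script; this is only a sketch, reusing the same
connection options as above:)

/usr/local/pgsql/bin/pg_dump htmldb -Upostgres -p 5433 -Fc -f /tmp/dump.custom
/usr/local/pgsql/bin/pg_restore -d newbd -Upostgres -p 5432 /tmp/dump.custom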

thanks,

regards,


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [ODBC] compiling dlg_specific.c file on Sun Solaris 5.8

Marcin Ligorowski wrote:
> I've detected a problem compiling the dlg_specific.c file on Sun Solaris 5.8 using the compiler 'cc: Sun C 5.8 2005/10/13'.
> To resolve this problem I've changed line 896 to
> qlog(" conn_settings='%s',conn_encoding='%s'\n",
> ci->conn_settings, enc ? enc : (UCHAR*)"(null)");
>
> I've added a cast to (UCHAR*), because the compiler cannot automatically cast to the correct type.

OK, I will take care of it.
Thanks.

regards,
Hiroshi Inoue

--
Sent via pgsql-odbc mailing list (pgsql-odbc@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-odbc

Re: [NOVICE] OIDS question

Frank Bax <fbax@sympatico.ca> writes:
> While playing with this I noticed that when using the -c and -o options of
> pg_dump together, I get OIDs in COPY statements, but not in the CREATE
> commands. Either this is a bug, or I'm missing something here...

Please provide some details ... AFAICS the COPY and CREATE TABLE cases
are testing the same conditions.

regards, tom lane

--
Sent via pgsql-novice mailing list (pgsql-novice@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-novice

Re: [JDBC] Encoding issues

Oliver Jowett <oliver@opencloud.com> writes:
> Jeff Davis wrote:
>> or at least blindly pass the bytes on to the server?

> What bytes? You have a bunch of UTF-16 characters (possibly with
> surrogate pairs etc). What encoding do you use to turn that into a
> bytestream?

It wouldn't be entirely unreasonable to define the answer as "UTF-8".
That would at least provide serviceable behavior to a goodly group of
users, whereas the current implementation seems guaranteed to fail
for everyone (other than us ASCII-only Neanderthals who don't care
anyway...)

regards, tom lane

--
Sent via pgsql-jdbc mailing list (pgsql-jdbc@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-jdbc

Re: [HACKERS] WITH RECUSIVE patches 0723

Andrew Gierth <andrew@tao11.riddles.org.uk> writes:
> One more oversight: the patch isn't updating the ECPG preproc.y other
> than trivially, so WITH queries aren't working in ecpg:

> test.pgc:111: ERROR: syntax error at or near "recursive"

By and large we don't expect core patches to worry about fixing the ecpg
grammar. Right now the situation is that Michael Meskes makes a manual
cleanup pass every so often :-(. I would like to see that get automated
sooner rather than later, but in the near term it is not the
responsibility of core-patch authors to update ecpg.

regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [PATCHES] [HACKERS] Hint Bits and Write I/O

Alvaro Herrera <alvherre@commandprompt.com> writes:
> I think it makes sense to commit this patch now, per previous
> discussions on which we have agreed to make incremental changes.

Yeah, but at the same time there is merit in the argument that the
proposed patch hasn't actually been proven to be usable for anything.
I would be a lot happier if there were even a trivial proof-of-concept
plugin example submitted with it, just to prove that there were no
showstopper problems in the plugin design, like failure to pass
essential information or not getting the locking straight.

> I'm just wondering if the change of usage_count from 16 to 8 bits was
> discussed and agreed?

Umm ... it was not, but given that we have logic in there to limit the
usage_count to 5 or so, it's hard to argue that there's a big problem.

I confess to not having read the patch in detail --- where did the other
8 bits go to?

regards, tom lane

--
Sent via pgsql-patches mailing list (pgsql-patches@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-patches

Re: [GENERAL] hibernate nativequery and uuid

oops, forgot to cc the mailing list again...

Andrew wrote:
> I know none of this relates directly to postgresql and on reflection
> is probably more appropriate for the hibernate forums. So apologies
> for having raised the topic here. Also, thanks for the suggestions
> that I have received on the topic.
>
> I have got the UNION working in a view via JPA. However, it was not
> straightforward and the approach that I have taken is not that
> efficient. So for those who are interested in some of the
> implementation details...
>
> Previously to get hibernate to use the postgresql uuid, I had to
> create a custom hibernate UserType as well as extend the JDBC
> postgresql driver. I then referenced the custom data type and a
> custom UUID generator that I had created using JPA/hibernate
> annotations in the entities.
>
> However, the hibernate validation of views appears to be different
> than with tables, so after creating the corresponding view entity, on
> restarting the application server, I was getting the "No Dialect
> mapping for JDBC type: 1111" exception, despite the view entity being
> configured just like the table entities. So I had to change my
> META-INF/persistence.xml to reference my extended JDBC postgresql
> driver, rather than the original JDBC driver. For good measure, I
> also did the same to my hibernate-console.properties file as part of
> my eclipse environment.
>
> This addressed the 1111 exception. But then I was hit with a
> "javax.persistence.PersistenceException:
> org.hibernate.HibernateException: Missing table: my_view" exception,
> as there is an outstanding defect in hibernate related to how it
> treats views. See
> http://opensource.atlassian.com/projects/hibernate/browse/HHH-1329.
> Supposedly it was fixed in version 3.2.6, but I'm using version 3.2.6
> GA and it is patently still an issue. So the only way around this at
> present is unfortunately to turn off validation with the
> hibernate.hbm2ddl.auto property in the META-INF/persistence.xml file.
>
> But after all of that, I can now use JPA entities in EJBQL's to use a
> view containing postgresql uuid's. Nothing like speaking gibberish
> with an overuse of acronyms :-)
>
> Cheers,
>
> Andy
>
>
> A.M. wrote:
>>
>> Could you use a view to hide the UNION?
>>
>> Cheers,
>> M
>>
>
>


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [JDBC] Encoding issues

Jeff Davis wrote:

> But when someone _does_ use non-ASCII database names, etc., shouldn't we
> produce some kind of useful error,

That's fair enough.

> or at least blindly pass the bytes on
> to the server?

What bytes? You have a bunch of UTF-16 characters (possibly with
surrogate pairs etc). What encoding do you use to turn that into a
bytestream?

-O

--
Sent via pgsql-jdbc mailing list (pgsql-jdbc@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-jdbc

Re: [PATCHES] pg_dump additional options for performance

tgl@sss.pgh.pa.us (Tom Lane) writes:
> Simon Riggs <simon@2ndquadrant.com> writes:
>> I want to dump tables separately for performance reasons. There are
>> documented tests showing 100% gains using this method. There is no gain
>> adding this to pg_restore. There is a gain to be had - parallelising
>> index creation, but this patch doesn't provide parallelisation.
>
> Right, but the parallelization is going to happen sometime, and it is
> going to happen in the context of pg_restore. So I think it's pretty
> silly to argue that no one will ever want this feature to work in
> pg_restore.

"Never" is a long time, agreed.

> To extend the example I just gave to Stephen, I think a fairly probable
> scenario is where you only need to tweak some "before" object
> definitions, and then you could do
>
> pg_restore --schema-before-data whole.dump >before.sql
> edit before.sql
> psql -f before.sql target_db
> pg_restore --data-only --schema-after-data -d target_db whole.dump
>
> which (given a parallelizing pg_restore) would do all the time-consuming
> steps in a fully parallelized fashion.

Do we need to wait until a fully-parallelizing pg_restore is
implemented before adding this functionality to pg_dump?

The particular extension I'm interested in from pg_dump, here, is the
ability to dump multiple tables concurrently. I've got disk arrays
with enough I/O bandwidth that this form of parallelization does
provide a performance benefit.

The result of that will be that *many* files are generated, and I
don't imagine we want to change pg_restore to try to make it read from
multiple files concurrently.

Further, it's actually not obvious that we *necessarily* care about
parallelizing loading data. The thing that happens every day is
backups. I care rather a lot about optimizing that; we do backups
each and every day, and optimizations to that process will accrue
benefits each and every day.

In contrast, restoring databases does not take place every day. When
it happens, yes, there's considerable value to making *that* go as
quickly as possible, but I'm quite willing to consider optimizing that
to be separate from optimizing backups.

I daresay I haven't used pg_restore any time recently, either. Any
time we have thought about using it, we've concluded that the
perceivable benefits were actually more of a mirage.
--
select 'cbbrowne' || '@' || 'linuxfinances.info';
http://cbbrowne.com/info/lsf.html
Rules of the Evil Overlord #145. "My dungeon cell decor will not
feature exposed pipes. While they add to the gloomy atmosphere, they
are good conductors of vibrations and a lot of prisoners know Morse
code." <http://www.eviloverlord.com/>

--
Sent via pgsql-patches mailing list (pgsql-patches@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-patches

Re: [HACKERS] So, what's the "base dn" in an LDAP URL again?

Tom Lane wrote:
> The fine manual claims that the "base dn" part of an LDAP URL
> is meaningful:
>
> The server will bind to the distinguished name specified as base
> dn using the user name supplied by the client. If prefix and
> suffix is specified, it will be prepended and appended to the
> user name before the bind.
>
> But looking at CheckLDAPAuth() just now, it doesn't do anything at all
> with the basedn part of the string. Seems to me this is either a code
> bug or a docs bug.

I think it's a docs bug. You don't "bind to the dn...". You bind *with*
a DN, and that one is made out of <prefix><username><suffix>.

IIRC, my original intent was for it to bind using that and then attempt
to access the location specified by basedn, so one could set permissions
on that object. But I never did implement that - and even if I did, the
docs would still be wrong.

So, the docs should be fixed - I'll take a look at that.

It does mean that basedn isn't used, and could be removed. But we're
obviously not going to do that in a backbranch, since it'd change the
syntax. As for HEAD, I'd leave it in as well, since the changes I'm
working on for pg_hba parameters will likely make the syntax change
anyway - and there's no point in doing it twice. Seems fair?

//Magnus


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[pgsql-es-ayuda] Postgresql y web services

Has anyone tried, or does anyone have any reference on, how to consume
web services directly from Postgres, if that is even possible? I need a
stored procedure to access a web service and get a response back.

Regards, Alejandro
--
TIP 3: If you found the answer to your problem, post it; others will thank you

Re: [GENERAL] Postgresql not using an index

2008/8/1 Marc Cuypers <m.cuypers@mgvd.be>:
> Hi Pavel,
>
> Pavel Stehule schreef:
>>
>> 2008/8/1 Marc Cuypers <m.cuypers@mgvd.be>:
>>>
>>> Hi Pavel,
>>>
>>> Isn't the text for the Verbose Explain analyze not enough?
>>> Is not, how can i generate it?
>>>
>>> --
>>
>>
>> no, I am missing statistics info
>>
>> try
>> EXPLAIN ANALYZE SELECT .....
>>
>> regards
>
> I entered the command in pgsql and got the following output:
>
> # explain analyze select * from file where jobid=2792;
> QUERY PLAN
> ------------------------------------------------------------------------------------------------------------------
> Seq Scan on file (cost=0.00..707683.30 rows=207562 width=110) (actual
> time=37738.780..90453.299 rows=1 loops=1)
> Filter: (jobid = 2792)
> Total runtime: 90453.419 ms
> (3 rows)
>
> Makes any sence?


Your statistics are absolutely out of date - the planner expects 207K rows but
the actual result is one row. Try running ANALYZE, or increase your
statistics target:
http://www.postgresql.org/docs/8.0/interactive/sql-altertable.html
look at ALTER TABLE SET STATISTICS

regards
Pavel
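
For the record, a minimal sketch of both suggestions above, using the table
and column names from the query (the statistics target of 100 is just an
illustrative value):

ANALYZE file;
-- or raise the per-column statistics target first, then re-analyze:
ALTER TABLE file ALTER COLUMN jobid SET STATISTICS 100;
ANALYZE file;
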
>
> --
> Marc
>
>>>
>>> Marc
>>>
>>> Pavel Stehule schreef:
>>>>
>>>> Hello
>>>>
>>>> please, send EXPLAIN ANALYZE output.
>>>>
>>>> regards
>>>> Pavel Stehule
>>>>
>>>> 2008/8/1 Marc Cuypers <m.cuypers@mgvd.be>:
>>>>>
>>>>> Hi,
>>>>>
>>>>> I'm using postgres 7.4 and bacula 1.38 on debian.
>>>>>
>>>>> In the bacula database there is a table named 'file' which has about
>>>>> 2.5
>>>>> million rows.
>>>>> In this table there is a field 'jobid' which is indexed.
>>>>> The index is created with the following command:
>>>>> CREATE INDEX file_jobid_idx ON file USING btree (jobid);
>>>>>
>>>>> The query:
>>>>> SELECT * from file where jobid=2792
>>>>>
>>>>> does a full scan and to my opinion it doesn't use the index.
>>>>> I already did a VACUUM ANALYZE on the database.
>>>>>
>>>>>
>>>>> Somebody an idea?
>>>>>
>>>>> EXPLAIN tells the following:
>>>>> Seq Scan on file (cost=0.00..707683.30 rows=207562 width=110) (actual
>>>>> time=103215.145..161153.664 rows=1 loops=1)
>>>>> Filter: (jobid = 2792)
>>>>> Total runtime: 161154.734 ms
>>>>>
>>>>> The Verbose Explain tells the following:
>>>>> {SEQSCAN
>>>>> :startup_cost 0.00
>>>>> :total_cost 707683.30
>>>>> :plan_rows 207562
>>>>> :plan_width 110
>>>>> :targetlist (
>>>>> {TARGETENTRY
>>>>> :resdom
>>>>> {RESDOM
>>>>> :resno 1
>>>>> :restype 23
>>>>> :restypmod -1
>>>>> :resname fileid
>>>>> :ressortgroupref 0
>>>>> :resorigtbl 440806231
>>>>> :resorigcol 1
>>>>> :resjunk false
>>>>> }
>>>>>
>>>>> :expr
>>>>> {VAR
>>>>> :varno 1
>>>>> :varattno 1
>>>>> :vartype 23
>>>>> :vartypmod -1
>>>>> :varlevelsup 0
>>>>> :varnoold 1
>>>>> :varoattno 1
>>>>> }
>>>>> }
>>>>>
>>>>> {TARGETENTRY
>>>>> :resdom
>>>>> {RESDOM
>>>>> :resno 2
>>>>> :restype 23
>>>>> :restypmod -1
>>>>> :resname fileindex
>>>>> :ressortgroupref 0
>>>>> :resorigtbl 440806231
>>>>> :resorigcol 2
>>>>> :resjunk false
>>>>> }
>>>>>
>>>>> :expr
>>>>> {VAR
>>>>> :varno 1
>>>>> :varattno 2
>>>>> :vartype 23
>>>>> :vartypmod -1
>>>>> :varlevelsup 0
>>>>> :varnoold 1
>>>>> :varoattno 2
>>>>> }
>>>>> }
>>>>>
>>>>> {TARGETENTRY
>>>>> :resdom
>>>>> {RESDOM
>>>>> :resno 3
>>>>> :restype 23
>>>>> :restypmod -1
>>>>> :resname jobid
>>>>> :ressortgroupref 0
>>>>> :resorigtbl 440806231
>>>>> :resorigcol 3
>>>>> :resjunk false
>>>>> }
>>>>>
>>>>> :expr
>>>>> {VAR
>>>>> :varno 1
>>>>> :varattno 3
>>>>> :vartype 23
>>>>> :vartypmod -1
>>>>> :varlevelsup 0
>>>>> :varnoold 1
>>>>> :varoattno 3
>>>>> }
>>>>> }
>>>>>
>>>>> {TARGETENTRY
>>>>> :resdom
>>>>> {RESDOM
>>>>> :resno 4
>>>>> :restype 23
>>>>> :restypmod -1
>>>>> :resname pathid
>>>>> :ressortgroupref 0
>>>>> :resorigtbl 440806231
>>>>> :resorigcol 4
>>>>> :resjunk false
>>>>> }
>>>>>
>>>>> :expr
>>>>> {VAR
>>>>> :varno 1
>>>>> :varattno 4
>>>>> :vartype 23
>>>>> :vartypmod -1
>>>>> :varlevelsup 0
>>>>> :varnoold 1
>>>>> :varoattno 4
>>>>> }
>>>>> }
>>>>>
>>>>> {TARGETENTRY
>>>>> :resdom
>>>>> {RESDOM
>>>>> :resno 5
>>>>> :restype 23
>>>>> :restypmod -1
>>>>> :resname filenameid
>>>>> :ressortgroupref 0
>>>>> :resorigtbl 440806231
>>>>> :resorigcol 5
>>>>> :resjunk false
>>>>> }
>>>>>
>>>>> :expr
>>>>> {VAR
>>>>> :varno 1
>>>>> :varattno 5
>>>>> :vartype 23
>>>>> :vartypmod -1
>>>>> :varlevelsup 0
>>>>> :varnoold 1
>>>>> :varoattno 5
>>>>> }
>>>>> }
>>>>>
>>>>> {TARGETENTRY
>>>>> :resdom
>>>>> {RESDOM
>>>>> :resno 6
>>>>> :restype 23
>>>>> :restypmod -1
>>>>> :resname markid
>>>>> :ressortgroupref 0
>>>>> :resorigtbl 440806231
>>>>> :resorigcol 6
>>>>> :resjunk false
>>>>> }
>>>>>
>>>>> :expr
>>>>> {VAR
>>>>> :varno 1
>>>>> :varattno 6
>>>>> :vartype 23
>>>>> :vartypmod -1
>>>>> :varlevelsup 0
>>>>> :varnoold 1
>>>>> :varoattno 6
>>>>> }
>>>>> }
>>>>>
>>>>> {TARGETENTRY
>>>>> :resdom
>>>>> {RESDOM
>>>>> :resno 7
>>>>> :restype 25
>>>>> :restypmod -1
>>>>> :resname lstat
>>>>> :ressortgroupref 0
>>>>> :resorigtbl 440806231
>>>>> :resorigcol 7
>>>>> :resjunk false
>>>>> }
>>>>>
>>>>> :expr
>>>>> {VAR
>>>>> :varno 1
>>>>> :varattno 7
>>>>> :vartype 25
>>>>> :vartypmod -1
>>>>> :varlevelsup 0
>>>>> :varnoold 1
>>>>> :varoattno 7
>>>>> }
>>>>> }
>>>>>
>>>>> {TARGETENTRY
>>>>> :resdom
>>>>> {RESDOM
>>>>> :resno 8
>>>>> :restype 25
>>>>> :restypmod -1
>>>>> :resname md5
>>>>> :ressortgroupref 0
>>>>> :resorigtbl 440806231
>>>>> :resorigcol 8
>>>>> :resjunk false
>>>>> }
>>>>>
>>>>> :expr
>>>>> {VAR
>>>>> :varno 1
>>>>> :varattno 8
>>>>> :vartype 25
>>>>> :vartypmod -1
>>>>> :varlevelsup 0
>>>>> :varnoold 1
>>>>> :varoattno 8
>>>>> }
>>>>> }
>>>>> )
>>>>>
>>>>> :qual (
>>>>> {OPEXPR
>>>>> :opno 96
>>>>> :opfuncid 65
>>>>> :opresulttype 16
>>>>> :opretset false
>>>>> :args (
>>>>> {VAR
>>>>> :varno 1
>>>>> :varattno 3
>>>>> :vartype 23
>>>>> :vartypmod -1
>>>>> :varlevelsup 0
>>>>> :varnoold 1
>>>>> :varoattno 3
>>>>> }
>>>>>
>>>>> {CONST
>>>>> :consttype 23
>>>>> :constlen 4
>>>>> :constbyval true
>>>>> :constisnull false
>>>>> :constvalue 4 [ -24 10 0 0 ]
>>>>> }
>>>>> )
>>>>> }
>>>>> )
>>>>>
>>>>> :lefttree <>
>>>>> :righttree <>
>>>>> :initPlan <>
>>>>> :extParam ()
>>>>>
>>>>> :allParam ()
>>>>>
>>>>> :nParamExec 0
>>>>> :scanrelid 1
>>>>> }
>>>>>
>>>>> Seq Scan on file (cost=0.00..707683.30 rows=207562 width=110) (actual
>>>>> time=103215.145..161153.664 rows=1 loops=1)
>>>>> Filter: (jobid = 2792)
>>>>> Total runtime: 161154.734 ms
>>>>>
>>>>>
>>>>> --
>>>>> Best regards,
>>>>>
>>>>> Marc
>>>>>
>>>>> --
>>>>> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
>>>>> To make changes to your subscription:
>>>>> http://www.postgresql.org/mailpref/pgsql-general
>>>>>
>>
>

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [GENERAL] Postgresql not using an index

Hi Pavel,

Pavel Stehule schreef:
> 2008/8/1 Marc Cuypers <m.cuypers@mgvd.be>:
>> Hi Pavel,
>>
>> Isn't the text for the Verbose Explain analyze not enough?
>> Is not, how can i generate it?
>>
>> --
>
>
> no, I am missing statistics info
>
> try
> EXPLAIN ANALYZE SELECT .....
>
> regards

I entered the command in psql and got the following output:

# explain analyze select * from file where jobid=2792;
QUERY PLAN
------------------------------------------------------------------------------------------------------------------
Seq Scan on file (cost=0.00..707683.30 rows=207562 width=110) (actual
time=37738.780..90453.299 rows=1 loops=1)
Filter: (jobid = 2792)
Total runtime: 90453.419 ms
(3 rows)

Does that make any sense?

--
Marc

>>
>> Marc
>>
>> Pavel Stehule schreef:
>>> Hello
>>>
>>> please, send EXPLAIN ANALYZE output.
>>>
>>> regards
>>> Pavel Stehule
>>>
>>> 2008/8/1 Marc Cuypers <m.cuypers@mgvd.be>:
>>>> Hi,
>>>>
>>>> I'm using postgres 7.4 and bacula 1.38 on debian.
>>>>
>>>> In the bacula database there is a table named 'file' which has about 2.5
>>>> million rows.
>>>> In this table there is a field 'jobid' which is indexed.
>>>> The index is created with the following command:
>>>> CREATE INDEX file_jobid_idx ON file USING btree (jobid);
>>>>
>>>> The query:
>>>> SELECT * from file where jobid=2792
>>>>
>>>> does a full scan and to my opinion it doesn't use the index.
>>>> I already did a VACUUM ANALYZE on the database.
>>>>
>>>>
>>>> Somebody an idea?
>>>>
>>>> EXPLAIN tells the following:
>>>> Seq Scan on file (cost=0.00..707683.30 rows=207562 width=110) (actual
>>>> time=103215.145..161153.664 rows=1 loops=1)
>>>> Filter: (jobid = 2792)
>>>> Total runtime: 161154.734 ms
>>>>
>>>> The Verbose Explain tells the following:
>>>> {SEQSCAN
>>>> :startup_cost 0.00
>>>> :total_cost 707683.30
>>>> :plan_rows 207562
>>>> :plan_width 110
>>>> :targetlist (
>>>> {TARGETENTRY
>>>> :resdom
>>>> {RESDOM
>>>> :resno 1
>>>> :restype 23
>>>> :restypmod -1
>>>> :resname fileid
>>>> :ressortgroupref 0
>>>> :resorigtbl 440806231
>>>> :resorigcol 1
>>>> :resjunk false
>>>> }
>>>>
>>>> :expr
>>>> {VAR
>>>> :varno 1
>>>> :varattno 1
>>>> :vartype 23
>>>> :vartypmod -1
>>>> :varlevelsup 0
>>>> :varnoold 1
>>>> :varoattno 1
>>>> }
>>>> }
>>>>
>>>> {TARGETENTRY
>>>> :resdom
>>>> {RESDOM
>>>> :resno 2
>>>> :restype 23
>>>> :restypmod -1
>>>> :resname fileindex
>>>> :ressortgroupref 0
>>>> :resorigtbl 440806231
>>>> :resorigcol 2
>>>> :resjunk false
>>>> }
>>>>
>>>> :expr
>>>> {VAR
>>>> :varno 1
>>>> :varattno 2
>>>> :vartype 23
>>>> :vartypmod -1
>>>> :varlevelsup 0
>>>> :varnoold 1
>>>> :varoattno 2
>>>> }
>>>> }
>>>>
>>>> {TARGETENTRY
>>>> :resdom
>>>> {RESDOM
>>>> :resno 3
>>>> :restype 23
>>>> :restypmod -1
>>>> :resname jobid
>>>> :ressortgroupref 0
>>>> :resorigtbl 440806231
>>>> :resorigcol 3
>>>> :resjunk false
>>>> }
>>>>
>>>> :expr
>>>> {VAR
>>>> :varno 1
>>>> :varattno 3
>>>> :vartype 23
>>>> :vartypmod -1
>>>> :varlevelsup 0
>>>> :varnoold 1
>>>> :varoattno 3
>>>> }
>>>> }
>>>>
>>>> {TARGETENTRY
>>>> :resdom
>>>> {RESDOM
>>>> :resno 4
>>>> :restype 23
>>>> :restypmod -1
>>>> :resname pathid
>>>> :ressortgroupref 0
>>>> :resorigtbl 440806231
>>>> :resorigcol 4
>>>> :resjunk false
>>>> }
>>>>
>>>> :expr
>>>> {VAR
>>>> :varno 1
>>>> :varattno 4
>>>> :vartype 23
>>>> :vartypmod -1
>>>> :varlevelsup 0
>>>> :varnoold 1
>>>> :varoattno 4
>>>> }
>>>> }
>>>>
>>>> {TARGETENTRY
>>>> :resdom
>>>> {RESDOM
>>>> :resno 5
>>>> :restype 23
>>>> :restypmod -1
>>>> :resname filenameid
>>>> :ressortgroupref 0
>>>> :resorigtbl 440806231
>>>> :resorigcol 5
>>>> :resjunk false
>>>> }
>>>>
>>>> :expr
>>>> {VAR
>>>> :varno 1
>>>> :varattno 5
>>>> :vartype 23
>>>> :vartypmod -1
>>>> :varlevelsup 0
>>>> :varnoold 1
>>>> :varoattno 5
>>>> }
>>>> }
>>>>
>>>> {TARGETENTRY
>>>> :resdom
>>>> {RESDOM
>>>> :resno 6
>>>> :restype 23
>>>> :restypmod -1
>>>> :resname markid
>>>> :ressortgroupref 0
>>>> :resorigtbl 440806231
>>>> :resorigcol 6
>>>> :resjunk false
>>>> }
>>>>
>>>> :expr
>>>> {VAR
>>>> :varno 1
>>>> :varattno 6
>>>> :vartype 23
>>>> :vartypmod -1
>>>> :varlevelsup 0
>>>> :varnoold 1
>>>> :varoattno 6
>>>> }
>>>> }
>>>>
>>>> {TARGETENTRY
>>>> :resdom
>>>> {RESDOM
>>>> :resno 7
>>>> :restype 25
>>>> :restypmod -1
>>>> :resname lstat
>>>> :ressortgroupref 0
>>>> :resorigtbl 440806231
>>>> :resorigcol 7
>>>> :resjunk false
>>>> }
>>>>
>>>> :expr
>>>> {VAR
>>>> :varno 1
>>>> :varattno 7
>>>> :vartype 25
>>>> :vartypmod -1
>>>> :varlevelsup 0
>>>> :varnoold 1
>>>> :varoattno 7
>>>> }
>>>> }
>>>>
>>>> {TARGETENTRY
>>>> :resdom
>>>> {RESDOM
>>>> :resno 8
>>>> :restype 25
>>>> :restypmod -1
>>>> :resname md5
>>>> :ressortgroupref 0
>>>> :resorigtbl 440806231
>>>> :resorigcol 8
>>>> :resjunk false
>>>> }
>>>>
>>>> :expr
>>>> {VAR
>>>> :varno 1
>>>> :varattno 8
>>>> :vartype 25
>>>> :vartypmod -1
>>>> :varlevelsup 0
>>>> :varnoold 1
>>>> :varoattno 8
>>>> }
>>>> }
>>>> )
>>>>
>>>> :qual (
>>>> {OPEXPR
>>>> :opno 96
>>>> :opfuncid 65
>>>> :opresulttype 16
>>>> :opretset false
>>>> :args (
>>>> {VAR
>>>> :varno 1
>>>> :varattno 3
>>>> :vartype 23
>>>> :vartypmod -1
>>>> :varlevelsup 0
>>>> :varnoold 1
>>>> :varoattno 3
>>>> }
>>>>
>>>> {CONST
>>>> :consttype 23
>>>> :constlen 4
>>>> :constbyval true
>>>> :constisnull false
>>>> :constvalue 4 [ -24 10 0 0 ]
>>>> }
>>>> )
>>>> }
>>>> )
>>>>
>>>> :lefttree <>
>>>> :righttree <>
>>>> :initPlan <>
>>>> :extParam ()
>>>>
>>>> :allParam ()
>>>>
>>>> :nParamExec 0
>>>> :scanrelid 1
>>>> }
>>>>
>>>> Seq Scan on file (cost=0.00..707683.30 rows=207562 width=110) (actual
>>>> time=103215.145..161153.664 rows=1 loops=1)
>>>> Filter: (jobid = 2792)
>>>> Total runtime: 161154.734 ms
>>>>
>>>>
>>>> --
>>>> Best regards,
>>>>
>>>> Marc
>>>>
>>>> --
>>>> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
>>>> To make changes to your subscription:
>>>> http://www.postgresql.org/mailpref/pgsql-general
>>>>
>

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

[COMMITTERS] pgsql: Rearrange the code in auth.c so that all functions for a single

Log Message:
-----------
Rearrange the code in auth.c so that all functions for a single authentication
method are grouped together in a reasonably similar way, keeping the "global
shared functions" together in their own section as well. This makes it a lot
easier to find your way around the code.

Modified Files:
--------------
pgsql/src/backend/libpq:
auth.c (r1.166 -> r1.167)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/backend/libpq/auth.c?r1=1.166&r2=1.167)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

Re: [PATCHES] Relation forks & FSM rewrite patches

Attached is a new version of the relation forks and FSM patches,
updated per Tom's comments (details below). I think the relation forks
patch is ready for commit, except that the docs on physical storage
need to be updated. Barring any objections, I will commit the relation
forks patch in a few days, and submit a patch for the docs.

The FSM patch has been updated so that it applies over the new relation
forks patch.

Tom Lane wrote:
> "Heikki Linnakangas" <heikki@enterprisedb.com> writes:
>> The one part that I'm not totally satisfied in the relation forks patch
> is the smgrcreate() function. The question is: which piece of
>> code decides which forks to create for a relation, and when to create
>> them? I settled on the concept that all forks that a relation will need
>> are created at once, in one smgrcreate() call. There's no function to
>> create additional forks later on.
>
> I think that's an extremely bad idea. You need look no further than
> Zdenek's hopes of in-place-upgrade to see a need for adding a fork
> to an existing relation; but even without that I can see possibly
> wanting to not create a fork right away. I think smgrcreate() should
> just create one fork as it's told (ie, same API you gave mdcreate).

Yeah, that's better.

> Other nits:
>
> relpath() is now in danger of buffer overrun: you need to increase
> the palloc() sizes.

Oops, fixed.

> Seems like a #define for the max number of digits
> in a ForkNumber wouldn't be out of place (though I sure hope it never
> gets past 1 ...).

Done. It makes it more obvious how the buffer length is calculated than
using plain numbers.

> Also, I strongly suggest *not* appending _0 to the name of the main fork's
> file. This'd break on-disk compatibility and people's expectations,
> for no particular reason.

Done.

> Don't like the name NUM_FORKS; it seems to suggest that's the actual
> number of forks in existence. MAX_NUM_FORKS would be better.

I don't like MAX_NUM_FORKS. I renamed it to MAX_FORKNUM (and changed its
semantics accordingly).

> I think that setting InvalidForkNumber to -1 is unportable: there is
> nothing compelling the compiler to represent enum ForkNumber as a signed
> type, so the places where you assume a ForkNumber variable can hold
> InvalidForkNumber all look like trouble. One solution is to include
> InvalidForkNumber in the enum:
>
> enum ForkNumber {
> InvalidForkNumber = -1,
> MAIN_FORKNUM = 0
> };
>
> This would be a bit annoying if someone wrote a switch statement listing
> different possible fork numbers, as the compiler would complain if
> there's no case for InvalidForkNumber; but I can't see a reason for code
> like that to be common.

I chose this approach. There's no switch constructs like that at the
moment, and I don't see any immediate need for one. In fact, at the
moment InvalidForkNumber is only used in bgwriter database fsync
requests, where the fork number field doesn't mean anything.

> BTW, don't put a comma at end of enum ForkNumber, I think some compilers
> will barf on that.

Done.

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com

Re: [GENERAL] eliminating records not in (select id ... so SLOW?

Ivan Sergio Borgonovo wrote:

> But what if I *really* had to execute that query?
> Any other magic I could play to speed it up?

A trick that is sometimes spectacularly efficient is to rewrite the
query to use an outer join instead of NOT IN.

Try:

DELETE FROM table1 WHERE id IN
(SELECT table1.id FROM table1 LEFT JOIN table2 ON table1.id=table2.id
WHERE table2.id IS NULL)
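
For completeness, the same anti-join can be written with NOT EXISTS (a sketch
using the same hypothetical table and column names; unlike NOT IN, it is not
affected by NULLs in table2.id):

DELETE FROM table1
WHERE NOT EXISTS
  (SELECT 1 FROM table2 WHERE table2.id = table1.id);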


Best regards,
--
Daniel
PostgreSQL-powered mail user agent and storage:
http://www.manitou-mail.org

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

[SQL] regexp_replace

Hi all. I'd like to know whether it's possible to reverse the
behaviour of regexp_replace, meaning:
if I do
SELECT regexp_replace ('foobarbaz', 'b..', 'X') I get 'fooXbaz' - it
replaces the substring that matches the given pattern with 'X'. How do I
achieve the opposite - replace the parts of the string that don't match
the pattern?

regards
mk
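
One possible reading of "the opposite" - keep the first match and replace the
non-matching parts with 'X' - can be done like this (a sketch; the
interpretation is mine):

SELECT 'X' || substring('foobarbaz' from 'b..') || 'X';        -- gives 'XbarX'
SELECT regexp_replace('foobarbaz', '^.*?(b..).*$', E'X\\1X');  -- same, in one call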

--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql

Re: [PERFORM] Database size Vs performance degradation

On Thu, 31 Jul 2008, Andrzej Zawadzki wrote:
> Maybe I'm wrong, but if this "bulk insert and delete" process is cyclical then
> you don't need vacuum full.
> The released tuples will be filled up again with fresh data the next day -
> after a regular vacuum.

Yes, a regular manual vacuum will prevent the table from growing more than
it needs to. However, a vacuum full is required to actually reduce the
size of the table from 7.5G to 2.7G if that hasn't been done on the
production system already.
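
For the record, a minimal sketch of that step (the table name is a
placeholder; since VACUUM FULL tends to leave indexes bloated, a REINDEX
afterwards is usually worthwhile):

VACUUM FULL VERBOSE big_table;
REINDEX TABLE big_table;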

Matthew

--
It's one of those irregular verbs - "I have an independent mind," "You are
an eccentric," "He is round the twist."
-- Bernard Woolly, Yes Prime Minister

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

[GENERAL] function definition and "entity"

I have to refactor a bunch of functions.
Some of them are called from inside other functions.
Most of them could be changed with CREATE OR REPLACE, so the
calling functions should still refer to the right, newer function.

But some had to be dropped and recreated because the change was to an
output parameter.

Is there a way to automatically check that all references are to the
correct, existing function?
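
One rough way to check, sketched below, is to search the stored source of
your PL/pgSQL functions for the names you dropped; 'old_function_name' is a
placeholder, and since this is only a textual match it can return false
positives:

SELECT p.proname
FROM pg_proc p
JOIN pg_language l ON l.oid = p.prolang
WHERE l.lanname = 'plpgsql'
  AND p.prosrc LIKE '%old_function_name%';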

--
Ivan Sergio Borgonovo
http://www.webthatworks.it


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [GENERAL] Postgresql not using an index

2008/8/1 Marc Cuypers <m.cuypers@mgvd.be>:
> Hi Pavel,
>
> Isn't the text for the Verbose Explain analyze not enough?
> Is not, how can i generate it?
>
> --


no, I am missing statistics info

try
EXPLAIN ANALYZE SELECT .....

regards
Pavel Stehule
> Best regards,
>
> Marc
>
> Pavel Stehule schreef:
>>
>> Hello
>>
>> please, send EXPLAIN ANALYZE output.
>>
>> regards
>> Pavel Stehule
>>
>> 2008/8/1 Marc Cuypers <m.cuypers@mgvd.be>:
>>>
>>> Hi,
>>>
>>> I'm using postgres 7.4 and bacula 1.38 on debian.
>>>
>>> In the bacula database there is a table named 'file' which has about 2.5
>>> million rows.
>>> In this table there is a field 'jobid' which is indexed.
>>> The index is created with the following command:
>>> CREATE INDEX file_jobid_idx ON file USING btree (jobid);
>>>
>>> The query:
>>> SELECT * from file where jobid=2792
>>>
>>> does a full scan and to my opinion it doesn't use the index.
>>> I already did a VACUUM ANALYZE on the database.
>>>
>>>
>>> Somebody an idea?
>>>
>>> EXPLAIN tells the following:
>>> Seq Scan on file (cost=0.00..707683.30 rows=207562 width=110) (actual
>>> time=103215.145..161153.664 rows=1 loops=1)
>>> Filter: (jobid = 2792)
>>> Total runtime: 161154.734 ms
>>>
>>> The Verbose Explain tells the following:
>>> {SEQSCAN
>>> :startup_cost 0.00
>>> :total_cost 707683.30
>>> :plan_rows 207562
>>> :plan_width 110
>>> :targetlist (
>>> {TARGETENTRY
>>> :resdom
>>> {RESDOM
>>> :resno 1
>>> :restype 23
>>> :restypmod -1
>>> :resname fileid
>>> :ressortgroupref 0
>>> :resorigtbl 440806231
>>> :resorigcol 1
>>> :resjunk false
>>> }
>>>
>>> :expr
>>> {VAR
>>> :varno 1
>>> :varattno 1
>>> :vartype 23
>>> :vartypmod -1
>>> :varlevelsup 0
>>> :varnoold 1
>>> :varoattno 1
>>> }
>>> }
>>>
>>> {TARGETENTRY
>>> :resdom
>>> {RESDOM
>>> :resno 2
>>> :restype 23
>>> :restypmod -1
>>> :resname fileindex
>>> :ressortgroupref 0
>>> :resorigtbl 440806231
>>> :resorigcol 2
>>> :resjunk false
>>> }
>>>
>>> :expr
>>> {VAR
>>> :varno 1
>>> :varattno 2
>>> :vartype 23
>>> :vartypmod -1
>>> :varlevelsup 0
>>> :varnoold 1
>>> :varoattno 2
>>> }
>>> }
>>>
>>> {TARGETENTRY
>>> :resdom
>>> {RESDOM
>>> :resno 3
>>> :restype 23
>>> :restypmod -1
>>> :resname jobid
>>> :ressortgroupref 0
>>> :resorigtbl 440806231
>>> :resorigcol 3
>>> :resjunk false
>>> }
>>>
>>> :expr
>>> {VAR
>>> :varno 1
>>> :varattno 3
>>> :vartype 23
>>> :vartypmod -1
>>> :varlevelsup 0
>>> :varnoold 1
>>> :varoattno 3
>>> }
>>> }
>>>
>>> {TARGETENTRY
>>> :resdom
>>> {RESDOM
>>> :resno 4
>>> :restype 23
>>> :restypmod -1
>>> :resname pathid
>>> :ressortgroupref 0
>>> :resorigtbl 440806231
>>> :resorigcol 4
>>> :resjunk false
>>> }
>>>
>>> :expr
>>> {VAR
>>> :varno 1
>>> :varattno 4
>>> :vartype 23
>>> :vartypmod -1
>>> :varlevelsup 0
>>> :varnoold 1
>>> :varoattno 4
>>> }
>>> }
>>>
>>> {TARGETENTRY
>>> :resdom
>>> {RESDOM
>>> :resno 5
>>> :restype 23
>>> :restypmod -1
>>> :resname filenameid
>>> :ressortgroupref 0
>>> :resorigtbl 440806231
>>> :resorigcol 5
>>> :resjunk false
>>> }
>>>
>>> :expr
>>> {VAR
>>> :varno 1
>>> :varattno 5
>>> :vartype 23
>>> :vartypmod -1
>>> :varlevelsup 0
>>> :varnoold 1
>>> :varoattno 5
>>> }
>>> }
>>>
>>> {TARGETENTRY
>>> :resdom
>>> {RESDOM
>>> :resno 6
>>> :restype 23
>>> :restypmod -1
>>> :resname markid
>>> :ressortgroupref 0
>>> :resorigtbl 440806231
>>> :resorigcol 6
>>> :resjunk false
>>> }
>>>
>>> :expr
>>> {VAR
>>> :varno 1
>>> :varattno 6
>>> :vartype 23
>>> :vartypmod -1
>>> :varlevelsup 0
>>> :varnoold 1
>>> :varoattno 6
>>> }
>>> }
>>>
>>> {TARGETENTRY
>>> :resdom
>>> {RESDOM
>>> :resno 7
>>> :restype 25
>>> :restypmod -1
>>> :resname lstat
>>> :ressortgroupref 0
>>> :resorigtbl 440806231
>>> :resorigcol 7
>>> :resjunk false
>>> }
>>>
>>> :expr
>>> {VAR
>>> :varno 1
>>> :varattno 7
>>> :vartype 25
>>> :vartypmod -1
>>> :varlevelsup 0
>>> :varnoold 1
>>> :varoattno 7
>>> }
>>> }
>>>
>>> {TARGETENTRY
>>> :resdom
>>> {RESDOM
>>> :resno 8
>>> :restype 25
>>> :restypmod -1
>>> :resname md5
>>> :ressortgroupref 0
>>> :resorigtbl 440806231
>>> :resorigcol 8
>>> :resjunk false
>>> }
>>>
>>> :expr
>>> {VAR
>>> :varno 1
>>> :varattno 8
>>> :vartype 25
>>> :vartypmod -1
>>> :varlevelsup 0
>>> :varnoold 1
>>> :varoattno 8
>>> }
>>> }
>>> )
>>>
>>> :qual (
>>> {OPEXPR
>>> :opno 96
>>> :opfuncid 65
>>> :opresulttype 16
>>> :opretset false
>>> :args (
>>> {VAR
>>> :varno 1
>>> :varattno 3
>>> :vartype 23
>>> :vartypmod -1
>>> :varlevelsup 0
>>> :varnoold 1
>>> :varoattno 3
>>> }
>>>
>>> {CONST
>>> :consttype 23
>>> :constlen 4
>>> :constbyval true
>>> :constisnull false
>>> :constvalue 4 [ -24 10 0 0 ]
>>> }
>>> )
>>> }
>>> )
>>>
>>> :lefttree <>
>>> :righttree <>
>>> :initPlan <>
>>> :extParam ()
>>>
>>> :allParam ()
>>>
>>> :nParamExec 0
>>> :scanrelid 1
>>> }
>>>
>>> Seq Scan on file (cost=0.00..707683.30 rows=207562 width=110) (actual
>>> time=103215.145..161153.664 rows=1 loops=1)
>>> Filter: (jobid = 2792)
>>> Total runtime: 161154.734 ms
>>>
>>>
>>> --
>>> Best regards,
>>>
>>> Marc
>>>
>>> --
>>> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
>>> To make changes to your subscription:
>>> http://www.postgresql.org/mailpref/pgsql-general
>>>
>

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [SQL] How to change a view's owner in postgres

On Fri, Aug 1, 2008 at 11:41 AM, Anoop G <anoopmadavoor@gmail.com> wrote:
> How to change a view's owner in postgres?

ALTER TABLE view_name OWNER TO new_owner;

--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql

[SQL] How to change a view's owner in postgres (is it possible?)



Dear all,

We can change the owner of a table like this:

alter table tbl_year_end owner to anoop;

Is it possible to change the owner of a view through SQL?
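
As the reply above shows, a view's owner is changed with the same ALTER TABLE
form; for example, with a placeholder view name:

ALTER TABLE your_view_name OWNER TO anoop;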



[COMMITTERS] pgbouncer - pgbouncer: change ver to 1.2.1

Log Message:
-----------
change ver to 1.2.1

Modified Files:
--------------
pgbouncer:
NEWS (r1.23 -> r1.24)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgbouncer/pgbouncer/NEWS.diff?r1=1.23&r2=1.24)
pgbouncer/debian:
changelog (r1.13 -> r1.14)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgbouncer/pgbouncer/debian/changelog.diff?r1=1.13&r2=1.14)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

[COMMITTERS] pgbouncer - pgbouncer: new drop_on_error param

Log Message:
-----------
new drop_on_error param

Modified Files:
--------------
pgbouncer/doc:
config.txt (r1.10 -> r1.11)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgbouncer/pgbouncer/doc/config.txt.diff?r1=1.10&r2=1.11)
todo.txt (r1.28 -> r1.29)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgbouncer/pgbouncer/doc/todo.txt.diff?r1=1.28&r2=1.29)
pgbouncer/include:
bouncer.h (r1.17 -> r1.18)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgbouncer/pgbouncer/include/bouncer.h.diff?r1=1.17&r2=1.18)
pgbouncer/src:
main.c (r1.43 -> r1.44)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgbouncer/pgbouncer/src/main.c.diff?r1=1.43&r2=1.44)
objects.c (r1.47 -> r1.48)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgbouncer/pgbouncer/src/objects.c.diff?r1=1.47&r2=1.48)
server.c (r1.28 -> r1.29)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgbouncer/pgbouncer/src/server.c.diff?r1=1.28&r2=1.29)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

Re: [GENERAL] Postgresql not using an index

Hi Pavel,

Isn't the text from the verbose EXPLAIN output enough?
If not, how can I generate it?

--
Best regards,

Marc

Pavel Stehule schreef:
> Hello
>
> please, send EXPLAIN ANALYZE output.
>
> regards
> Pavel Stehule
>
> 2008/8/1 Marc Cuypers <m.cuypers@mgvd.be>:
>> Hi,
>>
>> I'm using postgres 7.4 and bacula 1.38 on debian.
>>
>> In the bacula database there is a table named 'file' which has about 2.5
>> million rows.
>> In this table there is a field 'jobid' which is indexed.
>> The index is created with the following command:
>> CREATE INDEX file_jobid_idx ON file USING btree (jobid);
>>
>> The query:
>> SELECT * from file where jobid=2792
>>
>> does a full scan and to my opinion it doesn't use the index.
>> I already did a VACUUM ANALYZE on the database.
>>
>>
>> Somebody an idea?
>>
>> EXPLAIN tells the following:
>> Seq Scan on file (cost=0.00..707683.30 rows=207562 width=110) (actual
>> time=103215.145..161153.664 rows=1 loops=1)
>> Filter: (jobid = 2792)
>> Total runtime: 161154.734 ms
>>
>> The Verbose Explain tells the following:
>> {SEQSCAN
>> :startup_cost 0.00
>> :total_cost 707683.30
>> :plan_rows 207562
>> :plan_width 110
>> :targetlist (
>> {TARGETENTRY
>> :resdom
>> {RESDOM
>> :resno 1
>> :restype 23
>> :restypmod -1
>> :resname fileid
>> :ressortgroupref 0
>> :resorigtbl 440806231
>> :resorigcol 1
>> :resjunk false
>> }
>>
>> :expr
>> {VAR
>> :varno 1
>> :varattno 1
>> :vartype 23
>> :vartypmod -1
>> :varlevelsup 0
>> :varnoold 1
>> :varoattno 1
>> }
>> }
>>
>> {TARGETENTRY
>> :resdom
>> {RESDOM
>> :resno 2
>> :restype 23
>> :restypmod -1
>> :resname fileindex
>> :ressortgroupref 0
>> :resorigtbl 440806231
>> :resorigcol 2
>> :resjunk false
>> }
>>
>> :expr
>> {VAR
>> :varno 1
>> :varattno 2
>> :vartype 23
>> :vartypmod -1
>> :varlevelsup 0
>> :varnoold 1
>> :varoattno 2
>> }
>> }
>>
>> {TARGETENTRY
>> :resdom
>> {RESDOM
>> :resno 3
>> :restype 23
>> :restypmod -1
>> :resname jobid
>> :ressortgroupref 0
>> :resorigtbl 440806231
>> :resorigcol 3
>> :resjunk false
>> }
>>
>> :expr
>> {VAR
>> :varno 1
>> :varattno 3
>> :vartype 23
>> :vartypmod -1
>> :varlevelsup 0
>> :varnoold 1
>> :varoattno 3
>> }
>> }
>>
>> {TARGETENTRY
>> :resdom
>> {RESDOM
>> :resno 4
>> :restype 23
>> :restypmod -1
>> :resname pathid
>> :ressortgroupref 0
>> :resorigtbl 440806231
>> :resorigcol 4
>> :resjunk false
>> }
>>
>> :expr
>> {VAR
>> :varno 1
>> :varattno 4
>> :vartype 23
>> :vartypmod -1
>> :varlevelsup 0
>> :varnoold 1
>> :varoattno 4
>> }
>> }
>>
>> {TARGETENTRY
>> :resdom
>> {RESDOM
>> :resno 5
>> :restype 23
>> :restypmod -1
>> :resname filenameid
>> :ressortgroupref 0
>> :resorigtbl 440806231
>> :resorigcol 5
>> :resjunk false
>> }
>>
>> :expr
>> {VAR
>> :varno 1
>> :varattno 5
>> :vartype 23
>> :vartypmod -1
>> :varlevelsup 0
>> :varnoold 1
>> :varoattno 5
>> }
>> }
>>
>> {TARGETENTRY
>> :resdom
>> {RESDOM
>> :resno 6
>> :restype 23
>> :restypmod -1
>> :resname markid
>> :ressortgroupref 0
>> :resorigtbl 440806231
>> :resorigcol 6
>> :resjunk false
>> }
>>
>> :expr
>> {VAR
>> :varno 1
>> :varattno 6
>> :vartype 23
>> :vartypmod -1
>> :varlevelsup 0
>> :varnoold 1
>> :varoattno 6
>> }
>> }
>>
>> {TARGETENTRY
>> :resdom
>> {RESDOM
>> :resno 7
>> :restype 25
>> :restypmod -1
>> :resname lstat
>> :ressortgroupref 0
>> :resorigtbl 440806231
>> :resorigcol 7
>> :resjunk false
>> }
>>
>> :expr
>> {VAR
>> :varno 1
>> :varattno 7
>> :vartype 25
>> :vartypmod -1
>> :varlevelsup 0
>> :varnoold 1
>> :varoattno 7
>> }
>> }
>>
>> {TARGETENTRY
>> :resdom
>> {RESDOM
>> :resno 8
>> :restype 25
>> :restypmod -1
>> :resname md5
>> :ressortgroupref 0
>> :resorigtbl 440806231
>> :resorigcol 8
>> :resjunk false
>> }
>>
>> :expr
>> {VAR
>> :varno 1
>> :varattno 8
>> :vartype 25
>> :vartypmod -1
>> :varlevelsup 0
>> :varnoold 1
>> :varoattno 8
>> }
>> }
>> )
>>
>> :qual (
>> {OPEXPR
>> :opno 96
>> :opfuncid 65
>> :opresulttype 16
>> :opretset false
>> :args (
>> {VAR
>> :varno 1
>> :varattno 3
>> :vartype 23
>> :vartypmod -1
>> :varlevelsup 0
>> :varnoold 1
>> :varoattno 3
>> }
>>
>> {CONST
>> :consttype 23
>> :constlen 4
>> :constbyval true
>> :constisnull false
>> :constvalue 4 [ -24 10 0 0 ]
>> }
>> )
>> }
>> )
>>
>> :lefttree <>
>> :righttree <>
>> :initPlan <>
>> :extParam ()
>>
>> :allParam ()
>>
>> :nParamExec 0
>> :scanrelid 1
>> }
>>
>> Seq Scan on file (cost=0.00..707683.30 rows=207562 width=110) (actual
>> time=103215.145..161153.664 rows=1 loops=1)
>> Filter: (jobid = 2792)
>> Total runtime: 161154.734 ms
>>
>>
>> --
>> Best regards,
>>
>> Marc
>>
>> --
>> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
>> To make changes to your subscription:
>> http://www.postgresql.org/mailpref/pgsql-general
>>

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [pgadmin-hackers] Support for integrated tsearch configuration

Dave Page wrote:
> On Sat, Jun 28, 2008 at 1:54 PM, Guillaume Lelarge
> [...]
>> I was sure I was following the "collection nodes and objects nodes" way of
>> doing. Here are some ascii artwork:
>>
>> - Databases (3)
>> + DB1
>> + DB2
>> - DB3
>> + Catalogs (2)
>> + Casts (0)
>> + Languages (0)
>> + Schemas (1)
>> + Replication (0)
>> - Text Search
>> + Configurations (2)
>> + Dictionaries (2)
>> + Parsers (2)
>> + Templates (2)
>>
>> Seems you're right. The Text search node is neither a collection node nor an
>> object one. Unfortunately, I don't have a better one.
>
> Logically I think the design is correct - the only 'more correct' way
> would be to put the objects directly under a database, but that would
> be quite ugly I think.
>
> What you propose should work - it just might need a little thought.
> Maybe we need a 'pgGroupObject' node type. Something to ponder on in
> spare minutes rather than your 5 days I suspect :-)
>

I worked three days on it. The fourth was more about testing it on
different platforms (GTK, Windows, Mac). Now it's working. I'm not attaching
the patch here because it's really big, but here is a URL to get it
compressed:

http://developer.pgadmin.org/~guillaume/fts.patch.bz2

I was wrong on the layout of the text search objects inside the
treeview. They all belong to a schema, so they are part of a schema
node. You can see some screenshots on my blog:

* the treeview:
http://blog.guillaume.lelarge.info/public/pgadmin_fts/fts5.png

* text search configuration dialog:
http://blog.guillaume.lelarge.info/public/pgadmin_fts/fts3.png
(not an up-to-date screenshot; it lacks the Tokens tab)
http://blog.guillaume.lelarge.info/public/pgadmin_fts/fts9.png

* text search dictionary dialog:
http://blog.guillaume.lelarge.info/public/pgadmin_fts/fts7.png
http://blog.guillaume.lelarge.info/public/pgadmin_fts/fts8.png
(not an up-to-date screenshot; the Add and Change buttons have been merged
into a single one)

* text search parser dialog:
http://blog.guillaume.lelarge.info/public/pgadmin_fts/fts6.png

* text search template dialog:
http://blog.guillaume.lelarge.info/public/pgadmin_fts/fts4.png

They are all resizable, so the ones that use the wxListCtrl widget suffer
from the Mac issue.

Comments?


--
Guillaume.
http://www.postgresqlfr.org
http://dalibo.com

--
Sent via pgadmin-hackers mailing list (pgadmin-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgadmin-hackers

Re: [pgsql-es-ayuda] sobre savepoints

--------------------------------------------------
From: "Alvaro Herrera" <alvherre@alvh.no-ip.org>
Sent: Thursday, July 31, 2008 9:19 PM
To: "Linos" <info@linos.es>
Cc: "Lista PostgreSql" <pgsql-es-ayuda@postgresql.org>
Subject: Re: [pgsql-es-ayuda] sobre savepoints

> Linos wrote:
>> Also, inside the function, couldn't I perhaps run the SELECT first and,
>> depending on the result, issue an INSERT or an UPDATE? Shouldn't that be
>> faster than the savepoint? What I don't know how to solve in PL/pgSQL is
>> the variable number of columns, but I suppose there must be some way, right?
>
> The problem is that in order to run a SELECT to check whether you need an
> UPDATE or an INSERT, you would have to lock the table beforehand; otherwise
> it can happen that you run the SELECT, it says the row isn't there, and some
> other process inserts it just before you get around to inserting it yourself.
>
> If locking the table doesn't complicate things for you, then I think this
> procedure would be the fastest. (I say "I think" because the other
> alternative might turn out to be just as good: doing it with an "upsert"
> using a savepoint. There is an example procedure for this in the Postgres
> documentation. The nice thing about the upsert is that you only make one
> pass over the table when it works on the first try; whereas if you lock the
> table you have to do the SELECT first and then the INSERT or UPDATE, i.e.
> two passes in every case. On the other hand, it has the disadvantage of
> having to create and release a savepoint for every row.)
>
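
For reference, a minimal sketch of the "upsert" technique mentioned above,
modeled on the merge_db example in the PostgreSQL documentation (the table db
and its columns a and b are the documentation's illustrative names; the inner
BEGIN/EXCEPTION block is what creates a savepoint per row):

CREATE TABLE db (a INT PRIMARY KEY, b TEXT);

CREATE FUNCTION merge_db(key INT, data TEXT) RETURNS VOID AS $$
BEGIN
    LOOP
        -- first try to update the row
        UPDATE db SET b = data WHERE a = key;
        IF found THEN
            RETURN;
        END IF;
        -- not there, so try to insert it; if another session inserts the
        -- same key first, the unique_violation sends us around the loop
        -- to retry the UPDATE
        BEGIN
            INSERT INTO db (a, b) VALUES (key, data);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- do nothing, loop and try the UPDATE again
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;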


Alvaro ... I see UPSERT listed among the PostgreSQL TODO items .... or am I
mistaken?

Sincerely,

RAUL DUQUE
Bogotá, Colombia

> --
> Alvaro Herrera
> http://www.PlanetPostgreSQL.org/
> "After a quick R of TFM, all I can say is HOLY CR** THAT IS COOL!
> PostgreSQL was
> amazing when I first started using it at 7.2, and I'm continually
> astounded by
> learning new features and techniques made available by the continuing work
> of
> the development team."
> Berend Tober,
> http://archives.postgresql.org/pgsql-hackers/2007-08/msg01009.php
> --
> TIP 4: Don't 'kill -9' the postmaster
>
--
TIP 9: visit our IRC channel #postgresql-es on irc.freenode.net

Re: [GENERAL] Postgresql not using an index

Marc --

> Hi,
>
> I'm using postgres 7.4 and bacula 1.38 on debian.
>
> In the bacula database there is a table named 'file' which has about 2.5
> million rows.
> In this table there is a field 'jobid' which is indexed.
> The index is created with the following command:
>       CREATE INDEX file_jobid_idx  ON file  USING btree  (jobid);
>
> The query:
>       SELECT * from file where jobid=2792
>
> does a full scan and to my opinion it doesn't use the index.
> I already did a VACUUM ANALYZE on the database.
>
> Somebody an idea?
>
> EXPLAIN tells the following:
> Seq Scan on file  (cost=0.00..707683.30 rows=207562 width=110) (actual
> time=103215.145..161153.664 rows=1 loops=1)
>    Filter: (jobid = 2792)
> Total runtime: 161154.734 ms

Perhaps jobid is not an int -- maybe a bigint? In that case the types don't match and the index won't be used (newer versions handle this, but 7.4 won't, IIRC).

Perhaps we could see the table description from \d in the psql tool?
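
If jobid is indeed a bigint, a minimal sketch of the usual workaround on 7.4
is to make the literal's type match the column:

SELECT * FROM file WHERE jobid = 2792::bigint;   -- cast the literal explicitly
SELECT * FROM file WHERE jobid = '2792';         -- or quote it so it is resolved to the column's type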

HTH,

Greg Williamson
Senior DBA
DigitalGlobe

Confidentiality Notice: This e-mail message, including any attachments, is for the sole use of the intended recipient(s) and may contain confidential and privileged information and must be protected in accordance with those provisions. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply e-mail and destroy all copies of the original message.

(My corporate masters made me say this.)

Re: [pgadmin-hackers] Minimum height of server status dialog

Magnus Hagander wrote:
> Guillaume Lelarge wrote:
>>> The hotkey stuff I was working on already, but I'll just use your patch
>>> instead. It seems to be working fine in on my machine as well, so I'll
>>> go ahead and commit it. One thing first though...
>>>
>>> When I ran embed-xrc, I got a diff containing:
>>> Index: ui/xrcDialogs.cpp
>>> ===================================================================
>>> --- ui/xrcDialogs.cpp (revision 7394)
>>> +++ ui/xrcDialogs.cpp (working copy)
>>> @@ -13,6 +13,14 @@
>>> #include <wx/xrc/xmlres.h>
>>> #include <wx/xrc/xh_all.h>
>>>
>>> +#if wxCHECK_VERSION(2,8,5) && wxABI_VERSION >= 20805
>>> + #define XRC_ADD_FILE(name, data, size, mime) \
>>> + wxMemoryFSHandler::AddFileWithMimeType(name, data, size, mime)
>>> +#else
>>> + #define XRC_ADD_FILE(name, data, size, mime) \
>>> + wxMemoryFSHandler::AddFile(name, data, size)
>>> +

Re: [GENERAL] Postgresql not using an index

Hello

please, send EXPLAIN ANALYZE output.

regards
Pavel Stehule

2008/8/1 Marc Cuypers <m.cuypers@mgvd.be>:
> Hi,
>
> I'm using postgres 7.4 and bacula 1.38 on debian.
>
> In the bacula database there is a table named 'file' which has about 2.5
> million rows.
> In this table there is a field 'jobid' which is indexed.
> The index is created with the following command:
> CREATE INDEX file_jobid_idx ON file USING btree (jobid);
>
> The query:
> SELECT * from file where jobid=2792
>
> does a full scan and to my opinion it doesn't use the index.
> I already did a VACUUM ANALYZE on the database.
>
>
> Somebody an idea?
>
> EXPLAIN tells the following:
> Seq Scan on file (cost=0.00..707683.30 rows=207562 width=110) (actual
> time=103215.145..161153.664 rows=1 loops=1)
> Filter: (jobid = 2792)
> Total runtime: 161154.734 ms
>
> The Verbose Explain tells the following:
> {SEQSCAN
> :startup_cost 0.00
> :total_cost 707683.30
> :plan_rows 207562
> :plan_width 110
> :targetlist (
> {TARGETENTRY
> :resdom
> {RESDOM
> :resno 1
> :restype 23
> :restypmod -1
> :resname fileid
> :ressortgroupref 0
> :resorigtbl 440806231
> :resorigcol 1
> :resjunk false
> }
>
> :expr
> {VAR
> :varno 1
> :varattno 1
> :vartype 23
> :vartypmod -1
> :varlevelsup 0
> :varnoold 1
> :varoattno 1
> }
> }
>
> {TARGETENTRY
> :resdom
> {RESDOM
> :resno 2
> :restype 23
> :restypmod -1
> :resname fileindex
> :ressortgroupref 0
> :resorigtbl 440806231
> :resorigcol 2
> :resjunk false
> }
>
> :expr
> {VAR
> :varno 1
> :varattno 2
> :vartype 23
> :vartypmod -1
> :varlevelsup 0
> :varnoold 1
> :varoattno 2
> }
> }
>
> {TARGETENTRY
> :resdom
> {RESDOM
> :resno 3
> :restype 23
> :restypmod -1
> :resname jobid
> :ressortgroupref 0
> :resorigtbl 440806231
> :resorigcol 3
> :resjunk false
> }
>
> :expr
> {VAR
> :varno 1
> :varattno 3
> :vartype 23
> :vartypmod -1
> :varlevelsup 0
> :varnoold 1
> :varoattno 3
> }
> }
>
> {TARGETENTRY
> :resdom
> {RESDOM
> :resno 4
> :restype 23
> :restypmod -1
> :resname pathid
> :ressortgroupref 0
> :resorigtbl 440806231
> :resorigcol 4
> :resjunk false
> }
>
> :expr
> {VAR
> :varno 1
> :varattno 4
> :vartype 23
> :vartypmod -1
> :varlevelsup 0
> :varnoold 1
> :varoattno 4
> }
> }
>
> {TARGETENTRY
> :resdom
> {RESDOM
> :resno 5
> :restype 23
> :restypmod -1
> :resname filenameid
> :ressortgroupref 0
> :resorigtbl 440806231
> :resorigcol 5
> :resjunk false
> }
>
> :expr
> {VAR
> :varno 1
> :varattno 5
> :vartype 23
> :vartypmod -1
> :varlevelsup 0
> :varnoold 1
> :varoattno 5
> }
> }
>
> {TARGETENTRY
> :resdom
> {RESDOM
> :resno 6
> :restype 23
> :restypmod -1
> :resname markid
> :ressortgroupref 0
> :resorigtbl 440806231
> :resorigcol 6
> :resjunk false
> }
>
> :expr
> {VAR
> :varno 1
> :varattno 6
> :vartype 23
> :vartypmod -1
> :varlevelsup 0
> :varnoold 1
> :varoattno 6
> }
> }
>
> {TARGETENTRY
> :resdom
> {RESDOM
> :resno 7
> :restype 25
> :restypmod -1
> :resname lstat
> :ressortgroupref 0
> :resorigtbl 440806231
> :resorigcol 7
> :resjunk false
> }
>
> :expr
> {VAR
> :varno 1
> :varattno 7
> :vartype 25
> :vartypmod -1
> :varlevelsup 0
> :varnoold 1
> :varoattno 7
> }
> }
>
> {TARGETENTRY
> :resdom
> {RESDOM
> :resno 8
> :restype 25
> :restypmod -1
> :resname md5
> :ressortgroupref 0
> :resorigtbl 440806231
> :resorigcol 8
> :resjunk false
> }
>
> :expr
> {VAR
> :varno 1
> :varattno 8
> :vartype 25
> :vartypmod -1
> :varlevelsup 0
> :varnoold 1
> :varoattno 8
> }
> }
> )
>
> :qual (
> {OPEXPR
> :opno 96
> :opfuncid 65
> :opresulttype 16
> :opretset false
> :args (
> {VAR
> :varno 1
> :varattno 3
> :vartype 23
> :vartypmod -1
> :varlevelsup 0
> :varnoold 1
> :varoattno 3
> }
>
> {CONST
> :consttype 23
> :constlen 4
> :constbyval true
> :constisnull false
> :constvalue 4 [ -24 10 0 0 ]
> }
> )
> }
> )
>
> :lefttree <>
> :righttree <>
> :initPlan <>
> :extParam ()
>
> :allParam ()
>
> :nParamExec 0
> :scanrelid 1
> }
>
> Seq Scan on file (cost=0.00..707683.30 rows=207562 width=110) (actual
> time=103215.145..161153.664 rows=1 loops=1)
> Filter: (jobid = 2792)
> Total runtime: 161154.734 ms
>
>
> --
> Best regards,
>
> Marc
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [HACKERS] Plans for 8.4

Henry B. Hotz wrote:
>
> On Jul 31, 2008, at 7:58 AM, Magnus Hagander wrote:
>
>> Stephen Frost wrote:
>>> * Henry B. Hotz (hbhotz@oxy.edu) wrote:
>>>> I'm making no promises, but what would people think of a hostgss hba
>>>> option?
>>>
>>> As described, sounds like a win to me. It'd be very nice to be able to
>>> just use GSSAPI encryption on the link. That, combined with Magnus' work
>>> on username/princ mappings, would really bring PostgreSQL up to date with
>>> respect to GSSAPI support.
>>
>> Yeah, +1 on this feature, it would be quite useful.
>>
>>
>>> It'd really be great to have this support in the ODBC and JDBC drivers
>>> too.. I think in JDBC it might 'just work', I'm less sure about ODBC.
>>
>> ODBC will need hackery, I think. They use libpq for authentication only,
>> but have their own SSL code and such. I do think ODBC support would be
>> fairly important to this feature's success, though, so it'd be good if a
>> plan could be secured for it. But it's not a showstopper, of course.
>
> I don't know enough about ODBC. If ODBC does SSL independently of PG
> then it requires thought by someone who understands ODBC.

I just meant please consider coordinating with the ODBC folks to make
sure it gets in there as well - and in time for the same release.


>>> As a practical question -- would you really need a separate explicit
>>> pg_hba option for it? It'd be nice to be able to require it, if
>>> desired, but that strikes me as more sensible as an option to the 'gss'
>>> auth mechanism?
>>
>> Yeah, if we can get rid of that, that'd be good. The stuff I'm working
>> on will allow us to have multiple parameters for each row in name/value
>> pairs, so if we could use that, it'd be better. (I've been considering
>> changing how host/hostssl work that way as well - by having a parameter
>> similar to what we have on the client side with sslmode=...)
>>
>> A thought that I came across -- is it even possible to use GSSAPI
>> encryption *without* using GSSAPI authentication? If not, it really
>> seems like it should belong in the parameter part of the field, since
>> in that case it is also not possible to enable encryption *before*
>> authentication -- or is it?
>
> You're on the right track. My problem isn't the hba file parsing at all.
>
> My problem is the interaction between the buffering logic and the
> encrypted I/O routines. The technical issue is that to make a GSSAPI
> security layer independent of SSL you need to invent a whole new
> buffering layer. That's a lot of work, and it only buys you the ability
> to do both SSL and GSSAPI at the same time. That doesn't seem worth it.

Yeah, there seems to be no general point in that.

However, implementing a layer there might have other benefits, such as
being able to use other SSL implementations (right now we only do
OpenSSL; there has been talk about GnuTLS, and it would be good to be
able to do Schannel).

> The code being affected is what's currently configured in column 1 of
> hba. The ability to use the new capability requires that SSL *NOT* be
> configured in column 1 for the relevant client addresses. In short, no,
> it doesn't make sense to make it an option to the gss authentication
> method, even though it requires it. If we make it an option to the gss
> authentication method it would still need to act like it was specified
> in column 1, which would be confusing.

Does this hold even if we move the "hostssl" stuff into a parameter "at
the end"? I was thinking maybe something like:
host all all 0.0.0.0/0 md5 ssl=require
host all all 0.0.0.0/0 gss ssl=forbid gssencrypt=require

(you get the idea)


> GSSAPI security layers are negotiated after the authentication (or at
> least after the start of authentication). There are GSSAPI status flags
> that indicate if the security layer is available yet. The GSSAPI
> security layer code would check those flags and gss_wrap() or not
> accordingly. (-: There's a flush() or two from my original patch that
> will need to be added back in, otherwise we'll encrypt a message that
> tells the other end how to decrypt messages. Not a big deal. ;-)

Ok, makes sense.

//Magnus


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers