Sunday, September 21, 2008

Re: [HACKERS] [patch] fix dblink security hole

Joe Conway <mail@joeconway.com> writes:
> If the return value is NULL, use errmsg if you'd like. I'd guess in most
> instances you don't even need to bother freeing errmsg as it is in a
> limited life memory context.

Uh, you're confusing the backend environment with libpq's much more
spartan lifestyle. errmsg will be malloc'd and it will *not* go away
unless the caller free()s it.

regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] [patch] fix dblink security hole

Tom Lane wrote:
> Hmm ... one problem with this is that the caller can't tell
> failure-because-out-of-memory from failure-because-string-is-bogus.

<snip>

> Is it worth having the
> PQconninfoParse function pass back the error message to avoid this
> corner case?

I thought briefly about it, and wasn't sure it would be worth the ugliness.

> The API would be a lot more ugly, perhaps

> int PQconninfoParse(const char *connstr,
>                     PQconninfoOption **options,
>                     char **errmsg)
>
> okay:          *options is set, *errmsg is NULL, return true
> bogus string:  *options is NULL, *errmsg is set, return false
> out of memory: both outputs NULL, return false

conninfo_parse() returns NULL on error, so why not something like:

PQconninfoOption *
PQconninfoParse(const char *conninfo, char **errmsg)
{
	PQExpBufferData errorBuf;
	bool		password_from_string;
	PQconninfoOption *connOptions;

	initPQExpBuffer(&errorBuf);
	connOptions = conninfo_parse(conninfo, &errorBuf,
								 &password_from_string);

	if (!connOptions && errmsg)
		*errmsg = pstrdup(errorBuf.data);

	termPQExpBuffer(&errorBuf);
	return connOptions;
}

If the return value is NULL, use errmsg if you'd like. I'd guess in most
instances you don't even need to bother freeing errmsg as it is in a
limited life memory context.

Joe


Re: [HACKERS] [patch] fix dblink security hole

Joe Conway <mail@joeconway.com> writes:
> Tom Lane wrote:
>> So that seems to tilt the decision towards exposing the conninfo_parse
>> function. Joe, do you want to have a go at it, or shall I?

> Here's a first shot.

Hmm ... one problem with this is that the caller can't tell
failure-because-out-of-memory from failure-because-string-is-bogus.
That doesn't matter for your proposed dblink patch, but I had been
thinking of documenting that if someone wanted to get an error message
explaining just what was wrong with the conninfo string, they could
try to open a connection with it and use the resulting failure message.
But it's just barely conceivable that the PQconnect call *wouldn't* fail
because out-of-memory. (Not very likely in a sequential application,
but definitely seems possible in a multithreaded app --- some other
thread could release memory meanwhile.) Is it worth having the
PQconninfoParse function pass back the error message to avoid this
corner case? The API would be a lot more ugly, perhaps

int PQconninfoParse(const char *connstr,
                    PQconninfoOption **options,
                    char **errmsg)

okay:          *options is set, *errmsg is NULL, return true
bogus string:  *options is NULL, *errmsg is set, return false
out of memory: both outputs NULL, return false

regards, tom lane


Re: [HACKERS] [patch] fix dblink security hole

Index: contrib/dblink/dblink.c
===================================================================
RCS file: /opt/src/cvs/pgsql/contrib/dblink/dblink.c,v
retrieving revision 1.74
diff -c -r1.74 dblink.c
*** contrib/dblink/dblink.c 3 Jul 2008 03:56:57 -0000 1.74
--- contrib/dblink/dblink.c 22 Sep 2008 00:34:17 -0000
***************
*** 93,98 ****
--- 93,99 ----
static HeapTuple get_tuple_of_interest(Oid relid, int2vector *pkattnums, int16 pknumatts, char **src_pkattvals);
static Oid get_relid_from_relname(text *relname_text);
static char *generate_relation_name(Oid relid);
+ static void dblink_connstr_check(const char *connstr);
static void dblink_security_check(PGconn *conn, remoteConn *rconn);
static void dblink_res_error(const char *conname, PGresult *res, const char *dblink_context_msg, bool fail);

***************
*** 165,170 ****
--- 166,172 ----
else \
{ \
connstr = conname_or_str; \
+ dblink_connstr_check(connstr); \
conn = PQconnectdb(connstr); \
if (PQstatus(conn) == CONNECTION_BAD) \
{ \
***************
*** 229,234 ****
--- 231,239 ----

if (connname)
rconn = (remoteConn *) palloc(sizeof(remoteConn));
+
+ /* check password in connection string if not superuser */
+ dblink_connstr_check(connstr);
conn = PQconnectdb(connstr);

MemoryContextSwitchTo(oldcontext);
***************
*** 246,252 ****
errdetail("%s", msg)));
}

! /* check password used if not superuser */
dblink_security_check(conn, rconn);

if (connname)
--- 251,257 ----
errdetail("%s", msg)));
}

! /* check password actually used if not superuser */
dblink_security_check(conn, rconn);

if (connname)
***************
*** 2252,2257 ****
--- 2257,2293 ----
}

static void
+ dblink_connstr_check(const char *connstr)
+ {
+ if (!superuser())
+ {
+ PQconninfoOption *options;
+ PQconninfoOption *option;
+ bool conn_used_password = false;
+
+ options = PQconninfoParse(connstr);
+ for (option = options; option->keyword != NULL; option++)
+ {
+ if (strcmp(option->keyword, "password") == 0)
+ {
+ if (option->val != NULL && option->val[0] != '\0')
+ {
+ conn_used_password = true;
+ break;
+ }
+ }
+ }
+ PQconninfoFree(options);
+
+ if (!conn_used_password)
+ ereport(ERROR,
+ (errcode(ERRCODE_S_R_E_PROHIBITED_SQL_STATEMENT_ATTEMPTED),
+ errmsg("password is required"),
+ errdetail("Non-superuser must provide a password in the connection string.")));
+ }
+ }
+
+ static void
dblink_res_error(const char *conname, PGresult *res, const char *dblink_context_msg, bool fail)
{
int level;
Index: doc/src/sgml/libpq.sgml
===================================================================
RCS file: /opt/src/cvs/pgsql/doc/src/sgml/libpq.sgml,v
retrieving revision 1.263
diff -c -r1.263 libpq.sgml
*** doc/src/sgml/libpq.sgml 19 Sep 2008 20:06:13 -0000 1.263
--- doc/src/sgml/libpq.sgml 21 Sep 2008 23:08:27 -0000
***************
*** 625,630 ****
--- 625,658 ----
</varlistentry>

<varlistentry>
+ <term><function>PQconninfoParse</function><indexterm><primary>PQconninfoParse</></></term>
+ <listitem>
+ <para>
+ Returns parsed connection options from the provided connection string.
+
+ <synopsis>
+ PQconninfoOption *PQconninfoParse(const char *conninfo);
+ </synopsis>
+ </para>
+
+ <para>
+ Returns a connection options array. This can be used to determine
+ the <function>PQconnectdb</function> options in the provided
+ connection string. The return value points to an array of
+ <structname>PQconninfoOption</structname> structures, which ends
+ with an entry having a null <structfield>keyword</> pointer. A
+ null pointer is returned if memory could not be allocated.
+ </para>
+
+ <para>
+ After processing the options array, free it by passing it to
+ <function>PQconninfoFree</function>. If this is not done, a small amount of memory
+ is leaked for each call to <function>PQconninfoParse</function>.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
<term><function>PQfinish</function><indexterm><primary>PQfinish</></></term>
<listitem>
<para>
Index: src/interfaces/libpq/exports.txt
===================================================================
RCS file: /opt/src/cvs/pgsql/src/interfaces/libpq/exports.txt,v
retrieving revision 1.21
diff -c -r1.21 exports.txt
*** src/interfaces/libpq/exports.txt 19 Sep 2008 20:06:13 -0000 1.21
--- src/interfaces/libpq/exports.txt 21 Sep 2008 23:32:56 -0000
***************
*** 151,153 ****
--- 151,154 ----
PQresultInstanceData 149
PQresultSetInstanceData 150
PQfireResultCreateEvents 151
+ PQconninfoParse 152
Index: src/interfaces/libpq/fe-connect.c
===================================================================
RCS file: /opt/src/cvs/pgsql/src/interfaces/libpq/fe-connect.c,v
retrieving revision 1.360
diff -c -r1.360 fe-connect.c
*** src/interfaces/libpq/fe-connect.c 17 Sep 2008 04:31:08 -0000 1.360
--- src/interfaces/libpq/fe-connect.c 21 Sep 2008 22:56:40 -0000
***************
*** 3101,3106 ****
--- 3101,3127 ----
return 0;
}

+ /*
+ * PQconninfoParse
+ *
+ * Parse a string like PQconnectdb() would do and return the
+ * working connection options array.
+ *
+ * NOTE: the returned array is dynamically allocated and should
+ * be freed when no longer needed via PQconninfoFree().
+ */
+ PQconninfoOption *
+ PQconninfoParse(const char *conninfo)
+ {
+ PQExpBufferData errorBuf;
+ bool password_from_string;
+ PQconninfoOption *connOptions;
+
+ initPQExpBuffer(&errorBuf);
+ connOptions = conninfo_parse(conninfo, &errorBuf, &password_from_string);
+ termPQExpBuffer(&errorBuf);
+ return connOptions;
+ }

/*
* Conninfo parser routine
Index: src/interfaces/libpq/libpq-fe.h
===================================================================
RCS file: /opt/src/cvs/pgsql/src/interfaces/libpq/libpq-fe.h,v
retrieving revision 1.143
diff -c -r1.143 libpq-fe.h
*** src/interfaces/libpq/libpq-fe.h 17 Sep 2008 04:31:08 -0000 1.143
--- src/interfaces/libpq/libpq-fe.h 21 Sep 2008 22:57:37 -0000
***************
*** 243,248 ****
--- 243,251 ----
/* get info about connection options known to PQconnectdb */
extern PQconninfoOption *PQconndefaults(void);

+ /* parse connection options in same way as PQconnectdb */
+ extern PQconninfoOption *PQconninfoParse(const char *conninfo);
+
/* free the data structure returned by PQconndefaults() */
extern void PQconninfoFree(PQconninfoOption *connOptions);

Tom Lane wrote:
> "Marko Kreen" <markokr@gmail.com> writes:
>> On 9/21/08, Joe Conway <mail@joeconway.com> wrote:
>>> Why? pg_service does not appear to support wildcards, so what is the attack
>>> vector?
>
>> "service=foo host=custom"
>
> The proposal to require a password = foo entry in the conn string seems
> to resolve all of these, without taking away useful capability. I don't
> think that forbidding use of services altogether is a good thing.
>
> So that seems to tilt the decision towards exposing the conninfo_parse
> function. Joe, do you want to have a go at it, or shall I?

Here's a first shot.

Notes:
1. I have not removed PQconnectionUsedPassword and related. It
is still needed to prevent a non-superuser from logging in
as the superuser if the server does not require authentication.
In that case, any bogus password could be added to the connection
string and be subsequently ignored, if not for this check.
2. I assume this ought to be applied as two separate commits --
one for libpq, and one for dblink.
3. I can't easily verify that I got libpq.sgml perfect; I've gotten out
of sync with the required tool chain for the docs.

Comments?

Joe

Re: [pgsql-es-ayuda] KEYS IN POSTGRES

On Fri, Sep 19, 2008 at 9:26 AM, Franz Marin <frarimava@hotmail.com> wrote:
>
> I have a question about whether this is possible or not ... when I assign
> passwords to my database users, I always add some characters at the
> beginning and end of the password the user types, and the application does
> the same when it connects to the engine. But I'm trying to do the
> following ... I want to be able to change those characters I add at the
> beginning and end of the password periodically, so I'd like to know how,
> as the superuser in Postgres, I can change the characters that make up the
> password in the engine.
>
> example
>
> user X's password is 123456; in the engine it is SG123456SG
>
> I want to change it so that the user's password is still 123456 but in the
> engine it is ASD123456ASD
>
> but without my having to know all the users' passwords to do it .... I
> don't know whether this can be done via the superuser or some other way
>

It can't be done. You have to do it in your application... why do you want
to do that?

--
Sincerely,
Jaime Casanova
PostgreSQL support and training
Systems consulting and development
Guayaquil - Ecuador
Cel. +59387171157
--
TIP 7: don't forget to increase the "free space map" configuration

Re: [pgsql-es-ayuda] what's going on with 8.3.4??

2008/9/21 José Fermín Francisco Ferreras <josefermin54@hotmail.com>:
> Do any of you know what's going on with version 8.3.4, which came out on
> Sept. 19 but wasn't announced on the official page? They only put it in
> the downloads section??
>
> http://www.postgresql.org/ftp/binary/
>

The official announcement hasn't been made yet; I believe it will be made tomorrow.


--
Sincerely,
Jaime Casanova
PostgreSQL support and training
Systems consulting and development
Guayaquil - Ecuador
Cel. +59387171157
--
TIP 10: don't use HTML in your question; whoever answers surely won't be able to read it

Re: [PATCHES] [PgFoundry] Unsigned Data Types [1 of 2]

On Mon, Sep 15, 2008 at 9:45 PM, Ryan Bradetich <rbradetich@gmail.com> wrote:
>>
>> I have the code and regression tests updated to solve the problems you
>> initially discovered. After code reading, stepping through with the
>> debugger, and help from RhodiumToad on irc, I was able to implement new
>> restriction selectivity estimation functions for the uint4 vs int4 data
>> types. The uint1 vs int4 and uint2 vs int4 data types did not require a
>> custom restriction selectivity estimation function.
>>

I'm still seeing the failures in the copy commands (the ones about the paths).

I'm not really sure if this matters:

contrib_regression=# select 256::int2::int4;
int4
------
256
(1 row)

contrib_regression=# select 256::uint2::int4;
int4
------
256
(1 row)

contrib_regression=# select 256::int2::uint4;
ERROR: cannot cast type smallint to uint4 at character 17
STATEMENT: select 256::int2::uint4;
ERROR: cannot cast type smallint to uint4
LINE 1: select 256::int2::uint4;


otherwise seems fine

--
regards,
Jaime Casanova
PostgreSQL support and training
Systems consulting and development
Guayaquil - Ecuador
Cel. +59387171157

--
Sent via pgsql-patches mailing list (pgsql-patches@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-patches

Re: [HACKERS] Toasted table not deleted when no out of line columns left

Hans-Jürgen Schönig <hs@cybertec.at> writes:
>> ... And implementing it would require introducing weird
>> corner cases into the tuple toaster, because it might now come across
>> TOAST pointers that point to a no-longer-existent table, and have to
>> consider that to be a no-op instead of an error condition.

> we will compile a patch within the next days to cover this case.

I'm not sure which part of "no" you didn't understand, but: I do not
believe this is worth making the toast code less robust for.

regards, tom lane


Re: [pgsql-www] Who is updating the PostgreSQL home page?

On Sunday 21 September 2008 14:16:47 Josh Berkus wrote:
> WWW team,
>
> Has someone taken charge of updating the PostgreSQL home page for the
> upcoming update release?
>
> Also -- please do not approve the news item I'm about to submit until
> 9am EDT Monday.
>

If you're doing the release at 9AM Monday, we need the website updated before
that, which won't work for me, so I'll defer to Dave or Magnus. If things
aren't in place by 10AM, I'll be online to help out.

--
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL

--
Sent via pgsql-www mailing list (pgsql-www@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-www

Re: [JDBC] PreparedStatement.toString() creates valid SQL; was: Q: use setObject also for int or string

import java.sql.*;
import java.io.ByteArrayInputStream;

public class PrintQuery {

    public static void main(String args[]) throws Exception {

        Class.forName("org.postgresql.Driver");

        Connection conn = DriverManager.getConnection("jdbc:postgresql://localhost:5432/jurka", "jurka", "");
        PreparedStatement ps = conn.prepareStatement("SELECT ?");

        ps.setString(1, "NULL");
        System.out.println(ps.toString());

        ps.setString(1, "a\\b");
        System.out.println(ps.toString());

        byte data[] = new String("data").getBytes("US-ASCII");
        ByteArrayInputStream bais = new ByteArrayInputStream(data);
        ps.setBinaryStream(1, bais, data.length);
        System.out.println(ps.toString());
        System.out.println(ps.toString());

        conn.close();
    }
}

On Sun, 21 Sep 2008, Michael Enke wrote:

> I did this and it would be great if you could have a look into the
> attached patch files and make comments.
>
> It has to set standard_conforming_strings to false
> in any case to create valid sql:
> If I use setString(1, "a\\b");
> the real stored characters are 'a','\','b'.
> If I would use standard_conforming_strings=true
> the output would be "a\b"
> but with set to false the output is "a\\b" which is the correct one.

One of the backslashes is used for Java's escaping, so the real data does
only have one backslash which is the correct behavior for
standard_conforming_strings. Also note that E'' syntax is only supported
on 8.1 and later servers, so it can't be used unconditionally.

> I did not understand what you mean with InputStream.

See the attached test case.

Also note that the String "NULL" should be 'NULL', not plain NULL.

Kris Jurka

Re: [HACKERS] Assert Levels

Greg Smith <gsmith@gregsmith.com> writes:
> The next time I'm doing some performance testing I'll try to quantify how
> much damage the expensive ones do by playing with pg_config_manual.h.
> Normally I'm testing with 1GB+ of shared_buffers which makes the current
> assert scheme unusable.

There is a commit-time scan of the buffers for the sole purpose of
asserting a few things about their state. That's one of the things
we'd need to turn off for a "cheap asserts only" mode I think.

If you want to try to quantify what "cheap asserts" might do, you
should pull out the #ifdef USE_ASSERT_CHECKING code here:
check_list_invariants in list.c
the loops in AtEOXact_Buffers and AtEOXact_LocalBuffers
the loop in AtEOXact_CatCache
the test that makes AtEOXact_RelationCache scan the relcache
even when !need_eoxact_work
in addition to the memory-related stuff.

regards, tom lane


Re: [HACKERS] parallel pg_restore

Tom Lane wrote:
> Andrew Dunstan <andrew@dunslane.net> writes:
>
>> I am working on getting parallel pg_restore working. I'm currently
>> getting all the scaffolding working, and hope to have a naive prototype
>> posted within about a week.
>>
>
>
>> The major question is how to choose the restoration order so as to
>> maximize efficiency both on the server and in reading the archive.
>>
>
> One of the first software design principles I ever learned was to
> separate policy from mechanism. ISTM in this first cut you ought to
> concentrate on mechanism and let the policy just be something dumb
> (but coded separately from the infrastructure). We can refine it after
> that.
>


Indeed, that's exactly what I'm doing. However, given that time for the
8.4 window is short, I thought it would be sensible to get people
thinking about what the policy might be, while I get on with the mechanism.

>
>> Another question is what we should do if the user supplies an explicit
>> order with --use-list. I'm inclined to say we should stick strictly with
>> the supplied order. Or maybe that should be an option.
>>
>
> Hmm. I think --use-list is used more for selecting a subset of items
> to restore than for forcing a nondefault restore order. Forcing the
> order used to be a major purpose, but that was years ago before we
> had the dependency-driven-restore-order code working. So I'd vote that
> the default behavior is to still allow parallel restore when this option
> is used, and we should provide an orthogonal option that disables use of
> parallel restore.
>
> You'd really want the latter anyway for some cases, ie, when you don't
> want the restore trying to hog the machine. Maybe the right form for
> the extra option is just a limit on how many connections to use. Set it
> to one to force the exact restore order, and to other values to throttle
> how much of the machine the restore tries to eat.
>

My intention is to have single-thread restore remain the default, at
least for this go round, and have the user be able to choose
--multi-thread=nn to specify the number of concurrent connections to use.

> One problem here though is that you'd need to be sure you behave sanely
> when there is a dependency chain passing through an object that's not to
> be restored. The ordering of the rest of the chain still ought to honor
> the dependencies I think.
>
>
>

Right. I think we'd need to fake doing a full restore and omit actually
restoring items not on the passed in list. That should be simple enough.

cheers

andrew


Re: [HACKERS] Assert Levels

On Fri, 19 Sep 2008, Tom Lane wrote:

> Well, there are certain things that --enable-cassert turns on that are
> outrageously expensive...I don't think anyone knows what the performance
> impact of just the regular Asserts is; it's been too long since these
> other things were stuck in there.

The next time I'm doing some performance testing I'll try to quantify how
much damage the expensive ones do by playing with pg_config_manual.h.
Normally I'm testing with 1GB+ of shared_buffers which makes the current
assert scheme unusable. If I could run with asserts on and only lose a
small amount of performance just by disabling the clobber and context
checking, that would be valuable to know. Right now I waste a fair amount
of time running performance and assert builds in parallel.

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD


Re: [HACKERS] parallel pg_restore

Andrew Dunstan <andrew@dunslane.net> writes:
> I am working on getting parallel pg_restore working. I'm currently
> getting all the scaffolding working, and hope to have a naive prototype
> posted within about a week.

> The major question is how to choose the restoration order so as to
> maximize efficiency both on the server and in reading the archive.

One of the first software design principles I ever learned was to
separate policy from mechanism. ISTM in this first cut you ought to
concentrate on mechanism and let the policy just be something dumb
(but coded separately from the infrastructure). We can refine it after
that.

> Another question is what we should do if the user supplies an explicit
> order with --use-list. I'm inclined to say we should stick strictly with
> the supplied order. Or maybe that should be an option.

Hmm. I think --use-list is used more for selecting a subset of items
to restore than for forcing a nondefault restore order. Forcing the
order used to be a major purpose, but that was years ago before we
had the dependency-driven-restore-order code working. So I'd vote that
the default behavior is to still allow parallel restore when this option
is used, and we should provide an orthogonal option that disables use of
parallel restore.

You'd really want the latter anyway for some cases, ie, when you don't
want the restore trying to hog the machine. Maybe the right form for
the extra option is just a limit on how many connections to use. Set it
to one to force the exact restore order, and to other values to throttle
how much of the machine the restore tries to eat.

One problem here though is that you'd need to be sure you behave sanely
when there is a dependency chain passing through an object that's not to
be restored. The ordering of the rest of the chain still ought to honor
the dependencies I think.

regards, tom lane


[pgsql-es-ayuda] what's going on with 8.3.4??

Do any of you know what's going on with version 8.3.4, which came out on Sept. 19 but wasn't announced on the official page? They only put it in the downloads section??

http://www.postgresql.org/ftp/binary/



ing. José Fermín Francisco Ferreras
San Francisco de Macorís, Rep. Dom.

Re: [HACKERS] [patch] fix dblink security hole

Tom Lane wrote:
> "Marko Kreen" <markokr@gmail.com> writes:
>> On 9/21/08, Joe Conway <mail@joeconway.com> wrote:
>>> Why? pg_service does not appear to support wildcards, so what is the attack
>>> vector?
>
>> "service=foo host=custom"
>
> The proposal to require a password = foo entry in the conn string seems
> to resolve all of these, without taking away useful capability. I don't
> think that forbidding use of services altogether is a good thing.
>
> So that seems to tilt the decision towards exposing the conninfo_parse
> function. Joe, do you want to have a go at it, or shall I?

Agreed. I'll take a stab at it.

Joe


Re: [HACKERS] [patch] fix dblink security hole

"Marko Kreen" <markokr@gmail.com> writes:
> On 9/21/08, Joe Conway <mail@joeconway.com> wrote:
>> Why? pg_service does not appear to support wildcards, so what is the attack
>> vector?

> "service=foo host=custom"

The proposal to require a password = foo entry in the conn string seems
to resolve all of these, without taking away useful capability. I don't
think that forbidding use of services altogether is a good thing.

So that seems to tilt the decision towards exposing the conninfo_parse
function. Joe, do you want to have a go at it, or shall I?

regards, tom lane


Re: [HACKERS] Proposal: move column defaults into pg_attribute along with attacl

"Alex Hunsaker" <badalex@gmail.com> writes:
> Hrm, I thought if anything we wanted to put them in pg_constraints (at
> least inherited ones). Now maybe I have defaults confused with NOT
> NULLs... But don't we want to be able to give defaults names and and
> such?

No, I think you're thinking of NOT NULL. There's no reason to have
names for defaults that I can see.

regards, tom lane


Re: [HACKERS] [patch] fix dblink security hole

On 9/21/08, Joe Conway <mail@joeconway.com> wrote:
> Marko Kreen wrote:
> > You need to ignore pg_service also. (And PGPASSWORD)
>
> Why? pg_service does not appear to support wildcards, so what is the attack
> vector?

"service=foo host=custom"

> And on PGPASSWORD, the fine manual says the following:
>
> PGPASSWORD sets the password used if the server demands password
> authentication. Use of this environment variable is not recommended
> for security reasons (some operating systems allow non-root users to
> see process environment variables via ps); instead consider using the
> ~/.pgpass file (see Section 30.13).

That does not mean it's OK to handle it insecurely.

If you want to solve the immediate problem with a hack, then the cleanest
hack would be "no-external-sources-for-connection-details"-hack.

Leaving the less probable paths open is just a sloppy attitude.

> At the moment the only real issue I can see is .pgpass when wildcards are
> used for hostname:port:database.

Well, the real issue is that lusers are allowed to freely launch
connections, that's the source for all the other problems.

--
marko


Re: [HACKERS] Proposal: move column defaults into pg_attribute along with attacl

On Sun, Sep 21, 2008 at 11:09 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> A possible objection to this plan is that if the column-level privileges
> patch doesn't get in, then we're left with a useless column in
> pg_attribute. But an always-null column doesn't cost much of anything,
> and we know that sooner or later we will support per-column ACLs:
> they are clearly useful as well as required by spec. So any
> inefficiency would only be transient anyway.
>
> Thoughts, objections?

Hrm, I thought if anything we wanted to put them in pg_constraints (at
least inherited ones). Now maybe I have defaults confused with NOT
NULLs... But don't we want to be able to give defaults names and and
such?


Re: [HACKERS] [patch] fix dblink security hole

Joe Conway <mail@joeconway.com> writes:
> If we push the responsibility back to dblink, we might as well export
> conninfo_parse() or some wrapper thereof and let dblink simply check for
> a non-null password from the very beginning.

That's not totally unreasonable, since we already export the
PQconninfoOption struct ...

> Or perhaps we should modify conninfo_parse() to throw an error if it
> sees the same option more than once. Then dblink could prepend
> pgpassfile (or ignore_pgpass) to the beginning of the connstr and not
> have to worry about being overridden. Not sure if the backward
> compatibility hit is worth it though.

... but I think I like the second one better; multiple specifications of
an option seem like probably a programming error anyway. It's a close
call though. Exporting the parse code might enable other uses besides
this one.

In either case we'd still need a check after connection to verify that
the password actually got *used*, so I guess that
PQconnectionUsedPassword isn't dead, just incomplete.

regards, tom lane


Re: [HACKERS] Foreign key constraint for array-field?

> I strongly suspect you'd benefit a lot more by learning database best
> practices rather than assuming, as you appear to be doing, that you
> are dealing with a new field and that you know it best. Neither is true.
Of course, you're absolutely right. I venerate you! Ommmm! :-)


Re: [HACKERS] Foreign key constraint for array-field?

On Sun, 2008-09-21 at 15:07 -0400, Andrew Dunstan wrote:
>
> Simon Riggs wrote:
> > No, its not possible. Need a trigger.
> >
> > I think we should support it though. If we extend the relational model
> > with arrays then it would be sensible if we support this aspect as
> > well.
> >
> > Implementation would be fairly straightforward. ri_triggers currently
> > assumes a non-array value is being checked, but that could be changed to
> > IN(array). Multi-column keys with arrays sound confusing though.
> >
>
> What's the syntax going to look like?

The ALTER TABLE would have exactly the same syntax.

--
Simon Riggs www.2ndQuadrant.com
PostgreSQL Training, Services and Support



Re: [HACKERS] [patch] fix dblink security hole

Tom Lane wrote:
> BTW, a possible hole in this scheme would be if a user could supply a
> conninfo string that was intentionally malformed in a way that would
> cause a tacked-on pgpassfile option to be ignored by libpq. We might
> need to add some validity checks to dblink, or tighten libpq's own
> checks.

If we push the responsibility back to dblink, we might as well export
conninfo_parse() or some wrapper thereof and let dblink simply check for
a non-null password from the very beginning.

Or perhaps we should modify conninfo_parse() to throw an error if it
sees the same option more than once. Then dblink could prepend
pgpassfile (or ignore_pgpass) to the beginning of the connstr and not
have to worry about being overridden. Not sure if the backward
compatibility hit is worth it though.
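A duplicate-option check along these lines could be sketched as follows. This is a simplified standalone sketch: it assumes whitespace-separated key=value pairs and ignores libpq's quoting and escaping rules, and `count_conninfo_option` is a hypothetical helper, not part of libpq:

```c
#include <assert.h>
#include <ctype.h>
#include <string.h>

/*
 * Count how many times a keyword appears in a conninfo string, so a
 * caller could reject duplicates (or detect that a prepended option
 * was overridden). Simplified: assumes whitespace-separated key=value
 * pairs and ignores libpq's quoting rules.
 */
static int
count_conninfo_option(const char *conninfo, const char *keyword)
{
    int         count = 0;
    size_t      klen = strlen(keyword);
    const char *p = conninfo;

    while (*p)
    {
        /* skip whitespace between options */
        while (isspace((unsigned char) *p))
            p++;
        /* a match means the token starts with "keyword=" */
        if (strncmp(p, keyword, klen) == 0 && p[klen] == '=')
            count++;
        /* skip the rest of this token */
        while (*p && !isspace((unsigned char) *p))
            p++;
    }
    return count;
}
```

With something like this, dblink could prepend its pgpassfile option and then verify the keyword still appears exactly once, i.e. that the user's string did not also supply it.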

Joe


Re: [HACKERS] Predictable order of SQL commands in pg_dump

Great! Will it be implemented in a future version? It seems it would be
very helpful, especially for people who commit their database structure to
CVS/SVN once per minute to track change history (or similar)...


On Sun, Sep 21, 2008 at 11:57 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> "Dmitry Koterov" <dmitry@koterov.ru> writes:
>> CREATE TRIGGER t000_set_id
>> - BEFORE INSERT OR DELETE OR UPDATE ON a
>> + BEFORE INSERT OR DELETE OR UPDATE ON b
>> FOR EACH ROW
>> EXECUTE PROCEDURE i_trg();
>
>> CREATE TRIGGER t000_set_id
>> - BEFORE INSERT OR DELETE OR UPDATE ON b
>> + BEFORE INSERT OR DELETE OR UPDATE ON a
>> FOR EACH ROW
>> EXECUTE PROCEDURE i_trg();
>
>> You see, object names are the same, but ordering is mixed.
>
> Yeah, because the sort is just on object name.
>
> For objects of the same type I suppose we could take a look at their
> owning object's name too ...
>
> regards, tom lane
>


Re: [HACKERS] Toasted table not deleted when no out of line columns left

*snip*

>> Judging from that, the toasted table
>> cleanup may be part of ALTER TABLE DROP COLUMN.
>
> That would only help if you were dropping the last
> potentially-toastable column of a table. And implementing it would
> require introducing weird corner cases into the tuple toaster, because
> it might now come across TOAST pointers that point to a
> no-longer-existent table, and have to consider that to be a no-op
> instead of an error condition.
>
> regards, tom lane


Tom,

In our test case we had a table with 10 integer columns (nothing else)
along with a 10 GB toast table - this is why we were a little surprised.
In this case it can definitely be cleaned up.
It is clear that we definitely don't want to change columns directly
here when a column is dropped. However, if there is not a single
toastable column left, we should definitely clean up.
We will put together a patch within the next few days to cover this case.

Many thanks,

Hans

--
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt
Web: www.postgresql-support.de



Re: [HACKERS] Predictable order of SQL commands in pg_dump

"Dmitry Koterov" <dmitry@koterov.ru> writes:
> CREATE TRIGGER t000_set_id
> - BEFORE INSERT OR DELETE OR UPDATE ON a
> + BEFORE INSERT OR DELETE OR UPDATE ON b
> FOR EACH ROW
> EXECUTE PROCEDURE i_trg();

> CREATE TRIGGER t000_set_id
> - BEFORE INSERT OR DELETE OR UPDATE ON b
> + BEFORE INSERT OR DELETE OR UPDATE ON a
> FOR EACH ROW
> EXECUTE PROCEDURE i_trg();

> You see, object names are the same, but ordering is mixed.

Yeah, because the sort is just on object name.

For objects of the same type I suppose we could take a look at their
owning object's name too ...
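The tie-breaking described here might look roughly like this. This is an illustrative sketch only: the struct and comparator names are hypothetical, not pg_dump's actual DumpableObject representation:

```c
#include <assert.h>
#include <string.h>

/*
 * Hypothetical sort key: object type, then object name, then the
 * owning object's name as a tie-breaker. Illustrative only; this is
 * not pg_dump's actual internal representation.
 */
typedef struct DumpObj
{
    int         objtype;        /* dump sorts by object type first */
    const char *name;           /* object name */
    const char *owner_name;     /* owning table's name, or "" */
} DumpObj;

static int
dump_obj_cmp(const DumpObj *a, const DumpObj *b)
{
    int         c;

    if (a->objtype != b->objtype)
        return (a->objtype < b->objtype) ? -1 : 1;
    c = strcmp(a->name, b->name);
    if (c != 0)
        return c;
    /* same-named objects sort by their owning object's name */
    return strcmp(a->owner_name, b->owner_name);
}
```

The final strcmp is what would make two triggers named t000_set_id on tables a and b always dump in the same order.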

regards, tom lane


[COMMITTERS] pgsql: Simplify the definitions of a couple of system views by using

Log Message:
-----------
Simplify the definitions of a couple of system views by using SELECT *
instead of listing all the columns returned by the underlying function.

initdb not forced since this patch doesn't actually change anything about
the stored form of the views. It just means there's one less place to change
if someone wants to add columns to them.

Modified Files:
--------------
pgsql/src/backend/catalog:
system_views.sql (r1.54 -> r1.55)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/backend/catalog/system_views.sql?r1=1.54&r2=1.55)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

Re: [JDBC] PreparedStatement.toString() creates valid SQL; was: Q: use setObject also for int or string

*** SimpleQuery.java 2008-09-21 21:15:55.000000000 +0200
--- /home/me/JDBC/postgresql-jdbc-8.3-603.src/org/postgresql/core/v3/SimpleQuery.java.orig 2008-09-21 18:51:55.000000000 +0200
*************** class SimpleQuery implements V3Query {
*** 35,66 ****

public String toString(ParameterList parameters) {
StringBuffer sbuf = new StringBuffer(fragments[0]);
- int[] typeOIDs = (parameters == null) ? null : parameters.getTypeOIDs();
for (int i = 1; i < fragments.length; ++i)
{
if (parameters == null)
sbuf.append('?');
! else {
! String s = parameters.toString(i);
! if("NULL".equals(s) ||
! typeOIDs[i-1] == Oid.INT2 || typeOIDs[i-1] == Oid.INT4 || typeOIDs[i-1] == Oid.INT8 ||
! typeOIDs[i-1] == Oid.NUMERIC || typeOIDs[i-1] == Oid.FLOAT4 || typeOIDs[i-1] == Oid.FLOAT8 ||
! typeOIDs[i-1] == Oid.OID) {
! sbuf.append(s);
! } else {
! StringBuffer p = null;
! try {
! // last boolean false is: standardConformingStrings
! // if true: one backslash is output as one backslash
! // if false: one backslash is output as two backslashes
! p = Utils.appendEscapedLiteral(null, s, false);
! if(p.indexOf("\\") != -1) sbuf.append(" E");
! } catch(java.sql.SQLException sqle) {
! p = new StringBuffer(sqle.toString());
! }
! sbuf.append("'" + p + "'");
! }
! }
sbuf.append(fragments[i]);
}
return sbuf.toString();
--- 35,46 ----

public String toString(ParameterList parameters) {
StringBuffer sbuf = new StringBuffer(fragments[0]);
for (int i = 1; i < fragments.length; ++i)
{
if (parameters == null)
sbuf.append('?');
! else
! sbuf.append(parameters.toString(i));
sbuf.append(fragments[i]);
}
return sbuf.toString();
*** V2Query.java 2008-09-21 21:17:34.000000000 +0200
--- /home/me/JDBC/postgresql-jdbc-8.3-603.src/org/postgresql/core/v2/V2Query.java.orig 2008-09-21 18:52:19.000000000 +0200
*************** class V2Query implements Query {
*** 86,117 ****

public String toString(ParameterList parameters) {
StringBuffer sbuf = new StringBuffer(fragments[0]);
- int[] typeOIDs = (parameters == null) ? null : parameters.getTypeOIDs();
for (int i = 1; i < fragments.length; ++i)
{
if (parameters == null)
sbuf.append("?");
! else {
! String s = parameters.toString(i);
! if("NULL".equals(s) ||
! typeOIDs[i-1] == Oid.INT2 || typeOIDs[i-1] == Oid.INT4 || typeOIDs[i-1] == Oid.INT8 ||
! typeOIDs[i-1] == Oid.NUMERIC || typeOIDs[i-1] == Oid.FLOAT4 || typeOIDs[i-1] == Oid.FLOAT8 ||
! typeOIDs[i-1] == Oid.OID) {
! sbuf.append(s);
! } else {
! StringBuffer p = null;
! try {
! // last boolean false is: standardConformingStrings
! // if true: one backslash is output as one backslash
! // if false: one backslash is output as two backslashes
! p = Utils.appendEscapedLiteral(null, s, false);
! if(p.indexOf("\\") != -1) sbuf.append(" E");
! } catch(java.sql.SQLException sqle) {
! p = new StringBuffer(sqle.toString());
! }
! sbuf.append("'" + p + "'");
! }
! }
sbuf.append(fragments[i]);
}
return sbuf.toString();
--- 86,97 ----

public String toString(ParameterList parameters) {
StringBuffer sbuf = new StringBuffer(fragments[0]);
for (int i = 1; i < fragments.length; ++i)
{
if (parameters == null)
sbuf.append("?");
! else
! sbuf.append(parameters.toString(i));
sbuf.append(fragments[i]);
}
return sbuf.toString();

Hi Kris,

in my Java code I need to retrieve the SQL
from a PreparedStatement in order to create another statement from it.

I did this, and it would be great if you could have a look
at the attached patch files and comment.

It has to set standard_conforming_strings to false
in any case to create valid SQL:
If I use setString(1, "a\\b");
the actual stored characters are 'a','\','b'.
If I used standard_conforming_strings=true
the output would be "a\b",
but with it set to false the output is "a\\b", which is the correct one.

I did not understand what you mean by InputStream.

Michael

Kris Jurka schrieb:
> Michael Enke wrote:
>> The PreparedStatement.toString() returns the query,
>> replaced with available parameter, but string constants are not
>> enclosed in apostrophes:
>>
>> 74: select * from a where a='a'
>> 83: select * from a where a=a
>>
>> The 74 version put this into apostrophes if setString was used.
>> If setInt was used, no apostrophes were output.
>>
>
> We've never claimed that the output of PreparedStatement.toString would
> produce valid SQL. There are certainly other problems with it than just
> missing apostrophes. It currently doesn't escape values, so it breaks
> if your data contains embedded ' or \. It doesn't understand whether
> the server has standard_conforming_strings enabled or not. What would
> it do with a parameter that is an InputStream? If the toString code
> reads it, the driver cannot re-read it to send it to the server when
> executed.
>
> The InputStream is perhaps an unusual case, so I wouldn't be opposed if
> someone wanted to make the simple cases work better, but it's not
> something I'm particularly excited about.
>
> Kris Jurka


--
WINCOR NIXDORF International GmbH
Sitz der Gesellschaft: Paderborn
Registergericht Paderborn HRB 3507
Geschäftsführer: Eckard Heidloff (Vorsitzender), Stefan Auerbach, Dr. Jürgen Wunram
Vorsitzender des Aufsichtsrats: Karl-Heinz Stiller
Steuernummer: 339/5884/0020 - Ust-ID Nr.: DE812927716 - WEEE-Reg.-Nr. DE44477193

This e-mail may contain confidential information. If you are not the intended recipient (or have received this e-mail in error) please notify the sender immediately and destroy this e-mail. Any unauthorised copying, disclosure or distribution of the material in this e-mail is strictly forbidden.

[HACKERS] parallel pg_restore

I am working on getting parallel pg_restore working. I'm currently
getting all the scaffolding working, and hope to have a naive prototype
posted within about a week.

The major question is how to choose the restoration order so as to
maximize efficiency both on the server and in reading the archive. My
thoughts are currently running something like this:

* when an item is completed, reduce the dependency count for each
item that depends on it by 1.
* when an item has a dependency count of 0 it is available for
execution, and gets moved to the head of the queue.
* when a new worker spot becomes available, if there is not currently
a data load running then pick the first available data load,
otherwise pick the first available item.

This would mean that loading a table would probably be immediately
followed by creation of its indexes, including PK and UNIQUE
constraints, thus taking possible advantage of synchronised scans, data
in file system buffers, etc.
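The dependency-count bookkeeping above could be sketched like this. This is illustrative only; the names and fixed-size arrays are hypothetical, not pg_restore code:

```c
#include <assert.h>

#define MAX_ITEMS 64

/*
 * Illustrative bookkeeping for the scheme above: each archive item
 * tracks how many of its dependencies are still unfinished; finishing
 * an item decrements that count in every item that depends on it.
 */
typedef struct RestoreItem
{
    int ndeps;                  /* unfinished dependencies */
    int ndependents;            /* number of items depending on this one */
    int dependents[MAX_ITEMS];  /* indexes of those items */
} RestoreItem;

/*
 * Mark item 'done' as finished and return how many items just reached
 * a dependency count of zero, i.e. became available for execution.
 */
static int
finish_item(RestoreItem *items, int done)
{
    int newly_runnable = 0;
    int i;

    for (i = 0; i < items[done].ndependents; i++)
    {
        int d = items[done].dependents[i];

        if (--items[d].ndeps == 0)
            newly_runnable++;
    }
    return newly_runnable;
}
```

A worker loop would then move newly runnable items to the head of the queue, preferring data loads as described above.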

Another question is what we should do if the user supplies an explicit
order with --use-list. I'm inclined to say we should stick strictly with
the supplied order. Or maybe that should be an option.

Thoughts and comments welcome.

cheers

andrew



Re: [HACKERS] Predictable order of SQL commands in pg_dump

Unfortunately, I cannot reproduce this with 100% effect.

But, time to time I execute diff utility for a database and notice
that two or more trigger or constraint definitions (or something else)
are permuted. Something like this:


+ALTER TABLE ONLY a
+ ADD CONSTRAINT "fk_b_Id" FOREIGN KEY (b_id) REFERENCES b(id) MATCH FULL;

-ALTER TABLE ONLY a
- ADD CONSTRAINT fk_b_id FOREIGN KEY (b_id) REFERENCES b(id) MATCH FULL;

-ALTER TABLE ONLY a
+ALTER TABLE ONLY a
ADD CONSTRAINT fk_c_id FOREIGN KEY (c_id) REFERENCES c(id) MATCH FULL;


Or this:


CREATE TRIGGER t000_set_id
- BEFORE INSERT OR DELETE OR UPDATE ON a
+ BEFORE INSERT OR DELETE OR UPDATE ON b
FOR EACH ROW
EXECUTE PROCEDURE i_trg();

CREATE TRIGGER t000_set_id
- BEFORE INSERT OR DELETE OR UPDATE ON b
+ BEFORE INSERT OR DELETE OR UPDATE ON a
FOR EACH ROW
EXECUTE PROCEDURE i_trg();

You see, the object names are the same, but the ordering is mixed. It
seems pg_dump orders objects without regard to their dependencies? So, if
object names are the same, it dumps them in an unpredictable order,
regardless of their contents...

On Sun, Sep 21, 2008 at 5:28 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> "Dmitry Koterov" <dmitry@koterov.ru> writes:
>> The pg_dump utility does not always dump identical database schemas
>> identically: sometimes it changes the order of SQL statements.
>
> Please provide a concrete example. The dump order for modern servers
> (ie, since 7.3) is by object type, and within a type by object name,
> except where another order is forced by dependencies. And there is no
> random component to the dependency solver ;-). So it should be
> behaving the way you want.
>
> regards, tom lane
>


Re: [HACKERS] Foreign key constraint for array-field?

On Sun, Sep 21, 2008 at 10:49:56PM +0400, Dmitry Koterov wrote:
> Normalization is not a panacea here. Sometimes such normalization
> creates too much overhead and a lot of additional code (especially if
> there are a lot of such dependencies). Array support in Postgres is
> quite handy; in my practice, moving from a_b_map to arrays
> saves hundreds of lines of stored procedure and calling
> application code.

There are plenty of ways to "economize," as you put it. The burden is
on you to demonstrate that you are doing the right thing here because
standard database practice hammered out over decades is to normalize.

It's possible to make writeable VIEWs that accomplish what you appear
to want, but there's no reason to go further than that on the
PostgreSQL side. :)

> Triggers are not very helpful here, because it is tedious to
> ensure that all the relevant tables have appropriate triggers (we need N +
> 1 triggers with unique code, where N is the number of referring
> tables).
>
> So, built-in support looks much more interesting...

I strongly suspect you'd benefit a lot more by learning database best
practices rather than assuming, as you appear to be doing, that you
are dealing with a new field and that you know it best. Neither is
true.

Cheers,
David.
--
David Fetter <david@fetter.org> http://fetter.org/
Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter
Skype: davidfetter XMPP: david.fetter@gmail.com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate


Re: [HACKERS] Foreign key constraint for array-field?

Simon Riggs wrote:
> No, it's not possible. Need a trigger.
>
> I think we should support it though. If we extend the relational model
> with arrays then it would be sensible if we support this aspect as
> well.
>
> Implementation would be fairly straightforward. ri_triggers currently
> assumes a non-array value is being checked, but that could be changed to
> IN(array). Multi-column keys with arrays sound confusing though.
>
>

What's the syntax going to look like?

cheers

andrew


Re: [HACKERS] Foreign key constraint for array-field?

Normalization is not a panacea here. Sometimes such normalization
creates too much overhead and a lot of additional code (especially if
there are a lot of such dependencies). Array support in Postgres is
quite handy; in my practice, moving from a_b_map to arrays saves
hundreds of lines of stored procedure and calling application code.

Triggers are not very helpful here, because it is tedious to ensure
that all the relevant tables have appropriate triggers (we need N + 1
triggers with unique code, where N is the number of referring tables).

So, built-in support looks much more interesting...

On Sun, Sep 21, 2008 at 8:46 AM, Joshua D. Drake <jd@commandprompt.com> wrote:
> David Fetter wrote:
>>
>> On Sun, Sep 21, 2008 at 04:38:56AM +0400, Dmitry Koterov wrote:
>>>
>>> Hello.
>>>
>>> Is it possible to create a foreign key constraint for ALL elements of
>>> an array field?
>>
>> Whether it's possible or not--it probably is--it's a very bad idea.
>> Just normalize :)
>
> +1
>
>>
>> Cheers,
>> David.
>
>
> --
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers
>


Re: [HACKERS] pg_settings.sourcefile patch is a security breach

Tom Lane wrote:
> We go to some lengths to prevent non-superusers from examining
> data_directory and other values that would tell them exactly where the
> PG data directory is in the server's filesystem. The recently applied
> patch to expose full pathnames of GUC variables' source files blows a
> hole a mile wide in that.
>
> Possible answers: don't show the path, only the file name; or
> show sourcefile/sourceline as NULL to non-superusers.

My vote goes for showing it as NULL to non-superusers. If we remove the
path, that makes it pretty darn useless for admin tools - which was the
main reason it was added in the first place.

And "showing full path for superuser, just filename for non-superusers"
just seems to be way too ugly to consider :-)

//Magnus



Re: [pgsql-es-ayuda] Getting the 3 highest values per group from a set of records

On Sat, Sep 20, 2008 at 6:35 PM, Alejandro D. Burne
<alejandro.dburne@gmail.com> wrote:
> Does anyone have an idea how to build an SQL query (if it is even
> possible) to get the 3 highest values, grouping them by a given
> column

If you are still interested, try this:

select * from test3 as foo
where (c1, c2) in (select * from test3 where c1 = foo.c1 order by 1,
2 desc limit 3)
order by 1, 2 desc


--
Regards,
Jaime Casanova
PostgreSQL support and training
Systems consulting and development
Guayaquil - Ecuador
Cel. +59387171157
--
TIP 4: Don't 'kill -9' the postmaster

[HACKERS] pg_settings.sourcefile patch is a security breach

We go to some lengths to prevent non-superusers from examining
data_directory and other values that would tell them exactly where the
PG data directory is in the server's filesystem. The recently applied
patch to expose full pathnames of GUC variables' source files blows a
hole a mile wide in that.

Possible answers: don't show the path, only the file name; or
show sourcefile/sourceline as NULL to non-superusers.

regards, tom lane


Re: [pgsql-es-ayuda] Getting the 3 highest values per group from a set of records

On September 21, 2008 at 14:20, Oswaldo Hernández
<listas@soft-com.es> wrote:
> Raúl Andrés Duque Murillo wrote:
>>>
>>
>> Alejandro, the solution you propose works only as long as there are NO
>> duplicate "c2" values. To test it, fill the table with the following data:
>
> The query is written to do precisely that: show the 3 highest values,
> ignoring duplicates.
>
>
> Regards,
> --
> *****************************************
> Oswaldo Hernández
> oswaldo (@) soft-com (.) es
> *****************************************
> PS:
> Before printing this message, make sure it is necessary.
> The environment is in our hands.
>

First of all, thanks to both of you for the time you have spent. Oswaldo,
I do need to consider duplicate values if they exist; that is, I have to
get the 3 highest values whether or not they are repeated. I had not
considered using "offset", and it is a very "elegant" solution to my
needs; we will see how the query performs (whether the times are
acceptable).

Thanks again and regards
Alejandro
--
TIP 7: don't forget to increase the "free space map" configuration

Re: [HACKERS] Proposal: move column defaults into pg_attribute along with attacl

dpage@pgadmin.org writes:
> pgadmin has some umm, interesting queries over pg_depends. It sounds
> like this change could complicate those. I doubt it's an
> insurmountable problem of course.

Yeah. But the only real point of the change is cleanliness, and if it's
injecting ugliness into clients then that argument loses a lot of its
force. I looked at pg_dump a bit and it definitely would get uglier
--- not really more complicated, but defaults would get handled a bit
differently from everything else. Seems like a fertile source of bugs.
So maybe we'd better forget the idea :-(

regards, tom lane


[pgsql-www] Who is updating the PostgreSQL home page?

WWW team,

Has someone taken charge of updating the PostgreSQL home page for the
upcoming update release?

Also -- please do not approve the news item I'm about to submit until
9am EDT Monday.

--Josh

--
Sent via pgsql-www mailing list (pgsql-www@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-www

Re: [HACKERS] Proposal: move column defaults into pg_attribute along with attacl

pgadmin has some umm, interesting queries over pg_depends. It sounds
like this change could complicate those. I doubt it's an
insurmountable problem of course.

On 9/21/08, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> "Joshua D. Drake" <jd@commandprompt.com> writes:
>> Tom Lane wrote:
>>> A possible objection to this plan is that if the column-level privileges
>>> patch doesn't get in, then we're left with a useless column in
>>> pg_attribute. But an always-null column doesn't cost much of anything,
>>> and we know that sooner or later we will support per-column ACLs:
>>> they are clearly useful as well as required by spec. So any
>>> inefficiency would only be transient anyway.
>
>> Right. I don't see this objection holding much water as column privs are
>> something that many in the community would like to see. If Stephen's
>> patch doesn't get in, it is likely it would (or a derivative there of)
>> within the 8.5 release cycle. If anything it just provides a stepping
>> stone. I see nothing wrong with that.
>
> Yah. However, I started to look at doing this and immediately hit a
> stumbling block: we need a representation in pg_depend for a column's
> default expression (as distinct from the column itself). Currently
> this consists of classid = OID of pg_attrdef, objid = OID of the
> default's row in pg_attrdef; both of which would disappear if we
> get rid of pg_attrdef as an actual catalog.
>
> I can think of a way around that: represent a default expression using
> classid = OID of pg_attribute, objid = OID of table, objsubid = column
> attnum. This is distinct from the column itself, which is represented
> with classid = OID of pg_class. It seems pretty ugly and potentially
> confusing though. Also there would be a compatibility issue for clients
> that examine pg_depend. Is it ugly enough to scuttle the whole concept
> of merging pg_attrdef into pg_attribute?
>
> regards, tom lane
>
> --
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers
>


--
Dave Page
EnterpriseDB UK: http://www.enterprisedb.com


Re: [pgsql-www] sf.net updated

Devrim GÜNDÜZ wrote:
> On Sat, 2008-09-20 at 11:23 -0700, Joshua D. Drake wrote:
>> Thanks! What about Freshmeat?
>
> I don't have access to our freshmeat page.

I'll update Freshmeat tomorrow.

--Josh



Re: [HACKERS] Proposal: move column defaults into pg_attribute along with attacl

"Joshua D. Drake" <jd@commandprompt.com> writes:
> Tom Lane wrote:
>> A possible objection to this plan is that if the column-level privileges
>> patch doesn't get in, then we're left with a useless column in
>> pg_attribute. But an always-null column doesn't cost much of anything,
>> and we know that sooner or later we will support per-column ACLs:
>> they are clearly useful as well as required by spec. So any
>> inefficiency would only be transient anyway.

> Right. I don't see this objection holding much water as column privs are
> something that many in the community would like to see. If Stephen's
> patch doesn't get in, it is likely it would (or a derivative there of)
> within the 8.5 release cycle. If anything it just provides a stepping
> stone. I see nothing wrong with that.

Yah. However, I started to look at doing this and immediately hit a
stumbling block: we need a representation in pg_depend for a column's
default expression (as distinct from the column itself). Currently
this consists of classid = OID of pg_attrdef, objid = OID of the
default's row in pg_attrdef; both of which would disappear if we
get rid of pg_attrdef as an actual catalog.

I can think of a way around that: represent a default expression using
classid = OID of pg_attribute, objid = OID of table, objsubid = column
attnum. This is distinct from the column itself, which is represented
with classid = OID of pg_class. It seems pretty ugly and potentially
confusing though. Also there would be a compatibility issue for clients
that examine pg_depend. Is it ugly enough to scuttle the whole concept
of merging pg_attrdef into pg_attribute?

regards, tom lane


Re: [HACKERS] [patch] fix dblink security hole

Marko Kreen wrote:
> On 9/21/08, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> Joe Conway <mail@joeconway.com> writes:
>>> Good point -- I'll look into that and post something tomorrow. How does
>> > "requirepassword" sound for the option? It is consistent with
>> > "requiressl" but a bit long and hard to read. Maybe "require_password"?
>>
>>
>> Well, no, because it's not requiring a password.
>>
>> Perhaps "ignore_pgpass"?
>
> You need to ignore pg_service also. (And PGPASSWORD)

Why? pg_service does not appear to support wildcards, so what is the
attack vector?

And on PGPASSWORD, the fine manual says the following:

PGPASSWORD sets the password used if the server demands password
authentication. Use of this environment variable is not recommended
for security reasons (some operating systems allow non-root users to
see process environment variables via ps); instead consider using the
~/.pgpass file (see Section 30.13).

At the moment the only real issue I can see is .pgpass when wildcards
are used for hostname:port:database.

Joe


Re: [HACKERS] Proposal: move column defaults into pg_attribute along with attacl

Tom Lane wrote:

>
> A possible objection to this plan is that if the column-level privileges
> patch doesn't get in, then we're left with a useless column in
> pg_attribute. But an always-null column doesn't cost much of anything,
> and we know that sooner or later we will support per-column ACLs:
> they are clearly useful as well as required by spec. So any
> inefficiency would only be transient anyway.

Right. I don't see this objection holding much water as column privs are
something that many in the community would like to see. If Stephen's
patch doesn't get in, it is likely it would (or a derivative there of)
within the 8.5 release cycle. If anything it just provides a stepping
stone. I see nothing wrong with that.

>
> Thoughts, objections?
>

+1

Sincerely,

Joshua D. Drake



Re: [pgsql-es-ayuda] Getting the 3 highest values per group from a set of records

Raúl Andrés Duque Murillo wrote:
>>
>
> Alejandro, the solution you propose works only as long as there are NO
> duplicate "c2" values. To test it, fill the table with the following data:

The query is written to do precisely that: show the 3 highest values,
ignoring duplicates.


Regards,
--
*****************************************
Oswaldo Hernández
oswaldo (@) soft-com (.) es
*****************************************
PS:
Before printing this message, make sure it is necessary.
The environment is in our hands.
--
TIP 4: Don't 'kill -9' the postmaster

Re: [HACKERS] Assert Levels

Peter Eisentraut <peter_e@gmx.net> writes:
> Simon Riggs wrote:
>> Well, we don't. That's why I'd suggest to do it slowly and classify
>> everything as medium weight until proven otherwise.

> Once you have classified all asserts, what do we do with the result?
> What would be the practical impact? What would be your recommendation
> about who runs with what setting?

Being able to keep asserts on while doing performance stress testing
was the suggested use case. I think we'd still recommend having them
off in production.

FWIW, my gut feeling about it is that 99% of the asserts in the backend
are lightweight, ie, have no meaningful effect on performance. There
are a small number that are expensive (the tests on validity of List
structures come to mind, as well as what we already discussed). I don't
have any more evidence for this than Simon has for his "they're mostly
medium-weight" assumption, but I'd point out that by definition most of
the backend code isn't performance-critical. So I think that an option
to turn off a few particularly expensive asserts would be sufficient.
Moreover, the more asserts you turn off, the less useful it would be to
do testing of this type. I see essentially no value in a mode that
turns off the majority of assertions.

regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[HACKERS] Proposal: move column defaults into pg_attribute along with attacl

I had a thought while looking over the column-level privileges patch
that Stephen Frost is working on. To wit, that the only reason that
column default expressions are stored in a separate catalog pg_attrdef
is the historical assumption in some parts of the code that pg_attribute
rows are fixed-width and all-not-null. This assumption is destroyed
by adding an attacl column, and so the patch has already done the
legwork to get rid of the assumption. Given that, it would be a lot
cleaner and more efficient to get rid of pg_attrdef and store column
default expressions in a new, possibly-null column
pg_attribute.attdefault.

For backwards compatibility for clients, we could create a system view
replacing pg_attrdef, but the backend wouldn't use it any more. Also,
although the atthasdef column is redundant with checking for non-null
attdefault, we should keep it: not only for compatibility, but because
it would be accessible in the fixed-width subset of pg_attribute rows
that will be kept in tuple descriptors, and so it could save a syscache
lookup in some places.
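As a rough sketch, the compatibility view might look something like the following. This assumes the proposed nullable pg_attribute.attdefault column, which does not exist today; the real pg_attrdef also carries a decompiled adsrc column that is glossed over here:

```sql
-- Hypothetical compatibility view replacing the pg_attrdef catalog,
-- assuming column defaults now live in pg_attribute.attdefault.
CREATE VIEW pg_catalog.pg_attrdef AS
    SELECT a.attrelid  AS adrelid,
           a.attnum    AS adnum,
           a.attdefault AS adbin
    FROM pg_catalog.pg_attribute a
    WHERE a.attdefault IS NOT NULL;
```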

If that idea seems sane to people, what I would like to do is grab the
parts of the column-level privileges patch that relate to allowing
pg_attribute to contain null columns, and apply a patch that gets rid of
pg_attrdef and adds two columns attdefault and attacl to pg_attribute.
For the moment attacl would remain unused and always null. This would
eliminate a fair amount of the nuisance diffs that Stephen is currently
carrying and allow him to focus on the mechanics of doing something
useful with per-column ACLs. Adding both columns at the same time
should eliminate most of the merge problem he'd otherwise have from
needing to touch pg_attribute.h and pg_class.h again.

A possible objection to this plan is that if the column-level privileges
patch doesn't get in, then we're left with a useless column in
pg_attribute. But an always-null column doesn't cost much of anything,
and we know that sooner or later we will support per-column ACLs:
they are clearly useful as well as required by spec. So any
inefficiency would only be transient anyway.

Thoughts, objections?

regards, tom lane



Re: [HACKERS] Assert Levels

Simon Riggs wrote:
> Well, we don't. That's why I'd suggest to do it slowly and classify
> everything as medium weight until proven otherwise.

Once you have classified all asserts, what do we do with the result?
What would be the practical impact? What would be your recommendation
about who runs with what setting?


Re: [HACKERS] Toasted table not deleted when no out of line columns left

Zoltan Boszormenyi <zb@cybertec.at> writes:
> we came across a database where a table had a TOAST table
> keeping huge amounts of disk space allocated. However,
> the table's current definition didn't explain why there was
> a TOAST table. Then, after some experiments, it struck me:
> there _was_ a toasted field, but as the schema was modified
> the field was dropped, leaving only inline-stored fields.
> VACUUM [FULL] [ANALYZE] didn't clean up the space
> that was used by the TOAST table. My tests were done on 8.3.3.

This is not a bug; it is operating as designed. Observe the statement
in the NOTES section of the ALTER TABLE page:

The DROP COLUMN form does not physically remove the column, but
simply makes it invisible to SQL operations. Subsequent insert and
update operations in the table will store a null value for the
column. Thus, dropping a column is quick but it will not immediately
reduce the on-disk size of your table, as the space occupied by the
dropped column is not reclaimed. The space will be reclaimed over
time as existing rows are updated.

... and it goes on to point out how to force immediate space reclamation
if you need that. These statements apply independently of whether any
particular value is toasted or not.
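Following those notes, a minimal sketch of forcing immediate reclamation looks like this (the table and column names are illustrative, not from the report):

```sql
-- Illustrative names. The dummy UPDATE rewrites every row, which
-- releases the dropped column's TOAST data; VACUUM FULL then
-- returns the freed space to the operating system.
ALTER TABLE t DROP COLUMN big_col;
UPDATE t SET keep_col = keep_col;
VACUUM FULL t;
```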

The reason for this choice is that reclaiming the space immediately
would turn DROP COLUMN from a quick operation into a slow one, as it
would have to grovel over every row of the table looking for TOAST
pointers.

> Judging from that, the TOAST table
> cleanup could be made part of ALTER TABLE DROP COLUMN.

That would only help if you were dropping the last potentially-toastable
column of a table. And implementing it would require introducing weird
corner cases into the tuple toaster, because it might now come across
TOAST pointers that point to a no-longer-existent table, and have to
consider that to be a no-op instead of an error condition.

regards, tom lane
