Tuesday, July 15, 2008

Re: [HACKERS] rfc: add pg_dump options to dump output

Is this a TODO item?

---------------------------------------------------------------------------

Greg Smith wrote:
> On Tue, 3 Jun 2008, Tom Lane wrote:
>
> > Well, the stuff included into the dump by pg_dump -v is informative,
> > too. But we stopped doing that by default because of complaints.
> > I remain unconvinced that this proposal won't suffer the same fate.
>
> I think it would be reasonable to only include the list of options used in
> the dump if you use one that changes what appears in the dump. That way,
> you wouldn't see anything by default. But if you make a modification that
> will likely break a diff with an existing dump done with the default
> parameters, the option change that introduced that should show at the very
> beginning.
>
> --
> * Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD
>
> --
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers

--
Bruce Momjian <bruce@momjian.us>

http://momjian.us

EnterpriseDB

http://enterprisedb.com

+ If your life is a hard drive, Christ can be your backup. +

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [SQL] How to GROUP results BY month

On Tue, 2008-07-15 at 14:31 +0200, A. Kretschmer wrote:
> On Tue, 15.07.2008, at 13:12:39 +0100, Oliveiros Cristina wrote the following:
> > Howdy, all,
> >
> > I have a problem.
> >
> > I have a table in which one of the fields is of type date.
> >
> > I need to obtain the totals of the other fields on a by-month basis.
> > Is there any easy way to do this using GROUP BY or any other construct?
>
> ... group by extract(month from date)
>
>
> Andreas

It's worth noting that extract(month from timestamp) returns just the month number, and thus will not be suitable for grouping queries that span years.

I recommend group by date_trunc('month', <<timestamp field>>)
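
A minimal sketch of that approach, assuming a hypothetical table "sales" with a date column "sale_date" and a numeric column "amount":

select date_trunc('month', sale_date) as month, sum(amount) as total
from sales
group by date_trunc('month', sale_date)
order by 1;

Because date_trunc returns a full timestamp truncated to the first of the month, January 2007 and January 2008 stay in separate groups.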

-Mark


--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql

Re: [GENERAL] how to found a variable is in a aggregation or not?

yes!!!!

It's better obviously,

thanks:D

Yi
On Tue, 2008-07-15 at 15:46 +0200, Pavel Stehule wrote:
> so this code is a little bit ugly
>
> you can write faster code
>
> create or replace function anytest(val text)
> returns boolean as $$
> begin
> return val in ('hello', 'world','test');
> end;
> $$ language plpgsql immutable strict;
>
> Pavel
>
> 2008/7/15 Yi Zhao <yi.zhao@alibaba-inc.com>:
> > I want to check whether a variable is in an aggregation or not, so I created a
> > function as below:
> >
> > create or replace function anytest(val text) returns boolean as $$
> > begin
> > perform 1 where quote_literal(val) in ('hello', 'world', 'test');
> > if not found then
> > return false;
> > else
> > return true;
> > end if;
> > end;
> > $$ language plpgsql;
> >
> > but when I used it, I got the result below, why?
> >
> > test=# select anytest('world111');
> > anytest
> > ---------
> > f
> > (1 row)
> >
> > test=# select anytest('world');
> > anytest
> > ---------
> > f
> > (1 row)
> >
> >
> > any help is appreciated.
> >
> > regards,
> > Zy
> >
> >
> > --
> > Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> > To make changes to your subscription:
> > http://www.postgresql.org/mailpref/pgsql-general
> >
>
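
For the archives: the original version fails because quote_literal() wraps the value in single quotes before the comparison, so it can never match the bare constants; Pavel's version compares val directly. A quick illustration:

select quote_literal('world');                                 -- 'world'  (the quotes are part of the value)
select quote_literal('world') in ('hello', 'world', 'test');   -- false
select 'world' in ('hello', 'world', 'test');                  -- true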


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [pgus-board] [Fwd: US PostreSQL]

On Mon, 2008-07-14 at 10:57 -0400, Michael Alan Brewer wrote:
> I've modified the two .DOC files to include the revised terms for the
> initial board members (standing for re-election in '09, taking office
> in '10 [instead of '10/'11, respectively]) and the correct membership
> fees (and to correct one typo of a misspelled two-letter word and
> adding "of" before "financial institution [???] Oregon"). I did not
> define "student" in the document, figuring that we can do that via
> separate motion (and not necessarily be encoded specifically in the
> bylaws at this point). Let me know if I missed anything.


Selena, are you a go on these?

>
> ---Michael Brewer
> mbrewer@gmail.com
--
The PostgreSQL Company since 1997: http://www.commandprompt.com/

PostgreSQL Community Conference: http://www.postgresqlconference.org/
United States PostgreSQL Association: http://www.postgresql.us/
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate


--
Sent via pgus-board mailing list (pgus-board@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgus-board

Re: [pgus-board] [Fwd: US PostreSQL]

On Tue, 2008-07-15 at 00:43 -0400, Michael Alan Brewer wrote:
> On Mon, Jul 14, 2008 at 8:57 PM, Joshua D. Drake <jd@commandprompt.com> wrote:
> >
> > I think they look great and I approve, +1, now request that we get
> > consent unless someone has objections. I would like everyone to print,
> > sign (at notary) and return!
>
> Do we need to sign the same one, or can we submit different ones?

Hmm good question.

>
> ---Michael Brewer
> mbrewer@gmail.com
>
--
The PostgreSQL Company since 1997: http://www.commandprompt.com/

PostgreSQL Community Conference: http://www.postgresqlconference.org/
United States PostgreSQL Association: http://www.postgresql.us/
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate


--
Sent via pgus-board mailing list (pgus-board@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgus-board

Re: [PATCHES] array_fill function

Index: doc/src/sgml/func.sgml
===================================================================
RCS file: /cvsroot/pgsql/doc/src/sgml/func.sgml,v
retrieving revision 1.440
diff -c -c -r1.440 func.sgml
*** doc/src/sgml/func.sgml 15 Jul 2008 18:24:59 -0000 1.440
--- doc/src/sgml/func.sgml 16 Jul 2008 00:42:25 -0000
***************
*** 9374,9379 ****
--- 9374,9392 ----
<row>
<entry>
<literal>
+ <function>array_fill</function>(<type>anyelement</type>, <type>anyarray</type>,
+ <optional>, <type>anyarray</type></optional>)
+ </literal>
+ </entry>
+ <entry><type>anyarray</type></entry>
+ <entry>returns an array initialized with supplied value,
+ dimensions, and lower bounds</entry>
+ <entry><literal>array_fill(7, ARRAY[3], ARRAY[2])</literal></entry>
+ <entry><literal>[2:4]={7,7,7}</literal></entry>
+ </row>
+ <row>
+ <entry>
+ <literal>
<function>array_lower</function>(<type>anyarray</type>, <type>int</type>)
</literal>
</entry>
Index: src/backend/utils/adt/arrayfuncs.c
===================================================================
RCS file: /cvsroot/pgsql/src/backend/utils/adt/arrayfuncs.c,v
retrieving revision 1.145
diff -c -c -r1.145 arrayfuncs.c
*** src/backend/utils/adt/arrayfuncs.c 12 May 2008 00:00:51 -0000 1.145
--- src/backend/utils/adt/arrayfuncs.c 16 Jul 2008 00:42:26 -0000
***************
*** 95,100 ****
--- 95,105 ----
int *st, int *endp,
int typlen, bool typbyval, char typalign);
static int array_cmp(FunctionCallInfo fcinfo);
+ static ArrayType *create_array_envelope(int ndims, int *dimv, int *lbv, int nbytes,
+ Oid elmtype, int dataoffset);
+ static ArrayType *array_fill_internal(ArrayType *dims, ArrayType *lbs, Datum value,
+ Oid elmtype, bool isnull,
+ FunctionCallInfo fcinfo);


/*
***************
*** 4314,4316 ****
--- 4319,4590 ----
/* just call the other one -- it can handle both cases */
return generate_subscripts(fcinfo);
}
+
+ /*
+ * array_fill_with_lower_bounds
+ * Create and fill array with defined lower bounds.
+ */
+ Datum
+ array_fill_with_lower_bounds(PG_FUNCTION_ARGS)
+ {
+ ArrayType *dims;
+ ArrayType *lbs;
+ ArrayType *result;
+ Oid elmtype;
+ Datum value;
+ bool isnull;
+
+ if (PG_ARGISNULL(1) || PG_ARGISNULL(2))
+ ereport(ERROR,
+ (errcode(ERRCODE_NULL_VALUE_NOT_ALLOWED),
+ errmsg("dimension array or low bound array cannot be NULL")));
+
+ dims = PG_GETARG_ARRAYTYPE_P(1);
+ lbs = PG_GETARG_ARRAYTYPE_P(2);
+
+ if (!PG_ARGISNULL(0))
+ {
+ value = PG_GETARG_DATUM(0);
+ isnull = false;
+ }
+ else
+ {
+ value = 0;
+ isnull = true;
+ }
+
+ elmtype = get_fn_expr_argtype(fcinfo->flinfo, 0);
+ if (!OidIsValid(elmtype))
+ elog(ERROR, "could not determine data type of input");
+
+ result = array_fill_internal(dims, lbs, value, elmtype, isnull, fcinfo);
+ PG_RETURN_ARRAYTYPE_P(result);
+ }
+
+ /*
+ * array_fill
+ * Create and fill array with default lower bounds.
+ */
+ Datum
+ array_fill(PG_FUNCTION_ARGS)
+ {
+ ArrayType *dims;
+ ArrayType *result;
+ Oid elmtype;
+ Datum value;
+ bool isnull;
+
+ if (PG_ARGISNULL(1))
+ ereport(ERROR,
+ (errcode(ERRCODE_NULL_VALUE_NOT_ALLOWED),
+ errmsg("dimension array or low bound array cannot be NULL")));
+
+ dims = PG_GETARG_ARRAYTYPE_P(1);
+
+ if (!PG_ARGISNULL(0))
+ {
+ value = PG_GETARG_DATUM(0);
+ isnull = false;
+ }
+ else
+ {
+ value = 0;
+ isnull = true;
+ }
+
+ elmtype = get_fn_expr_argtype(fcinfo->flinfo, 0);
+ if (!OidIsValid(elmtype))
+ elog(ERROR, "could not determine data type of input");
+
+ result = array_fill_internal(dims, NULL, value, elmtype, isnull, fcinfo);
+ PG_RETURN_ARRAYTYPE_P(result);
+ }
+
+ static ArrayType *
+ create_array_envelope(int ndims, int *dimv, int *lbsv, int nbytes,
+ Oid elmtype, int dataoffset)
+ {
+ ArrayType *result;
+
+ result = (ArrayType *) palloc0(nbytes);
+ SET_VARSIZE(result, nbytes);
+ result->ndim = ndims;
+ result->dataoffset = dataoffset;
+ result->elemtype = elmtype;
+ memcpy(ARR_DIMS(result), dimv, ndims * sizeof(int));
+ memcpy(ARR_LBOUND(result), lbsv, ndims * sizeof(int));
+
+ return result;
+ }
+
+ static ArrayType *
+ array_fill_internal(ArrayType *dims, ArrayType *lbs, Datum value,
+ Oid elmtype, bool isnull,
+ FunctionCallInfo fcinfo)
+ {
+ ArrayType *result;
+ int *dimv;
+ int *lbsv;
+ int ndims;
+ int nitems;
+ int deflbs[MAXDIM];
+ int16 elmlen;
+ bool elmbyval;
+ char elmalign;
+ ArrayMetaState *my_extra;
+
+ /*
+ * Params checks
+ */
+ if (ARR_NDIM(dims) != 1)
+ ereport(ERROR,
+ (errcode(ERRCODE_ARRAY_SUBSCRIPT_ERROR),
+ errmsg("wrong number of array subscripts"),
+ errhint("Dimension array must be one dimensional.")));
+
+ if (ARR_LBOUND(dims)[0] != 1)
+ ereport(ERROR,
+ (errcode(ERRCODE_ARRAY_SUBSCRIPT_ERROR),
+ errmsg("wrong range of array_subscripts"),
+ errhint("Lower bound of dimension array must be one.")));
+
+ if (ARR_HASNULL(dims))
+ ereport(ERROR,
+ (errcode(ERRCODE_NULL_VALUE_NOT_ALLOWED),
+ errmsg("dimension values cannot be null")));
+
+ dimv = (int *) ARR_DATA_PTR(dims);
+ ndims = ARR_DIMS(dims)[0];
+
+ if (ndims < 0) /* we do allow zero-dimension arrays */
+ ereport(ERROR,
+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ errmsg("invalid number of dimensions: %d", ndims)));
+ if (ndims > MAXDIM)
+ ereport(ERROR,
+ (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+ errmsg("number of array dimensions (%d) exceeds the maximum allowed (%d)",
+ ndims, MAXDIM)));
+
+ if (lbs != NULL)
+ {
+ if (ARR_NDIM(lbs) != 1)
+ ereport(ERROR,
+ (errcode(ERRCODE_ARRAY_SUBSCRIPT_ERROR),
+ errmsg("wrong number of array subscripts"),
+ errhint("Dimension array must be one dimensional.")));
+
+ if (ARR_LBOUND(lbs)[0] != 1)
+ ereport(ERROR,
+ (errcode(ERRCODE_ARRAY_SUBSCRIPT_ERROR),
+ errmsg("wrong range of array_subscripts"),
+ errhint("Lower bound of dimension array must be one.")));
+
+ if (ARR_HASNULL(lbs))
+ ereport(ERROR,
+ (errcode(ERRCODE_NULL_VALUE_NOT_ALLOWED),
+ errmsg("dimension values cannot be null")));
+
+ if (ARR_DIMS(lbs)[0] != ndims)
+ ereport(ERROR,
+ (errcode(ERRCODE_ARRAY_SUBSCRIPT_ERROR),
+ errmsg("wrong number of array_subscripts"),
+ errhint("Low bound array has different size than dimensions array.")));
+
+ lbsv = (int *) ARR_DATA_PTR(lbs);
+ }
+ else
+ {
+ int i;
+
+ for (i = 0; i < MAXDIM; i++)
+ deflbs[i] = 1;
+
+ lbsv = deflbs;
+ }
+
+ /* fast track for empty array */
+ if (ndims == 0)
+ return construct_empty_array(elmtype);
+
+ nitems = ArrayGetNItems(ndims, dimv);
+
+
+ /*
+ * We arrange to look up info about element type only once per series of
+ * calls, assuming the element type doesn't change underneath us.
+ */
+ my_extra = (ArrayMetaState *) fcinfo->flinfo->fn_extra;
+ if (my_extra == NULL)
+ {
+ fcinfo->flinfo->fn_extra = MemoryContextAlloc(fcinfo->flinfo->fn_mcxt,
+ sizeof(ArrayMetaState));
+ my_extra = (ArrayMetaState *) fcinfo->flinfo->fn_extra;
+ my_extra->element_type = InvalidOid;
+ }
+
+ if (my_extra->element_type != elmtype)
+ {
+ /* Get info about element type */
+ get_typlenbyvalalign(elmtype,
+ &my_extra->typlen,
+ &my_extra->typbyval,
+ &my_extra->typalign);
+ my_extra->element_type = elmtype;
+ }
+
+ elmlen = my_extra->typlen;
+ elmbyval = my_extra->typbyval;
+ elmalign = my_extra->typalign;
+
+ /* compute required space */
+ if (!isnull)
+ {
+ int i;
+ char *p;
+ int nbytes;
+ Datum aux_value = value;
+
+ /* make sure data is not toasted */
+ if (elmlen == -1)
+ value = PointerGetDatum(PG_DETOAST_DATUM(value));
+
+ nbytes = att_addlength_datum(0, elmlen, value);
+ nbytes = att_align_nominal(nbytes, elmalign);
+
+ nbytes *= nitems;
+ /* check for overflow of total request */
+ if (!AllocSizeIsValid(nbytes))
+ ereport(ERROR,
+ (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+ errmsg("array size exceeds the maximum allowed (%d)",
+ (int) MaxAllocSize)));
+
+ nbytes += ARR_OVERHEAD_NONULLS(ndims);
+ result = create_array_envelope(ndims, dimv, lbsv, nbytes,
+ elmtype, 0);
+ p = ARR_DATA_PTR(result);
+ for (i = 0; i < nitems; i++)
+ p += ArrayCastAndSet(value, elmlen, elmbyval, elmalign, p);
+
+ /* cleaning up detoasted copies of datum */
+ if (aux_value != value)
+ pfree((Pointer) value);
+ }
+ else
+ {
+ int nbytes;
+ int dataoffset;
+ bits8 *bitmap;
+
+ dataoffset = ARR_OVERHEAD_WITHNULLS(ndims, nitems);
+ nbytes = dataoffset;
+
+ result = create_array_envelope(ndims, dimv, lbsv, nbytes,
+ elmtype, dataoffset);
+ bitmap = ARR_NULLBITMAP(result);
+ MemSet(bitmap, 0, (nitems + 7) / 8);
+ }
+
+ return result;
+ }
Index: src/include/catalog/catversion.h
===================================================================
RCS file: /cvsroot/pgsql/src/include/catalog/catversion.h,v
retrieving revision 1.467
diff -c -c -r1.467 catversion.h
*** src/include/catalog/catversion.h 14 Jul 2008 00:51:45 -0000 1.467
--- src/include/catalog/catversion.h 16 Jul 2008 00:42:27 -0000
***************
*** 53,58 ****
*/

/* yyyymmddN */
! #define CATALOG_VERSION_NO 200807131

#endif
--- 53,58 ----
*/

/* yyyymmddN */
! #define CATALOG_VERSION_NO 200807151

#endif
Index: src/include/catalog/pg_proc.h
===================================================================
RCS file: /cvsroot/pgsql/src/include/catalog/pg_proc.h,v
retrieving revision 1.505
diff -c -c -r1.505 pg_proc.h
*** src/include/catalog/pg_proc.h 14 Jul 2008 00:51:45 -0000 1.505
--- src/include/catalog/pg_proc.h 16 Jul 2008 00:42:27 -0000
***************
*** 1010,1017 ****
DESCR("array subscripts generator");
DATA(insert OID = 1192 ( generate_subscripts PGNSP PGUID 12 1 1000 f f t t i 2 23 "2277 23" _null_ _null_ _null_ generate_subscripts_nodir - _null_ _null_ ));
DESCR("array subscripts generator");
!
!
DATA(insert OID = 760 ( smgrin PGNSP PGUID 12 1 0 f f t f s 1 210 "2275" _null_ _null_ _null_ smgrin - _null_ _null_ ));
DESCR("I/O");
DATA(insert OID = 761 ( smgrout PGNSP PGUID 12 1 0 f f t f s 1 2275 "210" _null_ _null_ _null_ smgrout - _null_ _null_ ));
--- 1010,1019 ----
DESCR("array subscripts generator");
DATA(insert OID = 1192 ( generate_subscripts PGNSP PGUID 12 1 1000 f f t t i 2 23 "2277 23" _null_ _null_ _null_ generate_subscripts_nodir - _null_ _null_ ));
DESCR("array subscripts generator");
! DATA(insert OID = 1193 ( array_fill PGNSP PGUID 12 1 0 f f f f i 2 2277 "2283 1007" _null_ _null_ _null_ array_fill - _null_ _null_ ));
! DESCR("array constructor with value");
! DATA(insert OID = 1286 ( array_fill PGNSP PGUID 12 1 0 f f f f i 3 2277 "2283 1007 1007" _null_ _null_ _null_ array_fill_with_lower_bounds - _null_ _null_ ));
! DESCR("array constructor with value");
DATA(insert OID = 760 ( smgrin PGNSP PGUID 12 1 0 f f t f s 1 210 "2275" _null_ _null_ _null_ smgrin - _null_ _null_ ));
DESCR("I/O");
DATA(insert OID = 761 ( smgrout PGNSP PGUID 12 1 0 f f t f s 1 2275 "210" _null_ _null_ _null_ smgrout - _null_ _null_ ));
Index: src/include/utils/array.h
===================================================================
RCS file: /cvsroot/pgsql/src/include/utils/array.h,v
retrieving revision 1.67
diff -c -c -r1.67 array.h
*** src/include/utils/array.h 28 Apr 2008 14:48:57 -0000 1.67
--- src/include/utils/array.h 16 Jul 2008 00:42:28 -0000
***************
*** 202,207 ****
--- 202,209 ----
extern Datum array_smaller(PG_FUNCTION_ARGS);
extern Datum generate_subscripts(PG_FUNCTION_ARGS);
extern Datum generate_subscripts_nodir(PG_FUNCTION_ARGS);
+ extern Datum array_fill(PG_FUNCTION_ARGS);
+ extern Datum array_fill_with_lower_bounds(PG_FUNCTION_ARGS);

extern Datum array_ref(ArrayType *array, int nSubscripts, int *indx,
int arraytyplen, int elmlen, bool elmbyval, char elmalign,
Index: src/test/regress/expected/arrays.out
===================================================================
RCS file: /cvsroot/pgsql/src/test/regress/expected/arrays.out,v
retrieving revision 1.36
diff -c -c -r1.36 arrays.out
*** src/test/regress/expected/arrays.out 28 Apr 2008 14:48:57 -0000 1.36
--- src/test/regress/expected/arrays.out 16 Jul 2008 00:42:28 -0000
***************
*** 933,935 ****
--- 933,993 ----

drop function unnest1(anyarray);
drop function unnest2(anyarray);
+ select array_fill(null::integer, array[3,3],array[2,2]);
+ array_fill
+ -----------------------------------------------------------------
+ [2:4][2:4]={{NULL,NULL,NULL},{NULL,NULL,NULL},{NULL,NULL,NULL}}
+ (1 row)
+
+ select array_fill(null::integer, array[3,3]);
+ array_fill
+ ------------------------------------------------------
+ {{NULL,NULL,NULL},{NULL,NULL,NULL},{NULL,NULL,NULL}}
+ (1 row)
+
+ select array_fill(null::text, array[3,3],array[2,2]);
+ array_fill
+ -----------------------------------------------------------------
+ [2:4][2:4]={{NULL,NULL,NULL},{NULL,NULL,NULL},{NULL,NULL,NULL}}
+ (1 row)
+
+ select array_fill(null::text, array[3,3]);
+ array_fill
+ ------------------------------------------------------
+ {{NULL,NULL,NULL},{NULL,NULL,NULL},{NULL,NULL,NULL}}
+ (1 row)
+
+ select array_fill(7, array[3,3],array[2,2]);
+ array_fill
+ --------------------------------------
+ [2:4][2:4]={{7,7,7},{7,7,7},{7,7,7}}
+ (1 row)
+
+ select array_fill(7, array[3,3]);
+ array_fill
+ ---------------------------
+ {{7,7,7},{7,7,7},{7,7,7}}
+ (1 row)
+
+ select array_fill('juhu'::text, array[3,3],array[2,2]);
+ array_fill
+ -----------------------------------------------------------------
+ [2:4][2:4]={{juhu,juhu,juhu},{juhu,juhu,juhu},{juhu,juhu,juhu}}
+ (1 row)
+
+ select array_fill('juhu'::text, array[3,3]);
+ array_fill
+ ------------------------------------------------------
+ {{juhu,juhu,juhu},{juhu,juhu,juhu},{juhu,juhu,juhu}}
+ (1 row)
+
+ -- raise exception
+ select array_fill(1, null, array[2,2]);
+ ERROR: dimension array or low bound array cannot be NULL
+ select array_fill(1, array[2,2], null);
+ ERROR: dimension array or low bound array cannot be NULL
+ select array_fill(1, array[3,3], array[1,1,1]);
+ ERROR: wrong number of array_subscripts
+ HINT: Low bound array has different size than dimensions array.
+ select array_fill(1, array[1,2,null]);
+ ERROR: dimension values cannot be null
Index: src/test/regress/sql/arrays.sql
===================================================================
RCS file: /cvsroot/pgsql/src/test/regress/sql/arrays.sql,v
retrieving revision 1.28
diff -c -c -r1.28 arrays.sql
*** src/test/regress/sql/arrays.sql 28 Apr 2008 14:48:58 -0000 1.28
--- src/test/regress/sql/arrays.sql 16 Jul 2008 00:42:28 -0000
***************
*** 357,359 ****
--- 357,373 ----

drop function unnest1(anyarray);
drop function unnest2(anyarray);
+
+ select array_fill(null::integer, array[3,3],array[2,2]);
+ select array_fill(null::integer, array[3,3]);
+ select array_fill(null::text, array[3,3],array[2,2]);
+ select array_fill(null::text, array[3,3]);
+ select array_fill(7, array[3,3],array[2,2]);
+ select array_fill(7, array[3,3]);
+ select array_fill('juhu'::text, array[3,3],array[2,2]);
+ select array_fill('juhu'::text, array[3,3]);
+ -- raise exception
+ select array_fill(1, null, array[2,2]);
+ select array_fill(1, array[2,2], null);
+ select array_fill(1, array[3,3], array[1,1,1]);
+ select array_fill(1, array[1,2,null]);

Patch applied, with minor adjustments in error message wording, and with
documentation added; committed patch attached.

---------------------------------------------------------------------------

Pavel Stehule wrote:
> Hello
>
> Proposal: http://archives.postgresql.org/pgsql-hackers/2008-06/msg00057.php
>
> I changed name to array_fill and order of arguments.
>
> postgres=# SELECT array_fill(0, ARRAY[2,3]);
> array_fill
> -------------------
> {{0,0,0},{0,0,0}}
> (1 row)
>
> postgres=# SELECT array_fill(0, ARRAY[2,3], ARRAY[1,2]);
> array_fill
> ------------------------------
> [1:2][2:4]={{0,0,0},{0,0,0}}
> (1 row)
>
> postgres=# SELECT array_fill(0, ARRAY[4], ARRAY[2]);
> array_fill
> -----------------
> [2:5]={0,0,0,0}
> (1 row)
>
> postgres=# SELECT array_fill(NULL::int, ARRAY[4]);
> array_fill
> -----------------------
> {NULL,NULL,NULL,NULL}
> (1 row)
>
> Regards
> Pavel Stehule

[ Attachment, skipping... ]

>
> --
> Sent via pgsql-patches mailing list (pgsql-patches@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-patches

--
Bruce Momjian <bruce@momjian.us>

http://momjian.us

EnterpriseDB

http://enterprisedb.com

+ If your life is a hard drive, Christ can be your backup. +

[COMMITTERS] pgsql: Add array_fill() to create arrays initialized with a value.

Log Message:
-----------
Add array_fill() to create arrays initialized with a value.

Pavel Stehule

Modified Files:
--------------
pgsql/doc/src/sgml:
func.sgml (r1.440 -> r1.441)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/doc/src/sgml/func.sgml?r1=1.440&r2=1.441)
pgsql/src/backend/utils/adt:
arrayfuncs.c (r1.145 -> r1.146)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/backend/utils/adt/arrayfuncs.c?r1=1.145&r2=1.146)
pgsql/src/include/catalog:
catversion.h (r1.467 -> r1.468)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/include/catalog/catversion.h?r1=1.467&r2=1.468)
pg_proc.h (r1.505 -> r1.506)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/include/catalog/pg_proc.h?r1=1.505&r2=1.506)
pgsql/src/include/utils:
array.h (r1.67 -> r1.68)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/include/utils/array.h?r1=1.67&r2=1.68)
pgsql/src/test/regress/expected:
arrays.out (r1.36 -> r1.37)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/test/regress/expected/arrays.out?r1=1.36&r2=1.37)
pgsql/src/test/regress/sql:
arrays.sql (r1.28 -> r1.29)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/test/regress/sql/arrays.sql?r1=1.28&r2=1.29)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

Re: [GENERAL] Out of memry with large result set

On Mon, Jul 14, 2008 at 11:53 PM, olivier.scalbert@algosyn.com
<olivier.scalbert@algosyn.com> wrote:
>
>> Try copy (query) to stdout.
>>
>> For me, psql sits at 4.9mb ram on a 3x10^16 row query.
>>
>> klint.
>>
>
> Thanks Klint.
> Can I use a 'copy to' for a query ? I thought I can only do 'copy
> table to stdout'
> I will do some tests tomorrow and keep you inform.

In 8.2.9 this works:

copy (select * from mytable) to stdout;
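
If COPY doesn't fit the use case, another way to keep client-side memory flat is to fetch the result in chunks through a cursor; a minimal sketch (table name as in the example above):

begin;
declare big_cur no scroll cursor for select * from mytable;
fetch forward 10000 from big_cur;   -- repeat until it returns no rows
close big_cur;
commit;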

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [HACKERS] PATCH: CITEXT 2.0 v3

On Jul 15, 2008, at 12:56, Tom Lane wrote:

> Don't run the tests in a read-only directory, perhaps.

Yes, I changed the owner to the postgres system user and that did the
trick.

>
>> Or do they matter for sanity-checking citext?
>
> Hard to tell --- I'd suggest trying to get a clean run. As for what
> you
> have, the first diff hunk suggests you've got the wrong function
> properties for citextsend/citextrecv.

Here's the new diff:

*** ./expected/opr_sanity.out Mon Jul 14 21:55:49 2008
--- ./results/opr_sanity.out Tue Jul 15 17:41:03 2008
***************
*** 87,94 ****
p1.provolatile != p2.provolatile OR
p1.pronargs != p2.pronargs);
oid | proname | oid | proname
! -----+---------+-----+---------
! (0 rows)

-- Look for uses of different type OIDs in the argument/result type fields
-- for different aliases of the same built-in function.
--- 87,96 ----
p1.provolatile != p2.provolatile OR
p1.pronargs != p2.pronargs);
oid | proname | oid | proname
! ------+----------+-------+------------
! 2414 | textrecv | 87258 | citextrecv
! 2415 | textsend | 87259 | citextsend
! (2 rows)

-- Look for uses of different type OIDs in the argument/result type fields
-- for different aliases of the same built-in function.
***************
*** 110,117 ****
prorettype | prorettype
------------+------------
25 | 1043
1114 | 1184
! (2 rows)

SELECT DISTINCT p1.proargtypes[0], p2.proargtypes[0]
FROM pg_proc AS p1, pg_proc AS p2
--- 112,120 ----
prorettype | prorettype
------------+------------
25 | 1043
+ 25 | 87255
1114 | 1184
! (3 rows)

SELECT DISTINCT p1.proargtypes[0], p2.proargtypes[0]
FROM pg_proc AS p1, pg_proc AS p2
***************
*** 124,133 ****
-------------+-------------
25 | 1042
25 | 1043
1114 | 1184
1560 | 1562
2277 | 2283
! (5 rows)

SELECT DISTINCT p1.proargtypes[1], p2.proargtypes[1]
FROM pg_proc AS p1, pg_proc AS p2
--- 127,138 ----
-------------+-------------
25 | 1042
25 | 1043
+ 25 | 87255
+ 1042 | 87255
1114 | 1184
1560 | 1562
2277 | 2283
! (7 rows)

SELECT DISTINCT p1.proargtypes[1], p2.proargtypes[1]
FROM pg_proc AS p1, pg_proc AS p2
***************
*** 139,148 ****
proargtypes | proargtypes
-------------+-------------
23 | 28
1114 | 1184
1560 | 1562
2277 | 2283
! (4 rows)

SELECT DISTINCT p1.proargtypes[2], p2.proargtypes[2]
FROM pg_proc AS p1, pg_proc AS p2
--- 144,154 ----
proargtypes | proargtypes
-------------+-------------
23 | 28
+ 25 | 87255
1114 | 1184
1560 | 1562
2277 | 2283
! (5 rows)

SELECT DISTINCT p1.proargtypes[2], p2.proargtypes[2]
FROM pg_proc AS p1, pg_proc AS p2
***************
*** 305,311 ****
142 | 25 | 0 | a
142 | 1043 | 0 | a
142 | 1042 | 0 | a
! (6 rows)

-- **************** pg_operator ****************
-- Look for illegal values in pg_operator fields.
--- 311,318 ----
142 | 25 | 0 | a
142 | 1043 | 0 | a
142 | 1042 | 0 | a
! 87255 | 1042 | 0 | a
! (7 rows)

-- **************** pg_operator ****************
-- Look for illegal values in pg_operator fields.

======================================================================

So I guess my question is: what is wrong with the properties for
citextsend/citextrecv, and what else might these failures indicate is
wrong?

CREATE OR REPLACE FUNCTION citextrecv(internal)
RETURNS citext
AS 'textrecv'
LANGUAGE 'internal' IMMUTABLE STRICT;

CREATE OR REPLACE FUNCTION citextsend(citext)
RETURNS bytea
AS 'textsend'
LANGUAGE 'internal' IMMUTABLE STRICT;

CREATE TYPE citext (
INPUT = citextin,
OUTPUT = citextout,
RECEIVE = citextrecv,
SEND = citextsend,
INTERNALLENGTH = VARIABLE,
STORAGE = extended
);

Thanks,

David

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [PATCHES] [HACKERS] WITH RECURSIVE updated to CVS TIP

> On Tue, Jul 08, 2008 at 06:01:05PM +0900, Tatsuo Ishii wrote:
> > Here is the patches he made against CVS HEAD (as of today).
> >
> > According to him followings are fixed with the patches:
> >
> > - fix crash with DISTINCT
> > - fix creating VIEW
> > - fix the case when recursion plan has another recursion plan under it
> > - fix WITH RECURSIVE ...(..) SELECT ...WHERE.. returns wrong result
> > - fix infinite recursion with OUTER JOIN
>
> Great!
>
> I've patched psql for some partial support of WITH [RECURSIVE].
>
> http://git.postgresql.org/?p=~davidfetter/postgresql/.git;a=commit;h=da63f9a82b9e902b5542f788b2e6e6bc95221793

Thanks. I will incorporate them with the proposed patches.

> > Not yet fixed:
> >
> > - detect certain queries that are not valid according to the standard
> > - sort query names according to their dependencies
>
> Is there something in the standard on how to do this? How to sort the
> nodes other ways?

No idea. What do you think about allowing only one query name for the
moment? I guess most WITH RECURSIVE use cases get by with a single
query name.
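
For what it's worth, the single-query-name case in the standard's syntax looks like this (a sketch; the patch's exact behavior at this stage may differ):

WITH RECURSIVE t(n) AS (
    VALUES (1)
  UNION ALL
    SELECT n + 1 FROM t WHERE n < 100
)
SELECT sum(n) FROM t;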
--
Tatsuo Ishii
SRA OSS, Inc. Japan

--
Sent via pgsql-patches mailing list (pgsql-patches@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-patches

Re: [pgsql-advocacy] [pdxpug] Pg booth staffing at OSCON

On Tuesday 15 July 2008 10:42:07 Daniel Johnson wrote:
> > Well, you can obviously get into OSCAMP and FOSSCoach and the BOFs and
> > the other free events. Not that I'd be promoting such a thing (as an
> > OSCON committee member), but if there's *a* specific session you want to
> > attend, you can probably persuade one of the several PostgreSQL speakers
> > to loan you their badge.
>
> No offense, but in the years that I have been going to OSCON I've
> never had anywhere near enough money to buy a real pass to OSCON.
> Every year it has been volunteering for a booth, or attending OSCAMP.
> Last year I was able to afford Ubuntu Live through one of the more
> extreme discount codes, but that is it. There are lots of people who
> participate as much as they can by way of volunteering, and I am one
> of them. We help make the conference happen so please treat us with
> respect even if we can't afford to pay our way in.
>

I can't imagine how you could have taken Josh's post to be anything but
courteous and respectful, but I do encourage you to join us at the BOF where
we can settle it once and for all.... sumo suits anyone?

http://www.maineventweb.com/page/page/2916926.htm

--
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL

--
Sent via pdxpug mailing list (pdxpug@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pdxpug

Re: [BUGS] BUG #4307: INSERT fails with primary key contraint

"Oskars Ozols" <oskars.ozols@gmail.com> writes:

> id bigint NOT NULL DEFAULT nextval(('public.event_log_id_seq'::text)::regclass),

> 2008-07-15 12:32:03 EEST STATEMENT: INSERT INTO public.event_log
> (date_time, ip_address, action_type, severity, parameters, web_address,
> server, user_id, id) VALUES ('2008-07-15 12:28:50.000000',
> '123.123.123.123', 'WebServices.SomeService:LogError', 70000, 'error text',
> 'http://123.123.123.123/WebServices/SomeService.asmx', '4', 75, 156112)

There's something strange here. Your SQL statement includes the id as a
literal constant 156112. This isn't the normal way to write this query. This
is defeating the point of the DEFAULT you see in the table definition.

Postgres guarantees that the nextval() function will only return each value
once. But it's not clear from this log how your application is generating the
156112 value which it is explicitly putting in the query. If it's getting it
by calling nextval() then it's somehow using it twice.

It's also possible someone has written code to pick primary key values by
calling "select max(id)+1". That is guaranteed to have race conditions like
this.

The safest thing to do is to just leave out the id column from your INSERT
statement. Just let the DEFAULT expression generate a value for you. Then you
can use currval('event_log_id_seq') to find out what value it generated.
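
A minimal sketch of that pattern, using the column names from the statement quoted above (values abbreviated):

INSERT INTO public.event_log
    (date_time, ip_address, action_type, severity, parameters,
     web_address, server, user_id)
VALUES ('2008-07-15 12:28:50.000000', '123.123.123.123',
        'WebServices.SomeService:LogError', 70000, 'error text',
        'http://123.123.123.123/WebServices/SomeService.asmx', '4', 75);

SELECT currval('public.event_log_id_seq');  -- id just assigned in this session

On 8.2 and later, INSERT ... RETURNING id does the same thing in a single round trip.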

--
Gregory Stark
EnterpriseDB

http://www.enterprisedb.com

Ask me about EnterpriseDB's RemoteDBA services!

--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs

Re: [BUGS] BUG #4307: INSERT fails with primary key contraint

"Oskars Ozols" <oskars.ozols@gmail.com> writes:
> I have noticed that during high load Postgres starts to reuse old sequence
> values (like 156112 in my sample). Sometimes it's ok, but sometimes it fails
> with the error above.

This is fairly hard to believe; the sequence code was debugged years
ago. Particularly seeing that your application is evidently supplying
the id value for itself in the INSERT (from a previous nextval, or perhaps
some other way?), it seems much more likely that there's a bug on the
application side.

If you'd like us to pursue this, please put together a self-contained
test case. Assuming it's real, perhaps a simple custom script for
pgbench would serve to show the problem.

regards, tom lane

--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs

[GENERAL] 8.3.3 regression test on SCO 5.0.7

Hi All,

I have been working through upgrades on legacy business systems on old Linux and SCO platforms.
With the help of this list the Linux problems are handled, thanks.

The SCO build is not yet done.
I have managed to compile 8.3.3 with the help of the archives.
But there were lots of warnings and I would like to see the results of the regression tests.

I'm looking for help to make the regression tests happen.

Thanks Allan


make check at the top level fails with:

Running in noclean mode. Mistakes will not be cleaned up.
could not determine encoding for locale "C_C.C": codeset is ""
initdb: could not find suitable encoding for locale C_C.C
Rerun initdb with the -E option.
Try "initdb --help" for more information.
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locale C_C.C.

Moving to ......./postgresql-8.3.3/src/test/regress

and running

./pg_regress --temp-install=./tmp_check --top-builddir=../../.. --srcdir=/upgrades/postgres/postgresql-8.3.3/src/test/regress --temp-port=55432 --schedule=./parallel_schedule --multibyte=SQL_ASCII --load-language=plpgsql --no-locale

( note the added --no-locale )

gets me further, but fails with:

Running in noclean mode. Mistakes will not be cleaned up.
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locale C.
The default database encoding has accordingly been set to SQL_ASCII.
The default text search configuration will be set to "english".

creating directory /upgrades/postgres/postgresql-8.3.3/src/test/regress/./tmp_check/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers/max_fsm_pages ... 32MB/204800
creating configuration files ... ok
creating template1 database in /upgrades/postgres/postgresql-8.3.3/src/test/regress/./tmp_check/data/base/1 ... =: is not an identifier
child process exited with exit code 1
initdb: data directory "/upgrades/postgres/postgresql-8.3.3/src/test/regress/./tmp_check/data" not removed at user's request


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [HACKERS] Lookup penalty for VARIADIC patch

"Pavel Stehule" <pavel.stehule@gmail.com> writes:
> 2008/7/15 Tom Lane <tgl@sss.pgh.pa.us>:
>> Also, it occurs to me that we could buy back a good part of the extra
>> space if we allowed pg_proc.probin to be NULL for internal functions.
>> Right now it's always "-" in that case, which is useless ...

> probin is used in some unofficial PL hacks, so this space is sometimes
> used.

Sure, if you want to use it you can. I'm just saying we should allow it
to be really NULL, instead of a dummy value, when it isn't being used.

regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] Core team statement on replication in PostgreSQL

Alvaro Herrera wrote:
> Bruce Momjian wrote:
> > Alvaro Herrera wrote:
> > > Bruce Momjian wrote:
> > >
> > > > Added to TODO:
> > > >
> > > > o Reduce PITR WAL file size by removing full page writes and
> > > > by removing trailing bytes to improve compression
> > >
> > > If we remove full page writes, how does hint bit setting get propagated
> > > to the slave?
> >
> > We would remove full page writes that are needed for crash recovery, but
> > perhaps keep other full pages.
>
> How do you tell which is which?

The WAL format would have to be modified to indicate which entries can
be discarded.

--
Bruce Momjian <bruce@momjian.us>

http://momjian.us

EnterpriseDB

http://enterprisedb.com

+ If your life is a hard drive, Christ can be your backup. +

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[pdxpug] CMD Employment opportunity

Hello,

CMD is hiring a Sysadmin/Junior PostgreSQL DBA...

http://www.commandprompt.com/about/careers/

Sincerely,

Joshua D. Drake

--
The PostgreSQL Company since 1997: http://www.commandprompt.com/

PostgreSQL Community Conference: http://www.postgresqlconference.org/
United States PostgreSQL Association: http://www.postgresql.us/
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate


--
Sent via pdxpug mailing list (pdxpug@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pdxpug

Re: [ADMIN] More Autovacuum questions

Matthew T. O'Connor wrote:

> As Alvaro already said this is a case where autovacuum still isn't
> great.

While I have your attention ;-) do you have any ideas on how to improve
this? I don't see anything that looks like a solution for this case.

--
Alvaro Herrera

http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin

Re: [HACKERS] Core team statement on replication in PostgreSQL

Bruce Momjian wrote:
> Alvaro Herrera wrote:
> > Bruce Momjian wrote:
> >
> > > Added to TODO:
> > >
> > > o Reduce PITR WAL file size by removing full page writes and
> > > by removing trailing bytes to improve compression
> >
> > If we remove full page writes, how does hint bit setting get propagated
> > to the slave?
>
> We would remove full page writes that are needed for crash recovery, but
> perhaps keep other full pages.

How do you tell which is which?

--
Alvaro Herrera

http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [GENERAL] pg_dump

-------------- Original message ----------------------
From: Bob Pawley <rjpawley@shaw.ca>
> Is there a GUI for pg_dump???
>
> Bob
>

http://www.pgadmin.org/docs/1.8/backup.html

--
Adrian Klaver
aklaver@comcast.net


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [GENERAL] 8.3.3 Complie issue

> Old Slackware? If you really want to compile there, I think it should
> work by just removing the -Wl,--version-script param from the
> link line.

Thanks Alvaro,
Worked through them.
Regression tests show all is OK, save for the handling of "Infinity".
Don't think that will be a problem 'cause I never go there.

Allan



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [HACKERS] Core team statement on replication in PostgreSQL

Alvaro Herrera wrote:
> Bruce Momjian wrote:
>
> > Added to TODO:
> >
> > o Reduce PITR WAL file size by removing full page writes and
> > by removing trailing bytes to improve compression
>
> If we remove full page writes, how does hint bit setting get propagated
> to the slave?

We would remove full page writes that are needed for crash recovery, but
perhaps keep other full pages.

--
Bruce Momjian <bruce@momjian.us>

http://momjian.us

EnterpriseDB

http://enterprisedb.com

+ If your life is a hard drive, Christ can be your backup. +

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] Lookup penalty for VARIADIC patch

2008/7/15 Tom Lane <tgl@sss.pgh.pa.us>:
> The proposed variadic-functions patch inserts some none-too-cheap code
> into FuncnameGetCandidates (it's deconstructing the proargmodes column
> to see if the function is variadic or not) which gets executed whether
> or not there are any variadic functions involved. I checked whether
> this would cause a noticeable slowdown in practice, and got a
> discouraging answer:
>
> $ cat functest.sql
> select sin(5), cos(45);
> $ pgbench -c 1 -t 10000 -n -f functest.sql regression
> transaction type: Custom query
> scaling factor: 1
> query mode: simple
> number of clients: 1
> number of transactions per client: 10000
> number of transactions actually processed: 10000/10000
> tps = 927.418555 (including connections establishing)
> tps = 928.953281 (excluding connections establishing)
>
> That's with the patch. CVS HEAD gets
> tps = 1017.901218 (including connections establishing)
> tps = 1019.724948 (excluding connections establishing)
>
> so that code is adding about 10% to the total round-trip execution time
> for the select --- considering all the other overhead involved there,
> that means the actual cost of FuncnameGetCandidates has gone up probably
> by an order of magnitude. And that's for the *best* case, where
> proargmodes is null so SysCacheGetAttr will fall out without returning
> an array to examine. This doesn't seem acceptable to me.
>
> What I'm thinking of doing is adding a column to pg_proc that provides
> the needed info in a trivial-to-get-at format. There are two ways we
> could do it: a bool column that is TRUE if the function is variadic,
> or an oid column that is the variadic array's element type, or zero
> if the function isn't variadic. The second would take more space but
> would avoid having to do a catalog lookup to get the element type in
> the case that the function is indeed variadic. I'm leaning to the
> second way but wanted to know if anyone objected?
>
> Also, it occurs to me that we could buy back a good part of the extra
> space if we allowed pg_proc.probin to be NULL for internal functions.
> Right now it's always "-" in that case, which is useless ...

probin is used in some unofficial PL hacks, so this space is sometimes
used. I vote for a special column that contains the variadic element
type.

Regards
Pavel Stehule
>
> regards, tom lane
>
> --
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers
>

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] Core team statement on replication in PostgreSQL

Bruce Momjian wrote:

> Added to TODO:
>
> o Reduce PITR WAL file size by removing full page writes and
> by removing trailing bytes to improve compression

If we remove full page writes, how does hint bit setting get propagated
to the slave?

--
Alvaro Herrera

http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[HACKERS] Lookup penalty for VARIADIC patch

The proposed variadic-functions patch inserts some none-too-cheap code
into FuncnameGetCandidates (it's deconstructing the proargmodes column
to see if the function is variadic or not) which gets executed whether
or not there are any variadic functions involved. I checked whether
this would cause a noticeable slowdown in practice, and got a
discouraging answer:

$ cat functest.sql
select sin(5), cos(45);
$ pgbench -c 1 -t 10000 -n -f functest.sql regression
transaction type: Custom query
scaling factor: 1
query mode: simple
number of clients: 1
number of transactions per client: 10000
number of transactions actually processed: 10000/10000
tps = 927.418555 (including connections establishing)
tps = 928.953281 (excluding connections establishing)

That's with the patch. CVS HEAD gets
tps = 1017.901218 (including connections establishing)
tps = 1019.724948 (excluding connections establishing)

so that code is adding about 10% to the total round-trip execution time
for the select --- considering all the other overhead involved there,
that means the actual cost of FuncnameGetCandidates has gone up probably
by an order of magnitude. And that's for the *best* case, where
proargmodes is null so SysCacheGetAttr will fall out without returning
an array to examine. This doesn't seem acceptable to me.

What I'm thinking of doing is adding a column to pg_proc that provides
the needed info in a trivial-to-get-at format. There are two ways we
could do it: a bool column that is TRUE if the function is variadic,
or an oid column that is the variadic array's element type, or zero
if the function isn't variadic. The second would take more space but
would avoid having to do a catalog lookup to get the element type in
the case that the function is indeed variadic. I'm leaning to the
second way but wanted to know if anyone objected?

Also, it occurs to me that we could buy back a good part of the extra
space if we allowed pg_proc.probin to be NULL for internal functions.
Right now it's always "-" in that case, which is useless ...
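
For readers unfamiliar with the feature under discussion, this is roughly the kind of declaration the patch enables (a sketch in the proposed syntax; the final form may differ), using generate_subscripts, which is already in HEAD:

CREATE FUNCTION mleast(VARIADIC arr numeric[]) RETURNS numeric AS $$
    SELECT min($1[i]) FROM generate_subscripts($1, 1) g(i);
$$ LANGUAGE SQL;

SELECT mleast(10, -1, 5, 4.4);   -- the arguments are gathered into one numeric[] array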

regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [SQL] How to GROUP results BY month

On Tue, Jul 15, 2008 at 6:12 AM, Oliveiros Cristina
<oliveiros.cristina@marktest.pt> wrote:
> Howdy, all,
>
> I have a problem.
>
> I have a table in which one of the fields is of type date.
>
> I need to obtain the totals of the other fields on a by-month basis.
> Is there any easy way to do this using GROUP BY or any other construct?
>

In addition to the responses on grouping by extract(month from
timestamp), you can also index on this expression as long as the column
isn't timestamp with time zone. With that index in place, grouping by
month can be pretty fast even for large datasets covering many months.
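
A sketch of that approach, assuming a hypothetical table "orders" with a timestamp (without time zone) column "ordered_at" and a numeric column "total":

CREATE INDEX orders_month_idx ON orders ((extract(month from ordered_at)));

SELECT extract(month from ordered_at) AS month_no, sum(total)
FROM orders
GROUP BY extract(month from ordered_at);

Note the caveat from the other reply still applies: this groups January 2007 and January 2008 together.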

--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql

Re: [GENERAL] optimizer ignoring primary key and doing sequence scan

On Tue, Jul 15, 2008 at 12:19 AM, Edoardo Panfili <edoardo@aspix.it> wrote:
> Scott Marlowe ha scritto:
>>
>> On Mon, Jul 14, 2008 at 1:54 PM, Chris Hoy <chris.hoy@hoyc.fsnet.co.uk>
>> wrote:
>>>
>>> Hi
>>>
>>> I have a number of tables in my database where the queries appear to
>>> ignoring the primary key and doing a seq scan instead, however other
>>> tables
>>> appear to be fine. I can see any difference between them.
>>>
> >> Is there any way of determining why the optimizer isn't picking up the
> >> primary key?
>>>
>>> Version 8.3.3 windows
>>>
>>> An example of a non working table is:
>>>
>>> select * from industries where industryid = 1;
>>> "Seq Scan on industries (cost=0.00..1.02 rows=1 width=116) (actual
>>> time=0.011..0.013 rows=1 loops=1)"
>>
>> According to this there's only one row in the table. why WOULD
>> postgresql use an index when it can just scan the one row table in a
>> split second.
>>
> I agree with you that it can depend on the size of the table, but where can
> you read that the table contains only one row?

Actually I meant to write one page or block there, not row. But it's
the same diff really. 1 Row or 100, if they fit in an 8k block
together, pgsql isn't going to use an index to look them up. It kinda
knows which block they'll be in ahead of time.

> I try with my table (39910 rows, no index on column note)
> explain analyze select * from table where note='single example';
>
> Seq Scan on table (cost=0.00..2458.88 rows=13 width=327) (actual
> time=10.901..481.896 rows=1 loops=1)

Yeah that'll take 481 blocks scanned to get the answer.

> On the postgres manual I can find "Estimated number of rows output by this
> plan node (Again, only if executed to completion.)" regarding the third
> parameter of the explain
>
> Where is my error?

I'm not seeing an error, just a possible misunderstanding of how pgsql
plans queries. Without an index on note, it has to do a sequential
scan; there's no index to shorten the work.

The estimated number of rows are how many pgsql thinks it will get
back when it runs the query, based on the statistics it has stored
from the last time analyze was run.

The actual number of rows listed in the explain analyze output is the
actual number of rows, not the estimated number...
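
If lookups on the note column are common, a plain btree index would let the planner skip the sequential scan; a sketch with a placeholder table name, since "table" above stands in for the real one:

CREATE INDEX mytable_note_idx ON mytable (note);
EXPLAIN ANALYZE SELECT * FROM mytable WHERE note = 'single example';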

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [pgsql-www] pugs.postgresql.org down?

On Tue, Jul 15, 2008 at 1:09 PM, Richard Broersma
<richard.broersma@gmail.com> wrote:
> I've tried the pugs.postgresql.org site for about the last hour, and
> haven't been able to raise it. Is it down?

Marc is working on getting it fixed. It should come up fine now.

There was a DDoS attack, but it looks like it is now fixed.


--
Selena Deckelmann
United States PostgreSQL Association - http://www.postgresql.us
PDXPUG - http://pugs.postgresql.org/pdx
Me - http://www.chesnok.com/daily

--
Sent via pgsql-www mailing list (pgsql-www@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-www

Re: [HACKERS] Fwd: Proposal - UUID data type

Kless wrote:
> I'm sorry, but it was necessary that certain questions were answered by
> someone with wide knowledge of databases, and in particular of their own
> database. This was the only way, and I believe it has been positive, at
> least for the end users (everyone who has to choose a database). At least
> it clarifies how each community works, and what is true or not.


Nonsense. It was not at all necessary.

If someone wants to post on this mailing list they should do it
themselves. If not, you shouldn't cross-post for them.

cheers

andrew


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [PERFORM] requested shared memory size overflows size_t

Hey there;

As Tom noted before, maybe you're not using the right postgres.  Solaris 10 comes with a postgres, but on SPARC it's 32-bit compiled (I can't speak to x86 Solaris, though).

Assuming that's not the problem, you can be 100% sure whether your Postgres binary is actually 64-bit by using the file command on the 'postgres' executable.  A sample from 64-bit SPARC looks like this:

postgres:       ELF 64-bit MSB executable SPARCV9 Version 1, UltraSPARC3 Extensions Required, dynamically linked, not stripped

But x86 should show something similar.  I have run Postgres up to about 8 gigs of RAM on Solaris without trouble.  Anyway, sorry if this is obvious / not helpful but good luck :)

Steve

On Tue, Jul 15, 2008 at 10:25 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
"Uwe Bartels" <uwe.bartels@gmail.com> writes:
> When trying to to set shared_buffers greater then 3,5 GB on 32 GB x86
> machine with solaris 10 I running in this error:
> FATAL: requested shared memory size overflows size_t

> The solaris x86 ist 64-bit and the compiled postgres is as well 64-bit.

Either it's not really a 64-bit build, or you made an error in your
math.  What did you try to set shared_buffers to, exactly?  Did you
increase any other parameters at the same time?

                       regards, tom lane

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

Re: [GENERAL] pg_dump

On 15/07/2008 19:31, Bob Pawley wrote:
> Is there a GUI for pg_dump???

Yep - it's called PgAdmin!! :-)

Ray.


------------------------------------------------------------------
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
rod@iol.ie
Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals
------------------------------------------------------------------

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [pgsql-www] Pugs.postgresql.org issue?

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1


Should be fixed now ... DDoS attack, was having delays from MCI/Worldcom
getting the offending IP blocked ...

- --On Tuesday, July 15, 2008 12:25:15 -0700 Selena Deckelmann
<selenamarie@gmail.com> wrote:

> Marc,
>
> On Tue, Jul 15, 2008 at 11:13 AM, Chander Ganesan <chander@otg-nc.com> wrote:
>> I notice today (and several times in the past) that pugs.postgresql.org is
>> non-responsive entirely (returning a blank page), or just extremely slow.
>> Not sure if anyone was/is aware of the issue, so I thought I'd post it
>> here:
>>
>> I'm going to the pugs.postgresql.org server at 200.46.204.175
>>
>> I've been trying to load the page for at least 2 minutes now...
>
> I can't login to the machine either.
>
> Marc -- can you have a look?
>
> -selena
>
>
> --
> Selena Deckelmann
> United States PostgreSQL Association - http://www.postgresql.us
> PDXPUG - http://pugs.postgresql.org/pdx
> Me - http://www.chesnok.com/daily

- --
Marc G. Fournier Hub.Org Hosting Solutions S.A. (http://www.hub.org)
Email . scrappy@hub.org MSN . scrappy@hub.org
Yahoo . yscrappy Skype: hub.org ICQ . 7615664
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2.0.9 (FreeBSD)

iEYEARECAAYFAkh9DZAACgkQ4QvfyHIvDvODzgCg6bw6LwMm8rJAAhCd4TzDGe5p
tRcAn0qkDWOxKElWaWo0HY+tu0PGzJlm
=Nzcp
-----END PGP SIGNATURE-----


--
Sent via pgsql-www mailing list (pgsql-www@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-www

Re: [ADMIN] Backup and failover process

>>> "Campbell, Lance" <lance@illinois.edu> wrote:

> What happens if you take an SQL snapshot of a database while
> creating WAL archives then later restore from that SQL snapshot and
> apply those WAL files?

What do you mean by "an SQL snapshot of a database"? WAL files only
come into play for backup techniques which involve file copies, not
dumps done using SQL commands (like pg_dump).
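
For contrast, the file-copy style of backup that WAL archives can be replayed against is bracketed by two SQL calls; a minimal sketch (the copy itself happens outside SQL, e.g. with tar or rsync):

SELECT pg_start_backup('nightly base backup');
-- copy the data directory at the filesystem level here
SELECT pg_stop_backup();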

> Will there be a problem if the transactions
> within the newest WAL file after the SQL snapshot was taken cause
> problems when they are applied?

Point In Time Recovery (PITR) backup techniques allow you to restore
to a specified point in time, so you could restore up to the point
immediately before the problem transactions.

> I would assume yes but I wanted to
> check if there was some type of timestamp that would prevent an
issue
> from occurring?

Take another look at this section -- in particular,
recovery_target_time.

I hope this helps.

-Kevin

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin

Re: [ADMIN] Backup and failover process

Kevin,
I have read this documentation. It still does not answer my basic
question. What happens if you take an SQL snapshot of a database while
creating WAL archives then later restore from that SQL snapshot and
apply those WAL files? Will there be a problem if the transactions
within the newest WAL file after the SQL snapshot was taken cause
problems when they are applied? I would assume yes but I wanted to
check if there was some type of timestamp that would prevent an issue
from occurring?

Thanks,

-----Original Message-----
From: Kevin Grittner [mailto:Kevin.Grittner@wicourts.gov]
Sent: Tuesday, July 15, 2008 12:24 PM
To: Campbell, Lance; pgsql-admin@postgresql.org
Subject: Re: [ADMIN] Backup and failover process

>>> "Campbell, Lance" <lance@illinois.edu> wrote:
> PostgreSQL: 8.2
> I am about to change my backup and failover procedure from dumping a full
> file SQL dump of our data every so many minutes

You're currently running pg_dump every so many minutes?

> to using WAL files.

Be sure you have read (and understand) this section of the docs:

http://www.postgresql.org/docs/8.2/interactive/backup.html


-Kevin

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin

[pgsql-www] pugs.postgresql.org down?

I've tried the pugs.postgresql.org site for about the last hour, and
haven't been able to raise it. Is it down?

--
Regards,
Richard Broersma Jr.

Visit the Los Angeles PostgreSQL Users Group (LAPUG)
http://pugs.postgresql.org/lapug

--
Sent via pgsql-www mailing list (pgsql-www@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-www

Re: [HACKERS] Fwd: Proposal - UUID data type

I'm sorry, but certain answers needed to come from someone with broad
knowledge of databases in general and of their own database in particular.
This was the only way to get them, and I believe the exchange has been
positive on balance, at least for the end users - everyone who has to choose
a database. At the very least it clarifies how each community works and
which claims actually hold up.

On Jul 15, 6:45 pm, a...@oryx.com (Abhijit Menon-Sen) wrote:
> At 2008-07-15 08:34:01 -0700, jonas....@googlemail.com wrote:
>
>
>
> > An answer of Jerry Stuckle:
>
> Please stop cross-posting messages from this list to whatever MySQL list
> you're on. It's a boring, pointless waste of time at best, and at worst
> will get you written off as a troll in both places pretty soon.
>

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] PATCH: CITEXT 2.0 v3

"David E. Wheeler" <david@kineticode.com> writes:
> Well now that was cool to see. I got some failures, of course, but
> nothing stands out to me as an obvious bug. I attach the diffs file
> (with the citext.sql failure removed) for your perusal. What would be
> the best way for me to resolve those permission issues?

Don't run the tests in a read-only directory, perhaps.

> Or do they matter for sanity-checking citext?

Hard to tell --- I'd suggest trying to get a clean run. As for what you
have, the first diff hunk suggests you've got the wrong function
properties for citextsend/citextrecv.

regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] Fwd: Proposal - UUID data type

First - please stop copying this list - this is not the "convince Jerry to include UUID in MySQL" mailing list.

Really - I don't care what he thinks. But, on the subjects themselves and how they apply to *PostgreSQL*:

> Non-standard features just force people to stick with that one product.
> In the long run, the only people who benefit are the product developers.

I chose PostgreSQL over MySQL because it provided numerous features - both standard and non - that I needed on the day I made my decision. I don't care about the long run as a user. One might as well say 90% of the world is wrong for using Microsoft products, because it locks one into Microsoft. One can say this - and people do say this - but none of this changes the fact that 90% of the world is relatively happy with their choice. They voted with their dollars. All decisions should be made on a cost-benefit analysis - they should not be based on some arbitrary creed like "I will not choose a solution that locks me in".

Additionally - in the context of MySQL - the main reason I chose PostgreSQL over MySQL is because it provided things like CREATE VIEW, which MySQL did not at the time. People such as Jerry can pretend that standards guarantee that a feature is in all products, but it seems quite clear that just because something is a standard does NOT mean it is implemented the same everywhere, or even at all. At the time I chose PostgreSQL it was my opinion that PostgreSQL was far more standards-compliant than MySQL was going to be for at least a few years. I am glad I came to the correct conclusion. MySQL implemented ACID as an afterthought. I mean - come on.

>> This is incorrect. UUID at 16 bytes is already "long" in terms of being used as a primary index. In an 8K page, one can only fit 512 UUIDs (forgetting the requirement for headers) - if it was stored as 32 bytes - or 36 bytes, or 40 bytes (with punctuation), it would be less than 256 UUIDs per page. For a join table joining one set of UUIDs to another set, that's < 256 vs < 128. Doubling the size of an index row roughly doubles the time to look up the value.

> Incorrect. Doubling the size of the index has very little effect on how long it takes to look up a value. Intelligent databases use a binary search, so doubling the size only means one additional comparison need be done. And heavily used indexes are generally cached in memory anyway.

Wrong. A binary search that must read double the number of pages, and compare double the number of bytes, will take double the amount of time. There are factors that will reduce this - for example, if you assume most of the pages are already in memory or cache, the time to read a page is effectively zero and only the byte comparisons remain - but at that point the majority of the time is spent comparing bytes, and the claim is still wrong. If we also account for the fact that a native UUID is compared with a possibly inlined memcmp() of a fixed 16 bytes, whereas the string form is variable-sized and much harder to inline (double the number of operations), it's pretty clear that the person who would make such a statement as above is wrong.

As another poster wrote - why not double the size of all other data structures too. It costs nothing, right?

Why does MySQL have 3-byte integer support if they truly believe that saving 1 byte in 4 doesn't result in a savings for keys?
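
If anyone wants to see the size difference concretely, a quick sketch against
an 8.3 server (the literal UUID is arbitrary):

   SELECT pg_column_size('a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11'::uuid) AS uuid_bytes,
          pg_column_size('a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11'::text) AS text_bytes;

uuid_bytes comes back as 16; text_bytes is the 36 characters plus the varlena
header (37 with 8.3's short headers, 40 with the old 4-byte header). That is
the per-value overhead before it gets multiplied across every index entry and
every foreign key.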

Cheers,
mark

--  Mark Mielke <mark@mielke.cc> 

[pdxpug] need volunteers for PDXPUG party and sign pickup at kinkos

Hello!

1) Josh Berkus is working on the design for a sign for the PDXPUG
party! Anyone available to send the order to Kinkos and pick up the
sign this
weds/thurs?

2) Volunteers to hand out drink tickets and "man" the door at the
Gotham Tavern for the party? Probably 6pm-7:30pm, two shifts?

Thanks!!

-selena

--
Selena Deckelmann
United States PostgreSQL Association - http://www.postgresql.us
PDXPUG - http://pugs.postgresql.org/pdx
Me - http://www.chesnok.com/daily

--
Sent via pdxpug mailing list (pdxpug@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pdxpug

[BUGS] BUG #4307: INSERT fails with primary key constraint

The following bug has been logged online:

Bug reference: 4307
Logged by: Oskars Ozols
Email address: oskars.ozols@gmail.com
PostgreSQL version: 8.3
Operating system: SuSE Linux Enterprise Server
Description: INSERT fails with primary key constraint
Details:

I have following table for event log:

CREATE TABLE event_log
(
id bigint NOT NULL DEFAULT
nextval(('public.event_log_id_seq'::text)::regclass),
user_id integer,
date_time timestamp(0) without time zone,
ip_address character varying(15) NOT NULL,
action_type character varying(500) NOT NULL,
severity integer NOT NULL,
parameters text,
web_address character varying(160),
server character(1),
CONSTRAINT event_log_pkey PRIMARY KEY (id)
)
WITH (OIDS=FALSE);

During high load (processor load ~95%) from a mix of SELECT/INSERT requests,
this table starts to give the following errors in the db log:

2008-07-15 12:32:03 EEST ERROR: duplicate key value violates unique
constraint "event_log_pkey"
2008-07-15 12:32:03 EEST STATEMENT: INSERT INTO public.event_log
(date_time, ip_address, action_type, severity, parameters, web_address,
server, user_id, id) VALUES ('2008-07-15 12:28:50.000000',
'123.123.123.123', 'WebServices.SomeService:LogError', 70000, 'error text',
'http://123.123.123.123/WebServices/SomeService.asmx', '4', 75, 156112)

You may notice the difference between the time the message was written to the
db log (12:32:03) and the actual time of the event (12:28:50).

Currently there are ~3 million rows in event_log. Old records are regularly
deleted (autovacuum is on, too). During peak load up to 20 events finish
being written to event_log per second.
The current value of the sequence event_log_id_seq is 8536444.

I have noticed that during high load PostgreSQL starts to use old, freed
sequence values (like 156112 in my example). Sometimes it is fine, but
sometimes it fails with the error above.
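
In case it is useful, a quick check to compare the sequence position against
the table (a diagnostic sketch, not output captured during the failure):

   SELECT last_value FROM public.event_log_id_seq;
   SELECT max(id) FROM public.event_log;

If last_value were ever below max(id), default-assigned ids would eventually
collide with existing rows and give exactly this duplicate-key error; if it
stays above, the conflicting value 156112 must come from outside the default
(as in the INSERT above, which supplies id explicitly).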

--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs

[GENERAL] query optimization

Hello:

 

I need your kind assistance to debug an optimization issue.

The following two select statements in my book are almost identical. One does
a lookup for security type 'CFD' and the other does the same lookup except for
security type 'OP'. When run with 'CFD' the query never returns. When run with
'OP' the results return almost instantly. If I hard code 'CFD' to be 5, which
is the securitytypekey, the query runs instantly.

The code for getsectypekey() is below. Please note SECURITYTYPE contains only
28 rows.

Why would these queries run so differently?

Many thanks for taking the time to look at this issue.

KD

select sec.*
  from security sec, positions_gsco
 where positions_gsco.securitykey is NULL and
       upper(substring(productid,1,3)) = 'CFD' and
       upper(positions_gsco.producttype) = 'OP' and
       getsectypekey('CFD') = sec.securitytypekey and
       positions_gsco.taskrunkey = 359 and
       positions_gsco.issuecurrency = sec.securityissuecurrISO and
       positions_gsco.strikeprice = sec.strikeprice and
       positions_gsco.expirationdate = sec.expirationdate and
       ( positions_gsco.underlyingisin = sec.underlyingisin or
         positions_gsco.underlyingcusip = sec.underlyingcusip or
         positions_gsco.underlyingbloombergticker = sec.underlyingbloomberg );

EXPLAIN

"Nested Loop  (cost=0.00..2829.87 rows=1 width=374)"
"  Join Filter: ((positions_gsco.issuecurrency = (sec.securityissuecurriso)::bpchar) AND (positions_gsco.strikeprice = sec.strikeprice) AND (positions_gsco.expirationdate = sec.expirationdate) AND ((positions_gsco.underlyingisin = (sec.underlyingisin)::bpchar) OR (positions_gsco.underlyingcusip = (sec.underlyingcusip)::bpchar) OR (positions_gsco.underlyingbloombergticker = sec.underlyingbloomberg)))"
"  ->  Seq Scan on positions_gsco  (cost=0.00..2310.40 rows=1 width=72)"
"        Filter: ((securitykey IS NULL) AND (upper("substring"((productid)::text, 1, 3)) = 'CFD'::text) AND (upper((producttype)::text) = 'OP'::text) AND (taskrunkey = 359))"
"  ->  Seq Scan on "security" sec  (cost=0.00..504.52 rows=598 width=374)"
"        Filter: (getsectypekey('CFD'::bpchar) = securitytypekey)"

** ** ** ** ** ** **

select sec.*
  from security sec, positions_gsco
 where positions_gsco.securitykey is NULL and
       upper(substring(productid,1,3)) != 'CFD' and
       upper(positions_gsco.producttype) = 'OP' and
       getsectypekey('OP') = sec.securitytypekey and
       positions_gsco.taskrunkey = 359 and
       positions_gsco.issuecurrency = sec.securityissuecurrISO and
       positions_gsco.putcallind = sec.put_call and
       positions_gsco.strikeprice = sec.strikeprice and
       positions_gsco.expirationdate = sec.expirationdate and
       ( positions_gsco.underlyingisin = sec.underlyingisin or
         positions_gsco.underlyingcusip = sec.underlyingcusip or
         positions_gsco.underlyingbloombergticker = sec.underlyingbloomberg );

EXPLAIN

"Hash Join  (cost=514.99..2861.41 rows=1 width=374)"
"  Hash Cond: ((positions_gsco.issuecurrency = (sec.securityissuecurriso)::bpchar) AND (positions_gsco.putcallind = sec.put_call) AND (positions_gsco.expirationdate = sec.expirationdate))"
"  Join Filter: ((positions_gsco.strikeprice = sec.strikeprice) AND ((positions_gsco.underlyingisin = (sec.underlyingisin)::bpchar) OR (positions_gsco.underlyingcusip = (sec.underlyingcusip)::bpchar) OR (positions_gsco.underlyingbloombergticker = sec.underlyingbloomberg)))"
"  ->  Seq Scan on positions_gsco  (cost=0.00..2310.40 rows=16 width=79)"
"        Filter: ((securitykey IS NULL) AND (upper("substring"((productid)::text, 1, 3)) <> 'CFD'::text) AND (upper((producttype)::text) = 'OP'::text) AND (taskrunkey = 359))"
"  ->  Hash  (cost=504.52..504.52 rows=598 width=374)"
"        ->  Seq Scan on "security" sec  (cost=0.00..504.52 rows=598 width=374)"
"              Filter: (getsectypekey('OP'::bpchar) = securitytypekey)"

** ** ** ** ** ** **

CREATE OR REPLACE FUNCTION getsectypekey(sectype_in bpchar)
  RETURNS integer AS
$BODY$
declare
    sectypekey integer;
begin
    select securitytypekey into sectypekey
      from securitytype
     where position(upper('|' || sectype_in || '|') in upper(securitytypeaka)) > 0;

    -- did not find a type above
    if sectypekey is NULL then
        select securitytypekey into sectypekey
          from securitytype
         where upper(securitytypeshort) = 'UNKNOWN';
    end if;

    return sectypekey;
end
$BODY$
  LANGUAGE 'plpgsql' VOLATILE;

ALTER FUNCTION getsectypekey(sectype_in bpchar) OWNER TO postgres;
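
For completeness, this is the hard-coded variant mentioned above, with
getsectypekey('CFD') replaced by its result, 5 - the version that returns
instantly:

select sec.*
  from security sec, positions_gsco
 where positions_gsco.securitykey is NULL and
       upper(substring(productid,1,3)) = 'CFD' and
       upper(positions_gsco.producttype) = 'OP' and
       5 = sec.securitytypekey and
       positions_gsco.taskrunkey = 359 and
       positions_gsco.issuecurrency = sec.securityissuecurrISO and
       positions_gsco.strikeprice = sec.strikeprice and
       positions_gsco.expirationdate = sec.expirationdate and
       ( positions_gsco.underlyingisin = sec.underlyingisin or
         positions_gsco.underlyingcusip = sec.underlyingcusip or
         positions_gsco.underlyingbloombergticker = sec.underlyingbloomberg );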

Kevin Duffy