Monday, September 22, 2008

Re: [PERFORM] Different execution plan

On Mon, Sep 22, 2008 at 2:40 PM, Sabin Coanda
<sabin.coanda@deuromedia.ro> wrote:
> Hi there,
>
> I am still concerned about this problem, because there is a big difference
> between the two cases, and I don't know how to identify the problem. Can
> anybody help me, please?

Sure. As a first step, can you provide the following:

pg version
os and version
output of explain analyze of each query (one good query, one bad)

In the meantime, make sure you've vacuum analyzed your db.

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

Re: [HACKERS] [patch] fix dblink security hole

Tom Lane wrote:
> Joe Conway <mail@joeconway.com> writes:
>> Tom Lane wrote:
>>> No, the test to see if the server actually *asked* for the password is
>>> the important part at that end.
>
>> Oh, I see that now. So yes, as far as I can tell, password_from_string
>> is not used for anything anymore and should be removed.
>
> Okay. I just committed the patch without that change, but I'll go back
> and add it.


I'm not quite sure I fully understand the consequence of this change.
Does it basically mean that it's not possible to use .pgpass with dblink
for authentication?
The alternative then would be to hardcode the password in your stored
procedures, or store it in a separate table somehow?


--
Tommy Gildseth

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] Initial prefetch performance testing

Gregory Stark wrote:
> Ron Mayer <rm_pg@cheapcomplexdevices.com> writes:
>> I'd rather a parameter that expressed things more in terms of
>> measurable quantities [...]
>
> ...What we're
> dealing with now is an entirely orthogonal property of your system: how many
> concurrent requests can the system handle.

Really? I'd have thought you'd want to give the OS enough guesses
about the future that its elevator algorithms for the drive heads
don't keep seeking back and forth, but rather do as much per sweep
across a device as they can.

> Ironically I'm pretty happy to lose this argument because EDB is interested in
> rolling this into its dynamic tuning module. If there's a consensus -- by my
> count three people have spoken up already which is more than usual -- then
> I'll gladly concede. Anyone object to going back to preread_pages? Or should
> it be prefetch_pages? prefetch_blocks? Something else?

Well - as you pointed out, I'm not on their side of the debate either.
I'm not sure what a relevant measurable parameter would be so I'm not
being too helpful in the conversation either.


Re: [PERFORM] Different execution plan

Hi there,

I am still concerned about this problem, because there is a big difference
between the two cases, and I don't know how to identify the problem. Can
anybody help me, please?

TIA,
Sabin


Re: [JDBC] COPY support in JDBC driver?

Hi all,

Regarding postgresql JDBC COPY support:

http://archives.postgresql.org/pgsql-jdbc/2008-09/msg00053.php

Truviso has been using Kalle's COPY patch for a while, and we'd like to
help fold it into the core jdbc driver. We've fixed a couple of encoding
issues, but otherwise, we've been using it without any problems. I'd
like to merge against the current trunk and re-submit the patch in the
next couple of weeks.

Thanks,
--
Maciek Sakrejda
Truviso, Inc.
http://www.truviso.com

--
Sent via pgsql-jdbc mailing list (pgsql-jdbc@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-jdbc

Re: [HACKERS] Initial prefetch performance testing

Ron Mayer <rm_pg@cheapcomplexdevices.com> writes:

> For example, on our sites hosted with Amazon's compute cloud (a great
> place to host web sites), I know nothing about spindles, but know
> about Amazon Elastic Block Stores[2] and Instance Stores[1]. I
> have some specs and am able to run benchmarks on them, but couldn't
> guess how many spindles correspond to my X% of the N-disk device.

Well I don't see how you're going to guess how much prefetching is optimal for
those environments either...

> For another example, some of our salesguys with SSD drives
> have 0 spindles on their demo machines.

Sounds to me like you're finding it pretty intuitive. Actually, you would want
"1" because an SSD can handle one request at a time. And if you have a
multipath array, I imagine you would want to think of each interface as a
spindle, because that's the bottleneck and you'll want to keep all the
interfaces busy.

> I'd rather a parameter that expressed things more in terms of
> measurable quantities -- perhaps seeks/second? perhaps
> random-access/sequential-access times?

Well, that's precisely what I'm saying. Simon et al. want a parameter to control
how much prefetching to do. That's *not* a measurable quantity. I'm suggesting
effective_spindle_count, which *is* a measurable quantity, even if it might be a
bit harder to measure in some environments than others.

The two other quantities you describe are both currently represented by our
random_page_cost (or random_page_cost/sequential_page_cost). What we're
dealing with now is an entirely orthogonal property of your system: how many
concurrent requests can the system handle.

If you have ten spindles then you really want to send enough requests to
ensure there are ten concurrent requests being processed on ten different
drives (assuming you want each scan to make maximum use of the resources,
which is primarily true in DSS but might not be true in OLTP). That's a lot
more than ten requests, though, because if you sent only ten requests many of
them would end up on the same devices.

In theory my logic led me to think that for ten drives it would be about 30.
Experiments seem to show it's more like 300-400. That discrepancy might be a
reason to put this debate aside for now anyway and expose the internal
implementation until we understand better what's going on there.

Ironically I'm pretty happy to lose this argument because EDB is interested in
rolling this into its dynamic tuning module. If there's a consensus -- by my
count three people have spoken up already which is more than usual -- then
I'll gladly concede. Anyone object to going back to preread_pages? Or should
it be prefetch_pages? prefetch_blocks? Something else?

--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's Slony Replication support!


[HACKERS] Interval literal rounding bug(?) and patch.

*** a/src/backend/utils/adt/datetime.c
--- b/src/backend/utils/adt/datetime.c
***************
*** 2888,2894 **** DecodeInterval(char **field, int *ftype, int nf, int range,
{
case DTK_MICROSEC:
#ifdef HAVE_INT64_TIMESTAMP
! *fsec += val + fval;
#else
*fsec += (val + fval) * 1e-6;
#endif
--- 2888,2894 ----
{
case DTK_MICROSEC:
#ifdef HAVE_INT64_TIMESTAMP
! *fsec += rint(val + fval);
#else
*fsec += (val + fval) * 1e-6;
#endif
***************
*** 2897,2903 **** DecodeInterval(char **field, int *ftype, int nf, int range,

case DTK_MILLISEC:
#ifdef HAVE_INT64_TIMESTAMP
! *fsec += (val + fval) * 1000;
#else
*fsec += (val + fval) * 1e-3;
#endif
--- 2897,2903 ----

case DTK_MILLISEC:
#ifdef HAVE_INT64_TIMESTAMP
! *fsec += rint((val + fval) * 1000);
#else
*fsec += (val + fval) * 1e-3;
#endif
***************
*** 2907,2913 **** DecodeInterval(char **field, int *ftype, int nf, int range,
case DTK_SECOND:
tm->tm_sec += val;
#ifdef HAVE_INT64_TIMESTAMP
! *fsec += fval * 1000000;
#else
*fsec += fval;
#endif
--- 2907,2913 ----
case DTK_SECOND:
tm->tm_sec += val;
#ifdef HAVE_INT64_TIMESTAMP
! *fsec += rint(fval * 1000000);
#else
*fsec += fval;
#endif
***************
*** 2932,2938 **** DecodeInterval(char **field, int *ftype, int nf, int range,
sec = fval;
tm->tm_sec += sec;
#ifdef HAVE_INT64_TIMESTAMP
! *fsec += (fval - sec) * 1000000;
#else
*fsec += fval - sec;
#endif
--- 2932,2938 ----
sec = fval;
tm->tm_sec += sec;
#ifdef HAVE_INT64_TIMESTAMP
! *fsec += rint((fval - sec) * 1000000);
#else
*fsec += fval - sec;
#endif
***************
*** 2950,2956 **** DecodeInterval(char **field, int *ftype, int nf, int range,
sec = fval;
tm->tm_sec += sec;
#ifdef HAVE_INT64_TIMESTAMP
! *fsec += (fval - sec) * 1000000;
#else
*fsec += fval - sec;
#endif
--- 2950,2956 ----
sec = fval;
tm->tm_sec += sec;
#ifdef HAVE_INT64_TIMESTAMP
! *fsec += rint((fval - sec) * 1000000);
#else
*fsec += fval - sec;
#endif
***************
*** 2969,2975 **** DecodeInterval(char **field, int *ftype, int nf, int range,
sec = fval;
tm->tm_sec += sec;
#ifdef HAVE_INT64_TIMESTAMP
! *fsec += (fval - sec) * 1000000;
#else
*fsec += fval - sec;
#endif
--- 2969,2975 ----
sec = fval;
tm->tm_sec += sec;
#ifdef HAVE_INT64_TIMESTAMP
! *fsec += rint((fval - sec) * 1000000);
#else
*fsec += fval - sec;
#endif
***************
*** 2995,3001 **** DecodeInterval(char **field, int *ftype, int nf, int range,
sec = fval;
tm->tm_sec += sec;
#ifdef HAVE_INT64_TIMESTAMP
! *fsec += (fval - sec) * 1000000;
#else
*fsec += fval - sec;
#endif
--- 2995,3001 ----
sec = fval;
tm->tm_sec += sec;
#ifdef HAVE_INT64_TIMESTAMP
! *fsec += rint((fval - sec) * 1000000);
#else
*fsec += fval - sec;
#endif
***************
*** 3022,3028 **** DecodeInterval(char **field, int *ftype, int nf, int range,
sec = fval;
tm->tm_sec += sec;
#ifdef HAVE_INT64_TIMESTAMP
! *fsec += (fval - sec) * 1000000;
#else
*fsec += fval - sec;
#endif
--- 3022,3028 ----
sec = fval;
tm->tm_sec += sec;
#ifdef HAVE_INT64_TIMESTAMP
! *fsec += rint((fval - sec) * 1000000);
#else
*fsec += fval - sec;
#endif

I think it's a bug that these 3 different ways of writing 0.7 seconds
produce different results from each other on HEAD.

head=# select interval '0:0:0.7', interval '@ 0.70 secs', interval '0.7 seconds';
  interval   |    interval     |    interval
-------------+-----------------+-----------------
 00:00:00.70 | 00:00:00.699999 | 00:00:00.699999
(1 row)

The attached patch makes all of those output "00:00:00.70".

Postgres 8.3 tended to output "00:00:00.70" like this patch does, I believe
because it didn't default to HAVE_INT64_TIMESTAMP the way HEAD does. The
patch passes the existing regression tests.

Does this seem reasonable?

Ron

Re: [JDBC] PreparedStatement.toString() creates valid SQL; was: Q: use setObject also for int or string

The attached file contains all the changed Java sources needed to make your
test case work. I added a toSQLString(int) method to the ParameterList interface.

I did not implement InputStream support because:
1) I do not know the encoding of the data
2) it could be an InputStream that cannot be reset

Michael

> See the attached test case.

--
WINCOR NIXDORF International GmbH
Sitz der Gesellschaft: Paderborn
Registergericht Paderborn HRB 3507
Geschäftsführer: Eckard Heidloff (Vorsitzender), Stefan Auerbach, Dr. Jürgen Wunram
Vorsitzender des Aufsichtsrats: Karl-Heinz Stiller
Steuernummer: 339/5884/0020 - Ust-ID Nr.: DE812927716 - WEEE-Reg.-Nr. DE44477193


Re: [HACKERS] parallel pg_restore

Simon Riggs wrote:
>> I will not argue vehemently here but I will say that "jobs" doesn't
>> seem correct. The term "workers" seems more appropriate.
>>
>
> Agreed, but most utilities have "j" free but not w, p, t or other
> letters that might be synonyms.
>
> j is at least used for exactly this purpose in other tools.
>
>

There are in fact very few letters available, as we've been fairly
profligate in our use of option letters in the pg_dump suite.

j and m happen to be two of those that are available.

I honestly don't have a terribly strong opinion about what it should be
called. I can live with jobs or multi-threads.

cheers

andrew


Re: [HACKERS] Initial prefetch performance testing

On Mon, 22 Sep 2008, Gregory Stark wrote:

> Hm, I'm disappointed with the 48-drive array here. I wonder why it maxed out
> at only 10x the bandwidth of one drive. I would expect more like 24x or more.

The ZFS RAID-Z implementation doesn't really scale that linearly. It's
rather hard to get the full bandwidth out of an X4500 with any single
process, and I haven't done any filesystem tuning to improve
things--everything is at the defaults.

> I'm quite surprised Solaris doesn't support posix_fadvise -- perhaps
> it's in some other version of Solaris?

Both of the systems I used were standard Solaris 10 boxes, and I'm not aware
of any changes in this area in the later OpenSolaris releases (which is
where I'd expect something like this to change first). The test program I
tried failed its #ifdef POSIX_FADV_WILLNEED check, and the message I saw
from you at
http://archives.postgresql.org/message-id/877imua265.fsf@oxford.xeocode.com
suggested you didn't find any fadvise either, so I didn't look much
further.

The above is a cue for someone from Sun to chime in on this subject.

> I have an updated patch I'll be sending along shortly. You might want to test
> with that?

Obviously I've got everything set up to test right now; I'm currently
analyzing your earlier patch and the sequential scan fork that derived
from it. If you've got a later version of the bitmap heap scan one as
well, I'll replace the one I had been planning to test (your
bitmap-preread-v9) with that one when it's available.

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD


Re: [HACKERS] FSM patch - performance test

Tom Lane wrote:
> What this means is that if we start with "next" pointing at a page
> without enough space (quite likely considering that we now index all
> pages not only those with free space), then it is highly possible that
> the search will end on a page *before* where next was. The most trivial
> case is that we have an even-numbered page with a lot of free space and
> its odd-numbered successor has none --- in this case, far from spreading
> out the backends, all comers will be handed back that same page! (Until
> someone reports that it's full.) In general it seems that this behavior
> will tend to concentrate the returned pages in a small area rather than
> allowing them to range over the whole FSM page as was intended.

Good point.

> So the bottom line is that the "next" addition doesn't actually work and
> needs to be rethought. It might be possible to salvage it by paying
> attention to "next" during the descent phase and preferentially trying
> to descend to the right of "next"; but I'm not quite sure how to make
> that work efficiently, and even less sure how to wrap around cleanly
> when the starting value of "next" is near the last slot on the page.

Yeah, I think it can be salvaged like that. see the patch I just posted
on a separate thread.

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com


Re: [ADMIN] question on pg_ctl

Many thanks, Scott!

Tena


-----Original Message-----
From: Scott Marlowe [mailto:scott.marlowe@gmail.com]
Sent: Mon 9/22/2008 11:21 AM
To: Tena Sakai
Cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] question on pg_ctl

On Mon, Sep 22, 2008 at 11:49 AM, Tena Sakai <tsakai@gallo.ucsf.edu> wrote:
> Hi Everybody,
>
> I want to change a line in postgresql.conf (shared_buffers)
> and then issue a command:
>   pg_ctl reload
>
> This forces the postmaster process to reread the configuration
> file with new shared_buffer value.  And the postgres process
> that comes to life from this point on will execute with the
> newly specified value.  Correct?

To quote the postgresql.conf file:

shared_buffers = 150MB                  # min 128kB or max_connections*16kB
                                        # (change requires restart)

So it requires a complete restart.  Note that for the things you can
reload on, there's no real effect on running queries or connections
(at least no detrimental ones I can think of.)

Re: [HACKERS] pg_regress inputdir

Peter Eisentraut wrote:
> Tom Lane wrote:
>> But I think Alvaro is worried about something
>> at a higher level: the regression test process as a whole has some
>> directory layout assumptions built into it, particularly in regards
>> to where to find .so's.
>
> The only information about the location of the .so's is in the test
> files themselves, which seems reasonable, because they are created and
> installed at the same time as the .so's that they are presumably
> supposed to test. So I see no problem here.

Here is a more involved patch that fixes all these issues. The major
simplification is that the input files are looked for in both the build
tree and the source tree (like a vpath search), which allowed me to
remove a lot of redundant makefile code. I could also remove the
--srcdir option, but added --dlpath to address the above-mentioned issue
and changed some option defaults. Now you can run pg_regress both inside
and outside of the build tree. It isn't quite ready for the general public,
but a packager who wants to adopt this can use it. Currently, you need
to create the directories sql, expected, and testtablespace yourself
when running outside the build tree. We can attempt to sort that out
later, but SELinux might make it difficult.

[pgsql-es-ayuda] New minor versions released

Greetings,

The PGDG has today (September 22, 2008) released new minor versions
of PostgreSQL.

These are security updates that fix a number of flaws reported over
the last few months. Some of these flaws carry a risk of data loss,
so updating at your next scheduled maintenance window is recommended.

The time zone data has also been updated for changes in local law,
affecting the following time zones: Argentina, Bahamas, Brazil,
Morocco, Mauritius, Palestine, Pakistan and Paraguay.

The released versions are 8.3.4, 8.2.10, 8.1.14, 8.0.18 and 7.4.22.
The exception is Windows where, as previously announced, versions
8.0 and 8.1 are no longer supported.

--
Regards,
Jaime Casanova
PostgreSQL support and training
Systems consulting and development
Guayaquil - Ecuador
Cel. +59387171157
--
TIP 8: explain analyze is your friend

Re: [ADMIN] question on pg_ctl

On Mon, Sep 22, 2008 at 11:49 AM, Tena Sakai <tsakai@gallo.ucsf.edu> wrote:
> Hi Everybody,
>
> I want to change a line in postgresql.conf (shared_buffers)
> and then issue a command:
> pg_ctl reload
>
> This forces the postmaster process to reread the configuration
> file with new shared_buffer value. And the postgres process
> that comes to life from this point on will execute with the
> newly specified value. Correct?

To quote the postgresql.conf file:

shared_buffers = 150MB                  # min 128kB or max_connections*16kB
                                        # (change requires restart)

So it requires a complete restart. Note that for the things you can
reload on, there's no real effect on running queries or connections
(at least no detrimental ones I can think of.)

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin

Re: [HACKERS] get_relation_stats_hook()

On Mon, 2008-09-22 at 18:41 +0100, Gregory Stark wrote:

> The easiest way to fix this seems like also the best way, instead of storing a
> boolean store the pointer to the release function.

OK, I like that better anyhow.

Hadn't thought about turning a plugin off, but I can see the requirement
now that you raise it.

--
Simon Riggs www.2ndQuadrant.com
PostgreSQL Training, Services and Support



Re: [HACKERS] Initial prefetch performance testing

Gregory Stark wrote:
> Simon Riggs <simon@2ndQuadrant.com> writes:
>> I'm not in favour of introducing the concept of spindles....
>
> In principle I quite strongly disagree with this....
> Number of blocks to prefetch is an internal implementation detail that the DBA
> has absolutely no way to know what the correct value is.

Even more often on the systems I see these days, "spindles" is an
implementation detail whose correct value the DBA has no way
to know.

For example, on our sites hosted with Amazon's compute cloud (a great
place to host web sites), I know nothing about spindles, but know
about Amazon Elastic Block Stores[2] and Instance Stores[1]. I
have some specs and am able to run benchmarks on them, but couldn't
guess how many spindles correspond to my X% of the N-disk device.
For another example, some of our salesguys with SSD drives
have 0 spindles on their demo machines.

I'd rather a parameter that expressed things more in terms of
measurable quantities -- perhaps seeks/second? perhaps
random-access/sequential-access times?

[1] http://www.amazon.com/gp/browse.html?node=201590011
[2] http://www.amazon.com/b/ref=sc_fe_c_0_201590011_1?ie=UTF8&node=689343011&no=201590011&me=A36L942TSJ2AJA



Re: [HACKERS] Initial prefetch performance testing

On Mon, 2008-09-22 at 13:06 -0400, Greg Smith wrote:

> > prefetch_... is a much better name since its an existing industry term.
> > I'm not in favour of introducing the concept of spindles, since I can
> > almost hear the questions about ramdisks and memory-based storage.
>
> It's possible to make a case for exposing the internal number that's
> getting varied here, naming the parameter something like prefetch_depth,
> and letting people set that to whatever they want. Based on the current
> data I might suggest a default of 256, using 0 to turn the feature off
> altogether, and a maximum of at least 8192 and possibly more.
>
> In practice I expect there to only be a couple of popular values and the
> idea of fine-tuning is a bit questionable. I think that's what Greg Stark
> was driving at with how the value was re-spun. Instead of using
> effective_spindle_count, you could just as easily make a case for an enum
> like [off,low,medium,high] mapping to [0,16,256,8192]. From what I've
> seen so far, that would reduce tweaking time in the field considerably
> while not really changing the range of available behavior very much.

Tuning Postgres I/O already involves quite a few parameters called
buffer size, segment width, stripe size, etc. I've never heard a disk
manufacturer say this is wrong and we should just have
"spindle equivalents". I don't think we should dress this up too much,
that's all. We aren't going to make anybody's life any easier. But we
will probably generate lots of annoying phone calls to disk
manufacturers asking "so how many spindles is your subsystem worth in
Postgres terms?", to which they will shrug and say "no idea".

Is the behaviour of this sufficiently linear to be able to say that 3
spindles = 3 effective_spindles, 6 = 6, etc.? I would guess it won't be,
and you're left with a name more misleading than useful.

--
Simon Riggs www.2ndQuadrant.com
PostgreSQL Training, Services and Support



Re: [HACKERS] parallel pg_restore

On Mon, 2008-09-22 at 09:30 -0700, Joshua Drake wrote:
> On Mon, 22 Sep 2008 17:24:28 +0100
> Simon Riggs <simon@2ndQuadrant.com> wrote:
>
> > > More importantly, I'm not convinced it's a good idea. It seems more
> > > like a footgun that will potentially try to launch thousands of
> > > simultaneous restore connections. I should have thought that
> > > optimal performance would be reached at some small multiple (say
> > > maybe 2?) of the number of CPUs on the server. You could achieve
> > > unlimited parallelism by saying something like --jobs=99999, but
> > > I'd rather that were done very explicitly instead of as the default
> > > value of the parameter.
> >
> > OK, sounds best.
> >
>
> I will not argue vehemently here but I will say that "jobs" doesn't
> seem correct. The term "workers" seems more appropriate.

Agreed, but most utilities have "j" free but not w, p, t or other
letters that might be synonyms.

j is at least used for exactly this purpose in other tools.

--
Simon Riggs www.2ndQuadrant.com
PostgreSQL Training, Services and Support



Re: [HACKERS] [PATCH] allow has_table_privilege(..., 'usage') on sequences

On Sun, Sep 7, 2008 at 10:55 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Abhijit Menon-Sen <ams@oryx.com> writes:
>> (I can't help but think that the USAGE privilege is a bit unfortunate.
>> If granting SELECT rights allowed currval(), INSERT allowed nextval(),
>> and UPDATE allowed nextval() and setval(), then has_table_privilege()
>> would have been sufficient and there would be no need to invent a new
>> set of functions just to check USAGE.
>
> That train left the station already, and anyway you are failing to
> consider "SELECT * FROM sequence", which definitely needs to have
> different privileges from nextval()/currval().
>

Can we say there is consensus on creating a new has_sequence_privilege()?
Abhijit, will you write it? If not, I can give it a try...

--
regards,
Jaime Casanova
PostgreSQL support and training
Systems consulting and development
Guayaquil - Ecuador
Cel. +59387171157


[ADMIN] question on pg_ctl

Hi Everybody,

I want to change a line in postgresql.conf (shared_buffers)
and then issue a command:
  pg_ctl reload

This forces the postmaster process to reread the configuration
file with new shared_buffer value.  And the postgres process
that comes to life from this point on will execute with the
newly specified value.  Correct?

I have jobs that have been running for more than a week.  Can somebody
please tell me what happens to these processes?  I.e., (a) will they
keep running in the current mode?  (b) will the postmaster assign
more memory while they are running?

Thank you.

Regards,

Tena Sakai
tsakai@gallo.ucsf.edu

[HACKERS] FSM, now without WAL-logging

Attached is a revamped version of the FSM rewrite. WAL-logging is now
gone. Any inconsistencies between levels of the FSM are fixed during
vacuum, and by searchers when they run into a dead end because of a
discrepancy. Corruption within FSM pages is likewise fixed by vacuum and
searchers.

The FSM in a warm standby gets updated by replay of heap CLEAN WAL
records. That means that the FSM will reflect the situation after the
last vacuum, which is better than what we have now, but not quite
up-to-date. I'm worried that this might not be enough, because on a
large table, a lot of pages could've been filled since last vacuum, and
the first guy who tries to insert to the table will have to grind
through all those pages, finding that they're all full now. It would be
simple to update the FSM at every heap insert and update record, but
that might then be an unacceptable amount of overhead during recovery. Also,
the index FSM is not yet updated during recovery.

The FSM is now extended lazily, so there's no explicit
FreeSpaceMapExtendRel function anymore. To avoid calling smgrnblocks all
the time, I added a field to RelationData to cache the size of the FSM fork.

The fsm_search_avail() function now emulates the old FSM behavior more
closely. It should now always return the next page to the right of the
next-pointer (wrapping around if necessary).

I believe I've addressed all the other Tom's comments on the code as
well. Also, the "next-pointer" is now reset in vacuum, to encourage the
use of low-numbered pages, per Bruce's comment.

There's one known bug left. If we crash after truncating a relation, and
the truncation of the FSM doesn't hit the disk but the truncation of the
main fork does, we can end up with an FSM that shows that there's free
space on pages that don't exist anymore. The straightforward way to fix
that is to put back the WAL-logging of FSM truncations, but given that I
just ripped out all the rest of the WAL-logging, does anyone have other
ideas?

TODO:
- Performance testing, again.
- Add test case to regression suite that exercises index FSM
- Fix the crash+truncate bug
- Update index FSM during recovery
- Update FSM more frequently during recovery
- Documentation

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com

[pgsql-jobs] Part-time work

Hi List;

See the attached job listing; if interested, respond to me (kevin@kevinkempterllc.com
) and I'll pass it along.

Thx

/Kevin


-------------------------------------------------------------
We have several postgres projects in the works and
our current staff cannot keep up.

We'd like to find a couple of experienced postgres
resources for development, scripting, general DBA tasks, etc

At this point we're thinking this will be a series of
side gigs to build defined, independent components.

We're thinking it would be keen to do some fixed-bid
or piece work to keep our costs under control; however,
we're open to whatever works for you.

If interested send us the following:
- current resume/CV
- typical rate
- part time availability
-------------------------------------------------------------

--
Sent via pgsql-jobs mailing list (pgsql-jobs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-jobs

Re: [HACKERS] get_relation_stats_hook()

Hm, I assume we want to be able to turn on and off plugins in a running
session? I think the "free_using_plugin" flag:


! if (get_relation_stats_hook)
! vardata->statsTuple = (*get_relation_stats_hook)
! (rte->relid,
! var->varattno);
!
! if (vardata->statsTuple)
! vardata->free_using_plugin = true;
! else
! vardata->statsTuple = SearchSysCache(STATRELATT,

is insufficient to handle this. vardata->free_using_plugin could be true but
by the time the variable is released the plugin pointer could have been
cleared. Or worse, set to a different plugin.

The easiest way to fix this seems like also the best way, instead of storing a
boolean store the pointer to the release function.

--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's 24x7 Postgres support!

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [ODBC] compiling odbc

That's the thing: I have unixODBC installed, and it is located at /opt/local/bin. That is why I need help figuring out how to get the terminal to recognize where both pg_config AND odbcinst are during the ./configure process.
PATH=$PATH:/usr/local/pgsql/bin:/opt/local/bin should work, but it's not recognizing the second part... the odbcinst part.


----- Original Message ----
From: Zoltan Boszormenyi <zb@cybertec.at>
To: Brent Austin <brent1a@yahoo.com>
Cc: Jeremy Faith <jfaith@cemsys.com>; pgsql-odbc@postgresql.org
Sent: Monday, September 22, 2008 5:36:19 AM
Subject: Re: [ODBC] compiling odbc

Brent Austin wrote:
> I still get this:   
> client-66-1xx-1x-x14:~ brent1a$
> PATH=$PATH:/usr/local/pgsql/bin:/opt/local/bin
> client-66-1xx-1x-x14:~ brent1a$ cd /psqlodbc-08.03.0200
> client-66-1xx-1x-x14:psqlodbc-08.03.0200 brent1a$ sudo ./configure
> checking for a BSD-compatible install... /usr/bin/install -c
> checking whether build environment is sane... yes
> checking for gawk... no
> checking for mawk... no
> checking for nawk... no
> checking for awk... awk
> checking whether make sets $(MAKE)... yes
> checking whether to enable maintainer-specific portions of Makefiles... no
> checking for pg_config... /usr/local/pgsql/bin/pg_config
> checking for gcc... gcc
> ...........
> checking how to hardcode library paths into programs... immediate
> checking for library containing SQLGetPrivateProfileString... no
> *configure: error: unixODBC library "odbcinst" not found*
> client-66-1xx-1x-x14:psqlodbc-08.03.0200 brent1a$

You need unixODBC or iODBC installed. unixODBC is used by default,
you can change the expected ODBC manager with option --with-iodbc.

> ----- Original Message ----
> From: Jeremy Faith <jfaith@cemsys.com>
> To: pgsql-odbc@postgresql.org
> Sent: Monday, September 22, 2008 4:06:28 AM
> Subject: Re: [ODBC] compiling odbc
>
> Hi,
>
> I think what Tom is saying is to try:-
>  PATH=$PATH:/usr/local/pgsql:/opt/local/bin
>
> Regards,
> Jeremy Faith
>
> Brent Austin wrote:
> > Yes, two PATH items, I guess.
> > "export PATH=$PATH:/usr/local/pgsql"
> >
> > takes care of:
> >
> > "checking for pg_config... no
> > configure: error: pg_config not found (set PG_CONFIG environment
> > variable)"
> >
> > How can I take care of the following as well with out messing up the
> > above?
> >
> > "configure: error: unixODBC library "odbcinst" not found"
> >
> > (I found "odbcinst" is in my /opt/local/bin)
> >
> > ----- Original Message ----
> > From: Tom Lane <tgl@sss.pgh.pa.us <mailto:tgl@sss.pgh.pa.us>>
> > To: Brent Austin <brent1a@yahoo.com <mailto:brent1a@yahoo.com>>
> > Cc: Jeremy Faith <jfaith@cemsys.com <mailto:jfaith@cemsys.com>>;
> pgsql-odbc@postgresql.org <mailto:pgsql-odbc@postgresql.org>
> > Sent: Friday, September 19, 2008 3:11:36 PM
> > Subject: Re: [ODBC] compiling odbc
> >
> > Brent Austin <brent1a@yahoo.com <mailto:brent1a@yahoo.com>
> <mailto:brent1a@yahoo.com <mailto:brent1a@yahoo.com>>> writes:
> > > How do I tell configure that pgsql is in /usr/local/pgsql and ALSO
> > tell it that obcinst is in /opt/local/bin ???
> >
> > Two PATH items?
> >
> >            regards, tom lane
> >
>
>
> --
> Sent via pgsql-odbc mailing list (pgsql-odbc@postgresql.org
> <mailto:pgsql-odbc@postgresql.org>)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-odbc
>


--
----------------------------------
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
http://www.postgresql.at/


Re: [lapug] Requesting LAPUG Speakers for Next Year

On Mon, Sep 22, 2008 at 10:11 AM, Charles Wyble <charles@thewybles.com> wrote:
> I would be willing to present in February on using geospatial data in
> Postgresql (including data loading and client tools) if anyone is interested
> in that?

This sounds good. I will pencil you in for February! Thanks!

Would anyone else like to volunteer?


--
Regards,
Richard Broersma Jr.

Visit the Los Angeles PostgreSQL Users Group (LAPUG)
http://pugs.postgresql.org/lapug

--
Sent via lapug mailing list (lapug@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/lapug

Re: [GENERAL] match an IP address

On Mon, Sep 22, 2008 at 11:16 AM, Joao Ferreira gmail
<joao.miguel.c.ferreira@gmail.com> wrote:
> well...
>
> my IP addresses are stored in a TEXT type field. that field can actually
> contain usernames like 'joao' or 'scott' and it can contain IP
> addresses....

Then cast them to inet and use the method I showed above:

postgres=# create table b as select a::text from inettest ;
SELECT
postgres=# select * from b;
a
----------------
192.168.0.1/32
192.168.1.1/32
10.0.0.1/32
(3 rows)

postgres=# select a from b where '192.168.0.1/0' >> a::inet;
a
----------------
192.168.0.1/32
192.168.1.1/32
10.0.0.1/32
(3 rows)

postgres=# select a from b where '192.168.0.1/24' >> a::inet;
a
----------------
192.168.0.1/32
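
Note that the cast is only safe here because every value in b happens to be an address. On the original mixed column (usernames plus addresses), a bare ::inet cast will error out on the first username. One way to sketch around that is to guard the cast with a CASE, so the shape test is guaranteed to run first:

```sql
-- CASE guarantees the regex test runs before the cast, so values
-- like 'joao' never reach ::inet; a plain AND would not guarantee order
select a
from b
where case
        when a ~ '^([0-9]{1,3}[.]){3}[0-9]{1,3}(/[0-9]{1,2})?$'
          then a::inet <<= '192.168.0.0/24'
        else false
      end;
```

The bracketed [.] avoids any dependence on backslash escaping rules in string literals.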

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [GENERAL] match an IP address

well...

my IP addresses are stored in a TEXT type field. that field can actually
contain usernames like 'joao' or 'scott' and it can contain IP
addresses....

:(

joao


On Mon, 2008-09-22 at 11:13 -0600, Scott Marlowe wrote:
> On Mon, Sep 22, 2008 at 10:59 AM, Joao Ferreira gmail
> <joao.miguel.c.ferreira@gmail.com> wrote:
> > hello all,
> >
> > I'm unable to build a LIKE or SIMILAR TO expression for matching an IP
> > address
> >
> > 192.168.90.3
> > 10.3.2.1
>
> As already mentioned inet / cidr types should work. Example:
>
> postgres=# create table inettest (a inet);
> CREATE TABLE
> postgres=# insert into inettest values
> ('192.168.0.1'),('192.168.1.1'),('10.0.0.1');
> INSERT 0 3
> postgres=# select a from inettest where '192.168.0.1/16' >> a;
> a
> -------------
> 192.168.0.1
> 192.168.1.1
> (2 rows)
> postgres=# select a from inettest where '192.168.0.1/24' >> a;
> a
> -------------
> 192.168.0.1
> (1 row)
> postgres=# select a from inettest where '192.168.0.1/0' >> a;
> a
> -------------
> 192.168.0.1
> 192.168.1.1
> 10.0.0.1
> (3 rows)
>


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

[ANNOUNCE] MyJSQLView 2.91 Released

Version 2.91 of the MyJSQLView application has been released. It is
recommended that any users in the PostgreSQL community who use the
application update to this newer version. The release fixes some minor
bugs and performance bottlenecks for the PostgreSQL database.
Full documentation and download information can be found at:

http://myjsqlview.sourceforge.net/

--
Dana M. Proctor
MyJSQLView Project Admin.
http://dandymadeproductions.com/

---------------------------(end of broadcast)---------------------------
To unsubscribe from this list, send an email to:

pgsql-announce-unsubscribe@postgresql.org

Re: [GENERAL] match an IP address

On Mon, Sep 22, 2008 at 10:59 AM, Joao Ferreira gmail
<joao.miguel.c.ferreira@gmail.com> wrote:
> hello all,
>
> I'm unable to build a LIKE or SIMILAR TO expression for matching an IP
> address
>
> 192.168.90.3
> 10.3.2.1

As already mentioned inet / cidr types should work. Example:

postgres=# create table inettest (a inet);
CREATE TABLE
postgres=# insert into inettest values
('192.168.0.1'),('192.168.1.1'),('10.0.0.1');
INSERT 0 3
postgres=# select a from inettest where '192.168.0.1/16' >> a;
a
-------------
192.168.0.1
192.168.1.1
(2 rows)
postgres=# select a from inettest where '192.168.0.1/24' >> a;
a
-------------
192.168.0.1
(1 row)
postgres=# select a from inettest where '192.168.0.1/0' >> a;
a
-------------
192.168.0.1
192.168.1.1
10.0.0.1
(3 rows)

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [HACKERS] Where to Host Project

On Sep 22, 2008, at 10:08, Stefan Kaltenbrunner wrote:

>>> The machine is ready to go and as far as I know even has a jail.
>>> Stefan
>>> would know more.
>> OK, cool. Stefan; what's your take on where we're at?
>
> yeah there is a box and a jail I set up a while ago but for various
> reasons the actual migration (planning and testing) never happened.
> I'm still prepared to handle the required sysadmin level work but I
> don't have time for anything more fancy right now.

If this upgrade happens, and I can use SVN with pgFoundry, that's
exactly where I'll stay. That would make me happy.

Whether or not it was a good idea to get into the hosting business,
since we do, as a community, have a hosting platform, it behooves us
to try to keep it up-to-date. I'd be willing to give a bit of time for
this.

But I do agree with Robert that we *should* get into the indexing
business. This is CPAN's secret: It doesn't host anything, but
provides a distributed index of Perl modules. What would be useful is
to make it easy for people to add their stuff to the index; and if
that could be automated with pgFoundry, so much the better for those
who host there.

My $0.02. Thanks for the discussion, folks.

Best,

David

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [lapug] Requesting LAPUG Speakers for Next Year

Richard Broersma wrote:
> We have open slots for speakers starting January of 2009.

Fantastic and scary at the same time! :) I know, from having been
actively involved in the leadership of numerous user groups in Southern
California, that booking speakers is both easy and difficult. The easy
part is finding people for a topic. The hard part is figuring out which
topics people want speakers on. :)
> I would
> like to open an invitation to anyone that would like to present
> something PG or DB related to LAPUG.

I would be willing to present in February on using geospatial data in
Postgresql (including data loading and client tools) if anyone is
interested in that?
> Perhaps we could even try our
> hand at having lightning talks where several volunteers give a 5 minute
> discussion on a particular subject.
>

Yes. That would be incredibly cool! More and more groups I am involved
with have been taking this approach lately with great success.
> I look forward to hearing from you!
>
> Also, I would like to start a dialog regarding meeting nights. Would
> anyone like to propose having meetings on nights other than Fridays?
>
>

I am ok with Friday nights but not opposed to other nights. Just be
aware of potentially conflicting group meetings. See

http://la.garysguide.org/
http://www.lageeks.org/

for details on other events.

I think Wednesday is pretty safe with no conflicts.


--
Charles Wyble (818) 280 - 7059
http://charlesnw.blogspot.com
CTO Known Element Enterprises / SoCal WiFI project


--
Sent via lapug mailing list (lapug@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/lapug

Re: [pgus-general] On OpenID

> FYI I have disabled OpenID for now as it didn't work correctly. As we
> move forward if someone is willing to help us get it to work I am all
> for it. I am however, not that person.

I'll take a swing at it. Or perhaps I should make that part of my
platform? :)

--
Greg Sabino Mullane greg@endpoint.com
End Point Corporation

Re: [HACKERS] Where to Host Project

Dave Page wrote:
> On Sat, Sep 20, 2008 at 8:37 AM, Joshua D. Drake <jd@commandprompt.com> wrote:
>> Dave Page wrote:
>>
>>> Well that's not strictly true - I persuaded one of the GForge
>>> developers to work on the upgrade. As far as I'm aware, we're still
>>> waiting for the hardware/OS platform to be sorted out after some
>>> initial problems. I suspect JD will tell me something different though
>>> - that being the case, perhaps we can work out the issues and get on
>>> with the upgrade.
>> The machine is ready to go and as far as I know even has a jail. Stefan
>> would know more.
>
> OK, cool. Stefan; what's your take on where we're at?

yeah there is a box and a jail I set up a while ago but for various
reasons the actual migration (planning and testing) never happened.
I'm still prepared to handle the required sysadmin level work but I
don't have time for anything more fancy right now.

Stefan

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] Initial prefetch performance testing

On Mon, 22 Sep 2008, Simon Riggs wrote:

> I'd prefer to set this as a tablespace level storage parameter.

That seems reasonable, but I'm not working at that level yet. There's
still a larger open questions about how the buffer manager interaction
will work here, and I'd like to have a better view of that first before
getting into the exact syntax used to set the parameter. For now, a GUC
works well enough, but you're right that something finer-grained may make
sense before this actually hits the codebase.

> prefetch_... is a much better name since its an existing industry term.
> I'm not in favour of introducing the concept of spindles, since I can
> almost hear the questions about ramdisks and memory-based storage.

It's possible to make a case for exposing the internal number that's
getting varied here, naming the parameter something like prefetch_depth,
and letting people set that to whatever they want. Based on the current
data I might suggest a default of 256, using 0 to turn the feature off
altogether, and a maximum of at least 8192 and possibly more.

In practice I expect there to only be a couple of popular values and the
idea of fine-tuning is a bit questionable. I think that's what Greg Stark
was driving at with how the value was re-spun. Instead of using
effective_spindle_count, you could just as easily make a case for an enum
like [off,low,medium,high] mapping to [0,16,256,8192]. From what I've
seen so far, that would reduce tweaking time in the field considerably
while not really changing the range of available behavior very much.

I will be running a set of tests on a fast SSD device before I'm done,
that's another one that I'll try once I've got the database-level tests
ready to run, too. What I expect is that it will favor 0, presumably you
might as well just read the blocks rather than advise about them when the
seek overhead is close to zero. Should be able to do a RAM disk run as
well.

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [GENERAL] match an IP address

On 22/09/2008 17:59, Joao Ferreira gmail wrote:

> I'm unable to build a LIKE or SIMILAR TO expression for matching an IP
> address

There are built-in types in PG for handling IP addresses - are they any
use to you?

If not, there's a useful site here which may get you started:

http://regexlib.com/default.aspx
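
If a regex is the route you take, a rough shape test for dotted quads can be sketched directly in SQL (it accepts out-of-range octets like 999, so it is only a first filter, not full validation):

```sql
-- matches the dotted-quad shape only; 192.168.90.3 -> true, joao -> false
select s, s ~ '^([0-9]{1,3}[.]){3}[0-9]{1,3}$' as looks_like_ip
from (values ('192.168.90.3'), ('joao')) as t(s);
```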


Ray.

------------------------------------------------------------------
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
rod@iol.ie
Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals
------------------------------------------------------------------


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [GENERAL] match an IP address

On Mon, Sep 22, 2008 at 05:59:25PM +0100, Joao Ferreira gmail wrote:
> I'm unable to build a LIKE or SIMILAR TO expression for matching an IP
> address
> 192.168.90.3
> 10.3.2.1
> any help please...

any reason not to use the standard inet datatype, which does the validation?
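
For example, the validation happens right at cast or insert time:

```sql
select '192.168.90.3'::inet;  -- accepted
select 'joao'::inet;          -- ERROR: invalid input syntax for type inet
```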

Best regards,

depesz

--
Linkedin: http://www.linkedin.com/in/depesz / blog: http://www.depesz.com/
jid/gtalk
: depesz@depesz.com / aim:depeszhdl / skype:depesz_hdl / gg:6749007

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

[lapug] Requesting LAPUG Speakers for Next Year

We have open slots for speakers starting January of 2009. I would
like to open an invitation to anyone that would like to present
something PG or DB related to LAPUG. Perhaps we could even try our
hand at having lightning talks where several volunteers give a 5 minute
discussion on a particular subject.

I look forward to hearing from you!

Also, I would like to start a dialog regarding meeting nights. Would
anyone like to propose having meetings on nights other than Fridays?


--
Regards,
Richard Broersma Jr.

Visit the Los Angeles PostgreSQL Users Group (LAPUG)
http://pugs.postgresql.org/lapug

--
Sent via lapug mailing list (lapug@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/lapug

[GENERAL] match an IP address

hello all,

I'm unable to build a LIKE or SIMILAR TO expression for matching an IP
address

192.168.90.3
10.3.2.1

any help please...


thanks
joao

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

[lapug] September's Meeting Reminder

I wanted to send out a reminder about this Friday's LAPUG Meeting.
The LAPUG website has a map to the location where we are meeting.
Also, I wanted to thank Michael who is not only providing the food but
is also presenting the discussion on SLONY.

--
Regards,
Richard Broersma Jr.

Visit the Los Angeles PostgreSQL Users Group (LAPUG)
http://pugs.postgresql.org/lapug

--
Sent via lapug mailing list (lapug@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/lapug

Re: [ADMIN] Error while trying to back up database: out of memory

As Tom mentioned, it sounds like you're being bitten by the oom
killer. If, for some reason, you cannot run with it turned off, then
add a really big swap space so it delays the onset of sudden death by
oom to something really big.

Is it possible your work_mem is set too high?
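
A quick check from psql, for example:

```sql
SHOW work_mem;              -- memory per sort/hash node, per backend
SHOW maintenance_work_mem;  -- used by VACUUM and index builds
```

Remember that work_mem can be consumed several times over by a single complex query, multiplied across all active backends.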

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin

Re: [SQL] exclusion query

On Mon, 2008-09-22 at 16:34 +0200, Louis-David Mitterrand wrote:
>
>
> To select person_type's used in a certain event_type I have this
> query:
>
> select distinct pt.type
> from person_type pt
> natural join person_to_event
> join event e using (id_event)
> natural join event_type et
> where et.type_fr='théâtre';
>
> Now, I'd like to select person_type's _not_ used in a certain
> particular
> event (say id_event=219).
>
> I can see how to build a quey to that effect, but is there a more
> obvious, clean, short solution? Something that looks like the above
> query maybe?

Taking your second email into account, I came up with:

select distinct pt.type_fr
from person_to_event pte
inner join person_type using (id_person_type)
where id_person_type in (
select id_person_type
from person_to_event pte
inner join event using (id_event)
inner join event_type using (id_event_type)
where type_fr = 'theatre'
) and id_person_type not in (
select id_person_type
from person_to_event
where id_event = 219
)

I feel like there's a solution involving group by tugging at the back of
my mind, but I can't quite put my finger on it. Sorry if this isn't
quite what you're asking for.
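
For what it's worth, the same exclusion can also be phrased with EXCEPT, staying close to the shape of the original query (a sketch, assuming the schema from the first message):

```sql
select distinct pt.type
from person_type pt
natural join person_to_event
join event e using (id_event)
natural join event_type et
where et.type_fr = 'théâtre'
except
select pt.type
from person_type pt
natural join person_to_event
where id_event = 219;
```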

-Mark


--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql

Re: [HACKERS] parallel pg_restore

On Monday, September 22, 2008, Joshua Drake wrote:
> I will not argue vehemently here but I will say that "jobs" doesn't
> seem correct. The term "workers" seems more appropriate.

Mmmm, it sounds like it depends on the implementation (and whether all workers
will share the same serializable transaction or just be independent jobs),
but my point here is more about giving the user a name they are used to.
Like in "oh, pg_restore -j, I see, thanks".

Now, if your argument is that make's concept of a job does not match the
parallel pg_restore concept of workers, I'll simply bow to your choice:
barring other "limits", English not being my native language makes it hard
for me to follow there ;)

Regards,
--
dim

Re: [HACKERS] parallel pg_restore

On Mon, 22 Sep 2008 17:24:28 +0100
Simon Riggs <simon@2ndQuadrant.com> wrote:

> > More importantly, I'm not convinced it's a good idea. It seems more
> > like a footgun that will potentially try to launch thousands of
> > simultaneous restore connections. I should have thought that
> > optimal performance would be reached at some small multiple (say
> > maybe 2?) of the number of CPUs on the server. You could achieve
> > unlimited parallelism by saying something like --jobs=99999, but
> > I'd rather that were done very explicitly instead of as the default
> > value of the parameter.
>
> OK, sounds best.
>

I will not argue vehemently here but I will say that "jobs" doesn't
seem correct. The term "workers" seems more appropriate.

Sincerely,

Joshua D. Drake

--
The PostgreSQL Company since 1997: http://www.commandprompt.com/
PostgreSQL Community Conference: http://www.postgresqlconference.org/
United States PostgreSQL Association: http://www.postgresql.us/
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [ADMIN] displaying enum

Does anyone know if there are plans afoot to add this to 8.4 psql as a
\d command? Seems it could be useful. Heck, if there was just a bit
more data in the \dT output we'd be set.
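
In the meantime, the catalog query quoted below can be extended to keep the labels in their declared order; a sketch for 8.3, where enum labels sort by pg_enum's oid (the enum name here is a placeholder):

```sql
select t.typname, e.enumlabel
from pg_type t
join pg_enum e on e.enumtypid = t.oid
where t.typname = 'name_of_enum'
order by e.oid;
```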

On Mon, Sep 22, 2008 at 8:53 AM, <chirag.dave@gmail.com> wrote:
> Assuming you are are running 8.3, you can do this:
>
> SELECT a.typname,b.enumlabel from pg_type a , pg_enum b where
> a.oid=b.enumtypid and a.typname='NAME_OF_ENUM';
>
> Chirag Dave
> Afilias
>
>
> On Mon, Sep 22, 2008 at 9:20 AM, Jagadeesh <mnjagadeesh@gmail.com> wrote:
>>
>> Hi admins,
>>
>> I had created enum and forgotten what were the values. Is there any
>> way to see what I created?
>>
>> Thanks
>>
>> --
>> Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
>> To make changes to your subscription:
>> http://www.postgresql.org/mailpref/pgsql-admin
>
>

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin

Re: [HACKERS] Initial prefetch performance testing

On Mon, 2008-09-22 at 16:46 +0100, Gregory Stark wrote:
> Simon Riggs <simon@2ndQuadrant.com> writes:
>
> > On Mon, 2008-09-22 at 04:57 -0400, Greg Smith wrote:
> >
> >> -As Greg Stark suggested, the larger the spindle count the larger the
> >> speedup, and the larger the prefetch size that might make sense. His
> >> suggestion to model the user GUC as "effective_spindle_count" looks like a
> >> good one. The sequential scan fadvise implementation patch submitted uses
> >> the earlier preread_pages name for that parameter, which I agree seems
> >> less friendly.
> >
> > Good news about the testing.
> >
> > I'd prefer to set this as a tablespace level storage parameter.
>
> Sounds like a good idea, except... what's a tablespace level storage parameter?

A storage parameter, just at tablespace level.

WITH (storage_parameter = value)
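
As a sketch of what that could look like (purely hypothetical syntax for the proposal; no such tablespace option exists, and effective_spindle_count is just the name floated earlier in the thread):

```sql
-- hypothetical: attach the prefetch hint to the tablespace
ALTER TABLESPACE fast_array SET (effective_spindle_count = 8);
```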

--
Simon Riggs www.2ndQuadrant.com
PostgreSQL Training, Services and Support


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] parallel pg_restore

On Mon, 2008-09-22 at 11:38 -0400, Andrew Dunstan wrote:
>
> Simon Riggs wrote:
> > On Mon, 2008-09-22 at 09:53 +0200, Dimitri Fontaine wrote:
> >
> >
> >>> My intention is to have single-thread restore remain the default, at
> >>> least for this go round, and have the user be able to choose
> >>> --multi-thread=nn to specify the number of concurrent connections to use.
> >>>
> >> What about the make famous -j option?
> >>
> >> -j [jobs], --jobs[=jobs]
> >> Specifies the number of jobs (commands) to run simultaneously. If
> >> there is more than one -j option, the last one is effective. If
> >> the -j option is given without an argument, make will not limit
> >> the number of jobs that can run simultaneously.
> >>
> >
> > +1
> >
> >
>
> If that's the preferred name I have no problem. I'm not sure about the
> default argument part, though.
>
> First, I'm not sure our getopt infrastructure actually provides for
> optional arguments, and I am not going to remove it in pg_restore to get
> around such a problem, at least now.
>
> More importantly, I'm not convinced it's a good idea. It seems more like
> a footgun that will potentially try to launch thousands of simultaneous
> restore connections. I should have thought that optimal performance
> would be reached at some small multiple (say maybe 2?) of the number of
> CPUs on the server. You could achieve unlimited parallelism by saying
> something like --jobs=99999, but I'd rather that were done very
> explicitly instead of as the default value of the parameter.

OK, sounds best.

--
Simon Riggs www.2ndQuadrant.com
PostgreSQL Training, Services and Support


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[pdxpug] Fwd: [ANNOUNCE] PostgreSQL 8.3.4, 8.2.10, etc. Update Release

---------- Forwarded message ----------
From: Josh Berkus <josh@postgresql.org>
Date: Mon, Sep 22, 2008 at 8:55 AM
Subject: [ANNOUNCE] PostgreSQL 8.3.4, 8.2.10, etc. Update Release
To: pgsql-announce@postgresql.org


Updates for all maintained versions of PostgreSQL are available today:
8.3.4, 8.2.10, 8.1.14, 8.0.18 and 7.4.22. These releases fix more
than thirty minor issues reported and patched over the last three
months. As some of these issues carry a risk of data loss,
administrators of production applications should upgrade in their next
possible scheduled downtime.

Additionally, this minor release also contains updates for the
following timezones to support changes in local laws: Argentina,
Bahamas, Brazil, Mauritius, Morocco, Pakistan, Palestine, and
Paraguay. Users in these timezones should plan to upgrade as soon as
possible to avoid improper calculation of daylight savings time.

Issues fixed include autovacuum crashes reported by several users, two
Heap Only Tuple bugs, a foreign key failure condition, a too-small
lock address space, two Write Ahead Log bugs, several planner
mistakes, and numerous "corner condition" bugs. See the release notes
for more details.

As with other minor releases, users are not required to dump and
reload their database in order to apply this update release; you may
simply shut down PostgreSQL and update its binaries. Users skipping
more than one update may need to check the release notes for extra,
post-update steps. As previously announced, only versions 8.2.10 and
8.3.4 of the Windows binaries are being released, as we no longer
support 8.0 and 8.1 on Windows.

Release Notes
http://www.postgresql.org/docs/current/static/release.html
Source Code
http://www.postgresql.org/ftp/source
Binaries
http://www.postgresql.org/ftp/binary

---------------------------(end of broadcast)---------------------------
To unsubscribe from this list, send an email to:

pgsql-announce-unsubscribe@postgresql.org

--
Selena Deckelmann
PDXPUG - http://pugs.postgresql.org/pdx
Me - http://www.chesnok.com/daily

--
Sent via pdxpug mailing list (pdxpug@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pdxpug

[ANNOUNCE] Version 2.1.4 of check_postgres.pl released

Version 2.1.4 of check_postgres.pl, a Postgres monitoring script, has been
released. This script checks on over 20 different metrics, from disk space
to relation bloat to number of WAL files, and can output in a Nagios or
MRTG friendly manner. The only requirement other than Perl is access to a
'psql' binary. Full documentation and download information can be found at:

http://bucardo.org/check_postgres/

--
Greg Sabino Mullane greg@endpoint.com
End Point Corporation

[ANNOUNCE] PostgreSQL 8.3.4, 8.2.10, etc. Update Release

Updates for all maintained versions of PostgreSQL are available today:
8.3.4, 8.2.10, 8.1.14, 8.0.18 and 7.4.22. These releases fix more than
thirty minor issues reported and patched over the last three months. As
some of these issues carry a risk of data loss, administrators of
production applications should upgrade in their next possible scheduled
downtime.

Additionally, this minor release also contains updates for the following
timezones to support changes in local laws: Argentina, Bahamas, Brazil,
Mauritius, Morocco, Pakistan, Palestine, and Paraguay. Users in these
timezones should plan to upgrade as soon as possible to avoid improper
calculation of daylight savings time.

Issues fixed include autovacuum crashes reported by several users, two
Heap Only Tuple bugs, a foreign key failure condition, a too-small lock
address space, two Write Ahead Log bugs, several planner mistakes, and
numerous "corner condition" bugs. See the release notes for more details.

As with other minor releases, users are not required to dump and reload
their database in order to apply this update release; you may simply
shut down PostgreSQL and update its binaries. Users skipping more than
one update may need to check the release notes for extra, post-update
steps. As previously announced, only versions 8.2.10 and 8.3.4 of the
Windows binaries are being released, as we no longer support 8.0 and 8.1
on Windows.

Release Notes
http://www.postgresql.org/docs/current/static/release.html
Source Code
http://www.postgresql.org/ftp/source
Binaries
http://www.postgresql.org/ftp/binary

---------------------------(end of broadcast)---------------------------
To unsubscribe from this list, send an email to:

pgsql-announce-unsubscribe@postgresql.org