Friday, September 19, 2008

[GENERAL] [OT] CSS Mailinglist?

Hello,

I am changing my website from crappy HTML tables to CSS :-D and need
some help, but I have failed to find a mailing list for it.

Does anyone know of one?

Note: I can not use ANY news groups, since my French GSM provider
(Bouygues Telecom) considers them a streaming protocol and
disallows them.

Thanks, Greetings and nice Day/Evening
Michelle Konzack
Systemadministrator
24V Electronic Engineer
Tamay Dogan Network
Debian GNU/Linux Consultant


--
Linux-User #280138 with the Linux Counter, http://counter.li.org/
##################### Debian GNU/Linux Consultant #####################
Michelle Konzack Apt. 917 ICQ #328449886
+49/177/9351947 50, rue de Soultz MSN LinuxMichi
+33/6/61925193 67100 Strasbourg/France IRC #Debian (irc.icq.com)

Re: [GENERAL] psql scripting tutorials

On 2008-09-11 10:03:03, Roderick A. Anderson wrote:
> Whatever happened to pgbash? I see the last update was Feb 2003 but
> that was for Pg v7.3.

I tried it some time ago with 7.4, but got too many errors...
...and gave up.

Thanks, Greetings and nice Day/Evening
Michelle Konzack

Re: [pgus-general] Nominations for PgUS Board: Weekly update

On Fri, 19 Sep 2008 14:09:49 -0400
"Michael Alan Brewer" <mbrewer@gmail.com> wrote:

> On Fri, Sep 19, 2008 at 2:09 PM, Joshua Drake <jd@commandprompt.com>
> wrote:
> >
> > Thanks for the update Michael, do you have an ETA on when the
> > platforms will be up on the website?
>
> I was planning on putting them up after the close of nominations (so
> no candidate would have an advantage).

Oh, that seems reasonable.

Sincerely,

Joshua D. Drake


--
The PostgreSQL Company since 1997: http://www.commandprompt.com/
PostgreSQL Community Conference: http://www.postgresqlconference.org/
United States PostgreSQL Association: http://www.postgresql.us/
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate

--
Sent via pgus-general mailing list (pgus-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgus-general

Re: [HACKERS] gsoc, oprrest function for text search take 2

Jan Urbański <j.urbanski@students.mimuw.edu.pl> writes:
> Attached is a version that stores the minimal and maximal frequencies in
> the Numbers array, has the aforementioned assertion and more nicely
> ordered functions in ts_selfuncs.c.

Applied with some small corrections.

regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[COMMITTERS] pgsql: Create a selectivity estimation function for the text search @@

Log Message:
-----------
Create a selectivity estimation function for the text search @@ operator.

Jan Urbanski

Modified Files:
--------------
pgsql/doc/src/sgml:
catalogs.sgml (r2.174 -> r2.175)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/doc/src/sgml/catalogs.sgml?r1=2.174&r2=2.175)
pgsql/src/backend/tsearch:
Makefile (r1.7 -> r1.8)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/backend/tsearch/Makefile?r1=1.7&r2=1.8)
ts_typanalyze.c (r1.1 -> r1.2)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/backend/tsearch/ts_typanalyze.c?r1=1.1&r2=1.2)
pgsql/src/include/catalog:
catversion.h (r1.486 -> r1.487)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/include/catalog/catversion.h?r1=1.486&r2=1.487)
pg_operator.h (r1.162 -> r1.163)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/include/catalog/pg_operator.h?r1=1.162&r2=1.163)
pg_proc.h (r1.514 -> r1.515)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/include/catalog/pg_proc.h?r1=1.514&r2=1.515)
pg_statistic.h (r1.36 -> r1.37)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/include/catalog/pg_statistic.h?r1=1.36&r2=1.37)
pgsql/src/include/tsearch:
ts_type.h (r1.13 -> r1.14)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/include/tsearch/ts_type.h?r1=1.13&r2=1.14)

Added Files:
-----------
pgsql/src/backend/tsearch:
ts_selfuncs.c (r1.1)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/backend/tsearch/ts_selfuncs.c?rev=1.1&content-type=text/x-cvsweb-markup)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

Re: [GENERAL] offtopic, about subject prefix

On 2008-09-03 13:33:05, Fernando Moreno wrote:
> Hello, I'm new to this mailing list, and I have a couple of questions:
>
> Is it really necessary to add the [GENERAL] prefix?

No, it is not, since the PostgreSQL lists can be filtered perfectly with:

----[ /usr/share/tdtools-procmail/ML_pgsql ]----------------------------
<snip>
# Capture the list name after "pgsql-" from the X-Mailing-List header
# and deliver into a per-list maildir folder.
:0
* ^X-Mailing-List:.*pgsql-\/[-a-zA-Z0-9].*
{
TMPVAR=${MATCH}
:0
.ML_pgsql.${TMPVAR}/
}
#---------------------------------------------------------------------
# Fall back to the List-Post header: capture the posting address, turn
# dots into underscores, and lowercase it to form the folder name.
:0
* ^List-Post:.*mailto:[-.@a-zA-Z0-9]+>
* ^List-Post:.*mailto:\/[-.@a-zA-Z0-9]+
{
TMPVAR=`echo "${MATCH}" |tr '.' '_' |sed 'y|ABCDEFGHIJKLMNOPQRSTUVWXYZ|abcdefghijklmnopqrstuvwxyz|'`
:0
.ML_pgsql.${TMPVAR}/
}
<snip>
------------------------------------------------------------------------


> Are messages without this prefix likely to be ignored by automatic filters
> or something like that?


Thanks, Greetings and nice Day/Evening
Michelle Konzack

Re: [HACKERS] Proposal of SE-PostgreSQL patches (for CommitFest:Sep)

Robert Haas wrote:
>> It's too early to vote. :-)
>>
>> The second and third options have prerequisites.
>> The purpose of them is to match the granularity of access controls
>> provided by SE-PostgreSQL and native PostgreSQL. However, I have
>> not seen a clear reason why these different security mechanisms
>> have to have the same granularity in access controls.
>
> Have you seen a clear reason why they should NOT have the same granularity?

I don't deny that two different security mechanisms can have the same
granularity. It is a choice by the architect; they can have the same or
different granularity.

> I realize that SELinux has become quite popular and that a lot of
> people use it - but certainly not everyone. There might be some parts
> of the functionality that are not really severable, and if that is the
> case, fine. But I think there should be some consideration of which
> parts can be usefully exposed via SQL and which can't. If the parts
> that can be are independently useful, then I think they should be
> available, but ultimately that's a judgment call and people may come
> to different conclusions.

Yes, I agree with your opinion.

SE-PostgreSQL is designed to achieve several targets at the same time:
- Collaboration with the operating system
- Mandatory access control
- Fine-grained access control

If someone wants only the last feature, SE-PostgreSQL provides too much
for them. However, it is designed so that the security mechanism can be
replaced easily.

Have you heard of the PGACE security framework?
It is designed with reference to LSM, and provides several hooks to invoke
a security mechanism. It checks whether the given query is legal or not.

In my second option, I will try to implement similar functionality which
provides "fine-grained-only" controls on the PGACE security framework.
However, the security mechanisms have a horizontal relationship, not a
hierarchy or a dependency, so we can make progress in parallel.
(And they can have individual granularity and so on.)
Therefore, I think the nonexistence of a "fine-grained-only" mechanism should
not block other mechanisms from joining the development cycle.

If it is really, really necessary, I will try to implement the 2nd
option in time for CommitFest:Nov.
But ideally, I want to concentrate on merging SE-PostgreSQL during the v8.4
development cycle.

And I understand there are folks who want only the "fine-grained-only" one.
If possible, I want to design and implement it for the v8.5 development cycle,
with enough days.
Unfortunately, the remaining days are a bit short...

Thanks,
--
KaiGai Kohei <kaigai@kaigai.gr.jp>

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[GENERAL] PDF Documentation for 8.3?

Hello,

I am using Debian GNU/Linux Etch with PostgreSQL 8.1.11, and since the
next release of Debian will use 8.3, I am searching for documentation
which can be printed out...

My last printed version was "Practical PostgreSQL" from O'Reilly, which
covers only 7.4.

I was searching the site, but there are no PDFs for 8.3 in A4 format, or
am I missing something?

Note: The American "Letter" format sucks, because I print
two A4 pages on ONE A4 side, and with the "Letter" format
I get very large borders...

Thanks, Greetings and nice Day/Evening
Michelle Konzack

Re: [HACKERS] 8.3.1 autovacuum stopped doing anything months ago

On Fri, Sep 19, 2008 at 11:42 AM, Robert Treat <xzilla@users.sourceforge.net> wrote:
> On Friday 19 September 2008 00:23:34 Jeffrey Baker wrote:
> > Anyway, I have some issues.  One, of course, is that the autovacuum should
> > not have been deadlocked or otherwise stalled like that.  Perhaps it needs
> > a watchdog of some kind.  Has anyone else experienced an issue like that in
> > 8.3.1?  The only thing I can see in the release notes that indicates this
> > problem may have been fixed is the following:
>
> We have several checks in the check_postgres script which are in this area

Are you referring to the Nagios plugin?  I already use it, and Nagios didn't make a peep.  Perhaps I should check for a more recent revision.

-jwb
 

Re: [PERFORM] why does this use the wrong index?

On Fri, 2008-09-19 at 11:25 -0700, Jeff Davis wrote:
> What's the n_distinct for start_time?

Actually, I take that back. Apparently, PostgreSQL can't change "x
BETWEEN y AND y" into "x=y", so PostgreSQL can't use n_distinct at all.

That's your problem. If it's one day only, change it to equality and it
should be fine.
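Sketching Jeff's suggestion with the ad_log/start_time names used elsewhere in this thread (the date literal is invented, and this assumes start_time values fall exactly on the day boundary):

```sql
-- The planner cannot collapse a degenerate BETWEEN into equality,
-- so it cannot use n_distinct for this form:
SELECT * FROM ad_log
WHERE start_time BETWEEN '2008-09-19' AND '2008-09-19';

-- Writing the one-day case as equality lets the planner use n_distinct:
SELECT * FROM ad_log
WHERE start_time = '2008-09-19';
```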

Regards,
Jeff Davis


--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

Re: [HACKERS] 8.3.1 autovacuum stopped doing anything months ago

On Friday 19 September 2008 00:23:34 Jeffrey Baker wrote:
> Anyway, I have some issues. One, of course, is that the autovacuum should
> not have been deadlocked or otherwise stalled like that. Perhaps it needs
> a watchdog of some kind. Has anyone else experienced an issue like that in
> 8.3.1? The only thing I can see in the release notes that indicates this
> problem may have been fixed is the following:
>

We have several checks in the check_postgres script which are in this area
(warnings for approaching autovacuum freeze max age, warnings when approaching
xid wrap, monitoring of tables' analyze/vacuum activity). Those can at least
alert you to problems before they become too big a hassle.

> Secondly, there really does need to be an autovacuum=off,really,thanks so
> that my maintenance can proceed without competition for i/o resources. Is
> there any way to make that happen? Is my SIGSTOP idea dangerous?

If Heikki's solution applies, it's better (see also vacuum_freeze_min_age), but
if it's too late for that, you can go into single-user mode, which will
prevent autovacuum; it's a bit more heavy-handed though.

--
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] [PATCHES] libpq events patch (with sgml docs)

On Fri, Sep 19, 2008 at 2:14 PM, Andrew Chernow <ac@esilo.com> wrote:
>
> BTW, the event system might be an alternative solution for PGNoticeHooks
> (PGEVT_NOTICE).
>

Another possible use of the event hooks -- just spitballing here -- is
to generate an event when a notification comes through (you would
still receive events the old way, by command or PQconsumeInput).
Maybe this would eventually replace the current notification interface
(wasn't this going to be changed anyway?)

merlin

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [JDBC] Postgresql JDBC UTF8 Conversion Throughput

From: Kris Jurka <books@ejurka.com>
Date: September 19, 2008 12:29:45 AM PDT
To: Paul Lindner <lindner@inuus.com>
Subject: Re: Postgresql JDBC UTF8 Conversion Throughput



On Mon, 2 Jun 2008, Paul Lindner wrote:

It turns out that using more than two character sets in your Java
application causes very poor throughput because of synchronization
overhead.  I wrote about this here:

http://paul.vox.com/library/post/the-mysteries-of-java-character-set-performance.html


Very interesting.

In Java 1.6 there's an easy way to fix this charset lookup problem.
Just create a static Charset for UTF-8 and pass that to getBytes(...)
instead of the string constant "UTF-8".

Note that this is actually a performance hit (when you aren't stuck doing charset lookups), see

http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6633613

For backwards compatibility with Java 1.4 you can use the attached
patch instead.  It uses nio classes to do the UTF-8 to byte
conversion.


This is also a performance loser in the simple case.  The attached test case shows times of:

Doing 10000000 iterations of each.
2606 getBytes(String)
6200 getBytes(Charset)
3346 via ByteBuffer

It would be nice to fix the blocking problem, but it seems like a rather unusual situation to be in (being one charset over the two-charset cache). If you've got more than three charsets in play, then fixing the JDBC driver won't help you, because at most it could eliminate one.  So I'd like the driver to be a good citizen, but I'm not convinced the performance hit is worth it without having some more field reports or benchmarks.

Maybe it depends on how much reading vs. writing is done.  Right now we have our own UTF8 decoder, so this hit only happens when encoding data to send to the DB.  If you're loading a lot of data this might be a problem, but if you're sending a small query with a couple of parameters, then perhaps the thread safety is more important.


Hi Kris,

getBytes(String) when using a constant string will always win.  StringCoding.java (see http://www.docjar.net/html/api/java/lang/StringCoding.java.html)  caches the charset locally.

When you use 2 or more character sets, getBytes(Charset) and getBytes(String) single-thread performance is about the same, with getBytes(String) slightly ahead.  ByteBuffer ends up being the big winner:

Doing 10000000 iterations of each for string - 'abcd1234'
15662 getBytes(Charset)
14958 getBytes(String)
10098 via ByteBuffer

In any case all of this only pertains to single thread performance.  Our web apps are running on 8 and 16 core systems where contention is the biggest performance killer.
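The cached-Charset idea under discussion can be sketched as follows on a modern JVM (Java 7+ provides StandardCharsets; the class name here is hypothetical). Whether it wins over getBytes(String) depends on how many charsets are in play, as the benchmarks in this thread show:

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class CharsetDemo {
    // A single shared Charset object: passing this to getBytes(Charset)
    // skips the per-call charset-name lookup (and its synchronization)
    // that getBytes("UTF-8") performs when several charsets are in play.
    static final Charset UTF8 = StandardCharsets.UTF_8;

    static byte[] encode(String s) {
        return s.getBytes(UTF8);
    }

    public static void main(String[] args) throws Exception {
        byte[] viaCharset = encode("abcd1234");
        byte[] viaName = "abcd1234".getBytes("UTF-8");
        // Both paths produce identical bytes; only the lookup cost differs.
        System.out.println(Arrays.equals(viaCharset, viaName));
    }
}
```

On Java 1.4-1.6, where StandardCharsets does not exist, a Charset obtained once via Charset.forName("UTF-8") and held in a static field plays the same role.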

Re: [HACKERS] Where to Host Project

On Friday 19 September 2008 14:05:36 David E. Wheeler wrote:
> On Sep 18, 2008, at 18:43, Robert Treat wrote:
> >> * Google Code
> >
> > does not offer mailing lists
>
> I get mail for the test-more project there. It's through Google
> Groups, which is a little weird, but works.
>

I didn't think there was any integration between those two services, but maybe
there is (i.e. sign up for an account on Google Code and you have a Google
Groups login as well). Otherwise Google Groups can be considered a solution
for GitHub's lack of mailing lists as well. (Incidentally, GitHub has some
neat automated webhooks for its git repos, like automatically sending email
to a mailing list, or to a Basecamp site, or dozens of other places. Sure,
this can be done with other services, but GitHub makes it very easy.)

> >> * LaunchPad
> >
> > does not offer svn or git, and i think they dont offer a home page
> > service
>
> It uses Bazaar. WTF is that? I've never heard of it.

It is another distributed version control system, similar to
git/monotone/etc... very popular in the MySQL crowd (and I suppose gaining
more popularity in the Ubuntu crowd as well).

> > Just for the record, you have overlooked SourceForge. While it
> > appears to have
> > fallen out of favor with the open source crowd, it is the one
> > service that
> > does provide everything you wanted.
>
> Good point. I've not used it in years. Last time I looked the mail
> archives still sucked pretty hard. Otherwise, now that it has SVN, and
> if it has eliminated the performance problems, it might just do the
> trick.
>

Performance is nothing special, and its mail archive search interface is still
pretty crappy, but that's what local mail is for :-) I think the key to
SourceForge is that it's complete and it works pretty well most of the time.

> > I've been saying for some time now we need to get out of the project
> > hosting
> > service, and get into the project directory service. What we really
> > want is
> > to make it easy for people to find postgresql related projects,
> > regardless of
> > where they are.
>
> That's an excellent idea. Do you have a plan for this?
>

We already have a product catalog on postgresql.org
(http://www.postgresql.org/download/product-categories), so I think the plan
would be something like: 1) no new projects on pgFoundry, 2) announce 6 months
to move your project off of pgFoundry, and 3) shut it down. The downside is
that this causes upheaval for projects currently on pgFoundry, breaks all kinds
of links, and generally leads to problems similar to those we had when we shut
down gborg, but it might be best in the long run.

Still, I don't think most people have bought into the idea that we shouldn't be
hosting projects anymore, so I haven't put much effort into this.

--
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [PERFORM] why does this use the wrong index?

> So, what can I do to encourage Postgres to use the first index even when the
> date range is smaller?
>

It looks like PostgreSQL is estimating the selectivity of your date
ranges poorly. In the second (bad) plan it estimates that the index scan
with the filter will return 1 row (and that's probably because it
estimates that the date range you specify will match only one row).

This leads PostgreSQL to choose the narrower index because, if the index
scan is only going to return one row anyway, it might as well scan the
smaller index.

What's the n_distinct for start_time?

=> select n_distinct from pg_stats where tablename='ad_log' and
attname='start_time';

If n_distinct is near -1, that would explain why it thinks that it will
only get one result.

Based on the difference between the good index scan (takes 0.056ms per
loop) and the bad index scan with the filter (311ms per loop), the
"player" condition must be very selective, but PostgreSQL doesn't care
because it already thinks that the date range is selective.

Regards,
Jeff Davis


--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

Re: [ODBC] compiling odbc

I got export PATH=$PATH:/usr/local/pgsql to work

but now I get:
"configure: error: unixODBC library "odbcinst" not found"

I found that "odbcinst" is in my /opt/local/bin

How do I tell configure that pgsql is in /usr/local/pgsql and ALSO tell it that odbcinst is in /opt/local/bin?
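One common way to handle this with an autoconf-style configure is to export the paths and pass compiler/linker flags; here is a sketch using the paths from this thread (whether psqlodbc's configure picks up unixODBC purely from CPPFLAGS/LDFLAGS is an assumption — check ./configure --help for a --with-unixodbc-style option first):

```shell
# Paths taken from this thread; adjust to your machine.
PG_BIN=/usr/local/pgsql/bin     # pg_config lives here
ODBC_PREFIX=/opt/local          # unixODBC (odbcinst) installed under here

export PATH="$PATH:$PG_BIN:$ODBC_PREFIX/bin"
export PG_CONFIG="$PG_BIN/pg_config"

# Point the compiler and linker at unixODBC's headers and libraries:
CONFIGURE_CMD="./configure CPPFLAGS=-I$ODBC_PREFIX/include LDFLAGS=-L$ODBC_PREFIX/lib"
echo "$CONFIGURE_CMD"
```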




----- Original Message ----
From: Jeremy Faith <jfaith@cemsys.com>
To: pgsql-odbc@postgresql.org
Sent: Friday, September 19, 2008 6:35:23 AM
Subject: Re: [ODBC] compiling odbc

Hi,

Brent Austin wrote:
> My pg_config file is in:  /usr/local/pgsql/bin  so how do I
> tell terminal to see that it's there instead of where it thinks it is?
>
> would this fix it:
>
> export PATH=$PATH:/usr/local/psql

Could this be as simple as the 'g' being missing from the pgsql directory in the PATH?
i.e.
  export PATH=$PATH:/usr/local/psql
should be
  export PATH=$PATH:/usr/local/pgsql
and
  export PG_CONFIG=/usr/local/psql/bin/pg_config
should be
  export PG_CONFIG=/usr/local/pgsql/bin/pg_config

Regards,
Jeremy Faith


Albe Laurenz wrote:
>> So far this is what I get when I try that command-
>>
>> client-66-1xx-17-xx4:~ brent1a$ export PATH=$PATH:/usr/local/psql/bin
>>
>> client-66-1xx-17-xx4:~ brent1a$ cd /psqlodbc-08.03.0200
>>
>> client-66-1xx-17-xx4:psqlodbc-08.03.0200 brent1a$ sudo ./configure
>>
>>   
> [...]

>> configure: error: pg_config not found (set PG_CONFIG environment variable)
>>   
>
> Yeah, sure.
>
> You should omit the "sudo", it's wrong.
>
> But maybe it's best to follow the instructions you get and
> export PG_CONFIG=/usr/local/psql/bin/pg_config
>
> Yours,
> Laurenz Albe
>



--
Sent via pgsql-odbc mailing list (pgsql-odbc@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-odbc

Re: [pgadmin-hackers] pgadmin and cmake

use ccmake instead of cmake (or use the GUI) - the config names it
displays can be used with -D on the cmake command line.

On 9/19/08, Magnus Hagander <magnus@hagander.net> wrote:
> Guillaume Lelarge wrote:
>> Magnus Hagander wrote:
>>> Guillaume Lelarge wrote:
>>>> Magnus Hagander wrote:
>>>>> [...]
>>>>> A super-quick primer to get going. First of all, cmake "prefers"
>>>>> building outside the source directory, so here's a typical way to do it
>>>>> (assuming your pgadmin directory is "pgadmin3"):
>>>>>
>>>>> mkdir ../pgadmin3-build
>>>>> cd ../pgadmin3-build
>>>>> cmake -D CMAKE_INSTALL_PREFIX=/tmp/pgadmin_test_install ../pgadmin3
>>>>> make
>>>>>
>>>> I tried the cmake command on a Kubuntu 8.04 (the one I also use for
>>>> pgAdmin's development). I had a few error messages (see the attached
>>>> file). I don't really know what this all means. Perhaps you do know?
>>>>
>>> Strange. I had "svn add"-ed the directory "cmake", but it didn't get
>>> included in the commit. "svn commit" would not commit it. "svn add"
>>> said it was already added...
>>>
>>> I removed the whole thing and re-committed, please try again.
>>>
>>
>> I had an issue: wx was not available. My wx build is in /opt/wx-2.8. I
>> use the --with-wx option with configure. Is there a similar switch with
>> cmake?
>
> Nope.
>
> Hmm, not entirely sure since that find wx module is from Dave ;-). But I
> think you can either:
>
> 1) Add the directory of wx-config to your PATH before you run it
> 2) I think you can try to add -DCMAKE_PROGRAM_PATH=/opt/wx-2.8/bin (or
> wherever wx-config is) to the commandline
>
>
> //Magnus
>
> --
> Sent via pgadmin-hackers mailing list (pgadmin-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgadmin-hackers
>


--
Dave Page
EnterpriseDB UK: http://www.enterprisedb.com

--
Sent via pgadmin-hackers mailing list (pgadmin-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgadmin-hackers

Re: [pgadmin-hackers] pgadmin and cmake

Guillaume Lelarge wrote:
> Magnus Hagander wrote:
>> Guillaume Lelarge wrote:
>>> Magnus Hagander wrote:
>>>> [...]
>>>> A super-quick primer to get going. First of all, cmake "prefers"
>>>> building outside the source directory, so here's a typical way to do it
>>>> (assuming your pgadmin directory is "pgadmin3"):
>>>>
>>>> mkdir ../pgadmin3-build
>>>> cd ../pgadmin3-build
>>>> cmake -D CMAKE_INSTALL_PREFIX=/tmp/pgadmin_test_install ../pgadmin3
>>>> make
>>>>
>>> I tried the cmake command on a Kubuntu 8.04 (the one I also use for
>>> pgAdmin's development). I had a few error messages (see the attached
>>> file). I don't really know what this all means. Perhaps you do know?
>>>
>> Strange. I had "svn add"-ed the directory "cmake", but it didn't get
>> included in the commit. "svn commit" would not commit it. "svn add"
>> said it was already added...
>>
>> I removed the whole thing and re-committed, please try again.
>>
>
> I had an issue: wx was not available. My wx build is in /opt/wx-2.8. I
> use the --with-wx option with configure. Is there a similar switch with cmake?

Nope.

Hmm, not entirely sure since that find wx module is from Dave ;-). But I
think you can either:

1) Add the directory of wx-config to your PATH before you run it
2) I think you can try to add -DCMAKE_PROGRAM_PATH=/opt/wx-2.8/bin (or
wherever wx-config is) to the commandline
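The two options above can be sketched like this (wx prefix from this thread; the cmake arguments are illustrative, not verified against pgAdmin's build):

```shell
# Hypothetical wx install prefix from this thread:
WX_PREFIX=/opt/wx-2.8

# Option 1: put wx-config on the PATH before running cmake
export PATH="$WX_PREFIX/bin:$PATH"

# Option 2: point cmake at the directory explicitly
CMAKE_ARGS="-DCMAKE_PROGRAM_PATH=$WX_PREFIX/bin -DCMAKE_INSTALL_PREFIX=/tmp/pgadmin_test_install"
echo "cmake $CMAKE_ARGS ../pgadmin3"
```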


//Magnus

--
Sent via pgadmin-hackers mailing list (pgadmin-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgadmin-hackers

Re: [pgsql-es-ayuda] Organizacion del PSDP-es OT

I don't see any problem with using that hosting, although it is indeed absurd to use MySQL.
--
TIP 1: to subscribe and unsubscribe, visit http://archives.postgresql.org/pgsql-es-ayuda

Re: [HACKERS] [PATCHES] libpq events patch (with sgml docs)

Tom Lane wrote:
>
> I'll go ahead and apply this patch in a little bit, but if you concur
> with the above reasoning, please put together a followon patch to add
> such a function.
>
> regards, tom lane
>
>

I attached a patch. You have to copy the events in PQmakeEmptyPGResult
because there is nowhere else to do this, other than copyresult, but
that is different because it copies from a result, not a conn.

PQmakeEmptyPGResult - must copy events here
PQsetResultAttrs - set attributes
PQsetvalue - set tuple values
PQfireResultCreateEvents(conn,res) - now fire resultcreate event

PQgetResult now calls PQfireResultCreateEvents.

BTW, the event system might be an alternative solution for PGNoticeHooks
(PGEVT_NOTICE).

--
Andrew Chernow
eSilo, LLC
every bit counts
http://www.esilo.com/

Re: [pgadmin-hackers] pgadmin and cmake

Magnus Hagander wrote:
> Guillaume Lelarge wrote:
>> Magnus Hagander wrote:
>>> [...]
>>> A super-quick primer to get going. First of all, cmake "prefers"
>>> building outside the source directory, so here's a typical way to do it
>>> (assuming your pgadmin directory is "pgadmin3"):
>>>
>>> mkdir ../pgadmin3-build
>>> cd ../pgadmin3-build
>>> cmake -D CMAKE_INSTALL_PREFIX=/tmp/pgadmin_test_install ../pgadmin3
>>> make
>>>
>> I tried the cmake command on a Kubuntu 8.04 (the one I also use for
>> pgAdmin's development). I had a few error messages (see the attached
>> file). I don't really know what this all means. Perhaps you do know?
>>
>
> Strange. I had "svn add"-ed the directory "cmake", but it didn't get
> included in the commit. "svn commit" would not commit it. "svn add"
> said it was already added...
>
> I removed the whole thing and re-committed, please try again.
>

I had an issue: wx was not available. My wx build is in /opt/wx-2.8. I
use the --with-wx option with configure. Is there a similar switch with cmake?


--
Guillaume.
http://www.postgresqlfr.org
http://dalibo.com

--
Sent via pgadmin-hackers mailing list (pgadmin-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgadmin-hackers

Re: [HACKERS] Where to Host Project

On Fri, 19 Sep 2008 11:05:36 -0700
"David E. Wheeler" <david@kineticode.com> wrote:

> >> * LaunchPad
> >
> > does not offer svn or git, and i think they dont offer a home page
> > service
>
> It uses Bazaar. WTF is that? I've never heard of it.

Another git/Mercurial/monotone-style SCM. It does, however, allow
interaction with things like remote git and svn repos :)

Joshua D. Drake

--
The PostgreSQL Company since 1997: http://www.commandprompt.com/
PostgreSQL Community Conference: http://www.postgresqlconference.org/
United States PostgreSQL Association: http://www.postgresql.us/
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [pgus-general] Nominations for PgUS Board: Weekly update

On Fri, Sep 19, 2008 at 2:09 PM, Joshua Drake <jd@commandprompt.com> wrote:
>
> Thanks for the update Michael, do you have an ETA on when the platforms
> will be up on the website?

I was planning on putting them up after the close of nominations (so
no candidate would have an advantage).

---Michael Brewer
mbrewer@gmail.com

--
Sent via pgus-general mailing list (pgus-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgus-general

Re: [pgus-general] Nominations for PgUS Board: Weekly update

On Fri, 19 Sep 2008 12:55:02 -0400
"Michael Alan Brewer" <mbrewer@gmail.com> wrote:

> Greetings, y'all! This is the weekly update on the state of
> nominations for the PgUS board.

Thanks for the update Michael, do you have an ETA on when the platforms
will be up on the website?

Sincerely,

Joshua D. Drake

--
The PostgreSQL Company since 1997: http://www.commandprompt.com/
PostgreSQL Community Conference: http://www.postgresqlconference.org/
United States PostgreSQL Association: http://www.postgresql.us/
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate

--
Sent via pgus-general mailing list (pgus-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgus-general

Re: [HACKERS] Where to Host Project

On Sep 19, 2008, at 01:25, Dimitri Fontaine wrote:

> There's a French non-profit team offering those:
> http://tuxfamily.org/en/main
>
> You can even take their open source hosting facility software and
> offer your
> own services based on it, and/or extend their perl code to add new
> features.
> I tried to talk pgfoundry admins into this solution in the past, but I
> understand maintaining pgfoundry is a PITA.

Looks pretty interesting. I've never heard of it. Anyone else have
experience with it?

Thanks,

David


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] Where to Host Project

On Sep 18, 2008, at 19:01, Alvaro Herrera wrote:

> Why not host the code on (say) GitHub, and the rest of the stuff on
> pgFoundry?

That's kind of what I'm doing now. But I'm wondering if I should
bother with pgFoundry at all. It seems pretty dead (see Josh Berkus's
reply).

Best,

David


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] Where to Host Project

On Sep 18, 2008, at 18:43, Robert Treat wrote:

>> * Google Code
>
> does not offer mailing lists

I get mail for the test-more project there. It's through Google
Groups, which is a little weird, but works.

>> * LaunchPad
>
> does not offer svn or git, and i think they dont offer a home page
> service

It uses Bazaar. WTF is that? I've never heard of it.

>> * WebFaction
>
> don't really know anything about these guys, but I thought they did web
> hosting, not project hosting.

Yeah, looks that way.

> Just for the record, you have overlooked SourceForge. While it
> appears to have fallen out of favor with the open source crowd,
> it is the one service that does provide everything you wanted.

Good point. I've not used it in years. Last time I looked the mail
archives still sucked pretty hard. Otherwise, now that it has SVN, and
if it has eliminated the performance problems, it might just do the
trick.

> I've been saying for some time now we need to get out of the project
> hosting
> service, and get into the project directory service. What we really
> want is
> to make it easy for people to find postgresql related projects,
> regardless of
> where they are.

That's an excellent idea. Do you have a plan for this?

Thanks,

David



Re: [GENERAL] Oracle and Postgresql

Am 2008-09-15 10:12:08, schrieb Joshua Drake:
> Are we going to start a VI vs Emacs argument too?

They are out of the competition, since I am using mc (Midnight Commander). :-P

Thanks, Greetings and nice Day/Evening
Michelle Konzack
Systemadministrator
24V Electronic Engineer
Tamay Dogan Network
Debian GNU/Linux Consultant


--
Linux-User #280138 with the Linux Counter, http://counter.li.org/
##################### Debian GNU/Linux Consultant #####################
Michelle Konzack Apt. 917 ICQ #328449886
+49/177/9351947 50, rue de Soultz MSN LinuxMichi
+33/6/61925193 67100 Strasbourg/France IRC #Debian (irc.icq.com)

Re: [HACKERS] Assert Levels

greg

On 19 Sep 2008, at 13:20, Tom Lane <tgl@sss.pgh.pa.us> wrote:

> Simon Riggs <simon@2ndQuadrant.com> writes:
>> Can we introduce levels of assertion?
>
> The thing that is good about Assert() is that it doesn't require a lot
> of programmer effort to put one in. I'm not in favor of complexifying
> it.
>

Perhaps just an Assert_expensive() would be useful, if someone wants
to do the work of going through all the assertions and determining
which ones are especially expensive. We already have stuff like CLOBBER*.

You'll also have to do enough empirical tests to convince people that
a --enable-cheap-casserts build really does perform the same as a
regular build.


> regards, tom lane
>


Re: [HACKERS] Proposal of SE-PostgreSQL patches (for CommitFest:Sep)

> It's too early to vote. :-)
>
> The second and third options have a prerequisite.
> Their purpose is to match the granularity of access controls
> provided by SE-PostgreSQL and native PostgreSQL. However, I have
> not seen a clear reason why these different security mechanisms
> have to have the same granularity of access controls.

Have you seen a clear reason why they should NOT have the same granularity?

I realize that SELinux has become quite popular and that a lot of
people use it - but certainly not everyone. There might be some parts
of the functionality that are not really severable, and if that is the
case, fine. But I think there should be some consideration of which
parts can be usefully exposed via SQL and which can't. If the parts
that can be are independently useful, then I think they should be
available, but ultimately that's a judgment call and people may come
to different conclusions.

...Robert

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [pgadmin-hackers] pgadmin and cmake

Guillaume Lelarge wrote:
> Magnus Hagander wrote:
>> [...]
>> A super-quick primer to get going. First of all, cmake "prefers"
>> building outside the source directory, so here's a typical way to do it
>> (assuming your pgadmin directory is "pgadmin3"):
>>
>> mkdir ../pgadmin3-build
>> cd ../pgadmin3-build
>> cmake -D CMAKE_INSTALL_PREFIX=/tmp/pgadmin_test_install ../pgadmin3
>> make
>>
>
> I tried the cmake command on a Kubuntu 8.04 (the one I also use for
> pgAdmin's development). I got a few error messages (see the attached
> file). I don't really know what all this means. Perhaps you do?
>

Strange. I had "svn add"-ed the directory "cmake", but it didn't get
included in the commit. "svn commit" would not commit it, and "svn add"
said it was already added...

I removed the whole thing and re-committed, please try again.

//Magnus

--
Sent via pgadmin-hackers mailing list (pgadmin-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgadmin-hackers

[pgadmin-hackers] SVN Commit by mha: r7486 - in trunk/pgadmin3: . cmake

Author: mha

Date: 2008-09-19 18:37:05 +0100 (Fri, 19 Sep 2008)

New Revision: 7486

Revision summary: http://svn.pgadmin.org/cgi-bin/viewcvs.cgi/?rev=7486&view=rev

Log:
For some reason, previous commit refused to include this directory.

Adds required modules for cmake build.

Added:
trunk/pgadmin3/cmake/
trunk/pgadmin3/cmake/FindPG.cmake
trunk/pgadmin3/cmake/FindWX.cmake


Re: [pgsql-www] Download strategy

Dave Page wrote:
> I must say I'm a little disappointed about the current discussion on
> how the downloads are currently organised. The current layout was
> discussed with numerous members of the webteam, both on and off-list
> before it was implemented, and was done so based on feedback from
> users and third parties who were able to provide useful hints through
> their own dealings with users and potential users.
>
> The original download area was confusing. We had links on the
> homepages that pointed to source code and windows binaries. We had
> multiple pages linking to related projects, and we had a download page
> that linked into parts of our FTP site, as well as a largely unmanaged
> list of third party sites. We regularly received emails asking
> where/what people needed to download.
>
I agree that the current page is better. I (mistakenly) thought that
the information form (the one that was/is not required) was presented on
downloads of the packages listed in the community section. I'm sorry to
have caused so much strife...

I do believe that the current strategy is better, and with your recent
clarification on upgrades/maintenance of the one-click installers, I'm in
favor of pushing them. I wasn't aware of Josh's (Berkus) reasoning with
regard to getting more "non core" stuff in the install, and I think that
makes sense as well.

I personally am not a fan of the commercial distributions being anything
more than a simple download (i.e., I think that the form, though not
required, isn't a good thing - from the community perspective), but I
think I'm in the minority, so I'll shut up about it.

In defense of myself, I don't think I ever intimated that any form
asking for information is/was required....

Dave, on a more personal note: I applaud your initiative and all the
work I am sure it took to make everything seamless and
integrated as a whole. I think it does make stuff easier to find,
and will, overall, provide the community with a better experience. The
applause goes to anyone that helped as well :-)

thanks
> The revised strategy included a number of ideas to improve matters:
>
> - *All* external download links should point to /download, except
> where intentionally pointing to a specific package.
>
> - Browsing of the FTP area should be a last resort for the user, never
> something we direct them to do.
>
> - All third-party products and add-ons etc. should be moved into the
> new software catalogue.
>
> - All third party 'non-community-standard' PostgreSQL distributions
> (e.g. Postgres Plus, BitNami, Bizgres) would be moved to a secondary
> list under the main server downloads.
>
> - 'Community standard' PostgreSQL distributions would be given
> top-most listing on the download page, categorised by operating
> system. These packages come from postgresql.org and a variety of third
> party sites.
>
> - Within each operating system category, downloads would be listed in
> order of ease of use for the complete novice and then alphabetically.
> This is because it was perceived that the majority of 'what do I
> download' questions came from the real novices, for whom a one-click
> installer is easier to understand than a long list of RPMs, DEBs or
> ports, most of which they won't need. The more experienced users will
> naturally choose the platform-native packages anyway, as that's what
> they will be looking for.
>
> And guess what? It's worked. *All* the feedback I've received has
> commented on how it's far, far easier to find the appropriate
> downloads now, and since the changes were implemented, I don't think
> I've seen a single 'what/where do I download' email.
>
>


--
Chander Ganesan
Open Technology Group, Inc.
One Copley Parkway, Suite 210
Morrisville, NC 27560
919-463-0999/877-258-8987
http://www.otg-nc.com


--
Sent via pgsql-www mailing list (pgsql-www@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-www

Re: [pgadmin-hackers] pgadmin and cmake

Magnus Hagander wrote:
> [...]
> A super-quick primer to get going. First of all, cmake "prefers"
> building outside the source directory, so here's a typical way to do it
> (assuming your pgadmin directory is "pgadmin3"):
>
> mkdir ../pgadmin3-build
> cd ../pgadmin3-build
> cmake -D CMAKE_INSTALL_PREFIX=/tmp/pgadmin_test_install ../pgadmin3
> make
>

I tried the cmake command on a Kubuntu 8.04 (the one I also use for
pgAdmin's development). I got a few error messages (see the attached
file). I don't really know what all this means. Perhaps you do?


--
Guillaume.
http://www.postgresqlfr.org
http://dalibo.com

Re: [pgadmin-hackers] Dialogue issue

Dave Page wrote:
> On Thu, Sep 18, 2008 at 11:02 PM, Guillaume Lelarge
> <guillaume@lelarge.info> wrote:
>>> Certainly better - I think it perhaps needs the same spacing added
>>> again though? What do you think?
>>>
>> Yes, this is much better. See attached patch.
>
> Yup - I've tweaked it a little more (put the checkbox under the
> password box) and committed. Feel free to tweak some more if you don't
> like what I did :-)
>


It seems good to me.

Thanks.


--
Guillaume.
http://www.postgresqlfr.org
http://dalibo.com


[pgus-general] Nominations for PgUS Board: Weekly update

Greetings, y'all! This is the weekly update on the state of
nominations for the PgUS board.

Currently, we have five nominations for the four open seats; the
following people have accepted their nominations:

# # # # # # # #

Richard Broersma, Jr. richard.broersma@gmail.com

Andrew Dunstan andrew@dunslane.net

Ned Lilly ned@xtuple.com

Greg Subino Mullane greg@endpoint.com

Robert Treat xzilla@users.sourceforge.net

# # # # # # # #

Remember, there's still time to get your nominations in; please
submit nominations (of yourself, or others) to:

secretary@postgresql.us

I will contact the nominees (to see if they accept the nomination) and
report weekly to pgus-general the list of current nominees.

Nominations will close on September 30th.

Also, remember: You can now use the following URL to become a member
of the United States PostgreSQL Association (PgUS):

https://www.postgresql.us/join

Note the special combo rate for PgUS professional membership and
PostgreSQL Conference West (October 10-12) registration; you can view
a partial list of the West talks here:

http://www.postgresqlconference.org/west08/talks/

Thanks, everyone!

---Michael Brewer
Secretary, PgUS
mbrewer@gmail.com
secretary@postgresql.us


Re: [NOVICE] Moving data from one set of tables to another?

On Fri, Sep 19, 2008 at 12:04 PM, Howard Eglowstein
<howard@yankeescientific.com> wrote:
> Yes, I have been deleting them as I go. I thought about running one pass to
> move the data over and a second one to then delete the records. The data in
> two of the tables is only loosely linked to the data in the first by the
> value in one column, and I was concerned about how to know where to restart
> the process if it stopped and I had to restart it later. Deleting the three
> rows after the database reported successfully writing the three new ones
> seemed like a good idea at the time. I don't want to stop the process now,
> but I'll look at having the program keep track of its progress and then go
> back and delete the old data when it's done.
>
> And yes, I do have a complete backup of the data from before I started any
> of this. I can easily go back to where I was and try again or tweak the
> process as needed. The database tuning is a problem I think we have from
> before this procedure and I'll have to look at again after this data is
> moved around.

You might want to do all of this--inserts and deletes--within a
transaction. Then, if ANY step fails, the entire process can be
rolled back.
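As a concrete sketch of that suggestion (the table names and the id value are taken from the examples earlier in this thread; treat this as illustrative, not as the original poster's exact code):

```sql
-- Move one record's rows from the data_* tables to the new_* tables
-- atomically: if any statement fails, the transaction aborts and a
-- ROLLBACK leaves both sets of tables untouched.
BEGIN;
INSERT INTO new_a SELECT * FROM data_a WHERE id = 1234;
INSERT INTO new_b SELECT * FROM data_b WHERE id = 1234;
INSERT INTO new_c SELECT * FROM data_c WHERE id = 1234;
DELETE FROM data_a WHERE id = 1234;
DELETE FROM data_b WHERE id = 1234;
DELETE FROM data_c WHERE id = 1234;
COMMIT;
```

Everything between BEGIN and COMMIT is all-or-nothing, so there is never a window in which a row exists in neither (or both) sets of tables.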

> Carol Walter wrote:
>>
>> Database tuning can really be an issue. I have a development copy and a
>> production copy of most of my databases. They are on two different
>> machines. The databases used to be tuned the same way; however, one of the
>> machines has more processing power and one has more memory. When we retuned
>> the databases to take advantage of each machine's strong points, it decreased
>> the time it took to run some queries by 400%.
>>
>> Carol
>>
>> P.S. If I understand your process, you're deleting the records as you
>> go; that would make me really nervous. As soon as you start, you no longer
>> have an intact table that has all the data in it. While modern database
>> engines do a lot to protect your data, there is always some quirk that can
>> happen. If you have enough space, you might consider running the delete
>> after the tables are created.
>>
>>
>> On Sep 19, 2008, at 11:07 AM, Howard Eglowstein wrote:
>>
>>> There are a lot of issues at work here. The speed of the machine, the
>>> rest of the machine's workload, the database configuration, etc. This
>>> machine is about 3 years old and not as fast as a test machine I have at my
>>> desk. It's also running three web services and accepting new data into the
>>> current year's tables at the rate of one set of rows every few seconds. The
>>> database when I started didn't have any indices applied. I indexed a few
>>> columns which seemed to help tremendously (a factor of 10 at least) and
>>> perhaps a few more might help.
>>>
>>> Considering that searching the tables now with the data split into 3 rows
>>> takes a minute or more to search the whole database, I suspect that there's
>>> still organizational issues that could be addressed to speed up all PG
>>> operations. I'm far more concerned with robustness and I'm not too keen on
>>> trying too many experiments until I get the data broken up and backed up
>>> again.
>>>
>>> I doubt this machine could perform 7 SQL operations on 1.5 million rows
>>> in each of 3 tables in a few seconds or minutes on a good day, with the
>>> wind, rolling down hill. I'd like to be proven wrong though...
>>>
>>> Howard
>>>
>>> Sean Davis wrote:
>>>>
>>>> On Fri, Sep 19, 2008 at 10:48 AM, Howard Eglowstein
>>>> <howard@yankeescientific.com> wrote:
>>>>
>>>>> Absolutely true, and if the data weren't stored on the same machine
>>>>> which is
>>>>> running the client, I would have worked harder to combine statements.
>>>>> In
>>>>> this case though, the server and the data are on the same machine and
>>>>> the
>>>>> client application doing the SELECT, INSERT and DELETEs is also on the
>>>>> same
>>>>> machine.
>>>>>
>>>>> I'd like to see how to have done this with combined statements if I
>>>>> ever
>>>>> have to do it again in a different setup, but it is working well now.
>>>>> It's
>>>>> moved about 1/2 million records so far since last night.
>>>>>
>>>>
>>>> So the 150ms was per row? Not to belabor the point, but I have done
>>>> this with tables with tens-of-millions of rows in the space of seconds
>>>> to minutes (for the entire move, not per row), depending on the exact
>>>> details of the table(s). No overnight involved. The network is one
>>>> issue (which you have avoided by being local), but the encoding and
>>>> decoding overhead to go to a client is another one that is entirely
>>>> avoided. When you have some free time, do benchmark, as I think the
>>>> difference could be substantial.
>>>>
>>>> Sean
>>>>
>>>>
>>>>> Sean Davis wrote:
>>>>>
>>>>>> On Fri, Sep 19, 2008 at 7:42 AM, Howard Eglowstein
>>>>>> <howard@yankeescientific.com> wrote:
>>>>>>
>>>>>>
>>>>>>> So you'd agree then that I'll need 7 SQL statements but that I could
>>>>>>> stack
>>>>>>> the INSERT and the first SELECT if I wanted to? Cool. That's what I
>>>>>>> ended
>>>>>>> up
>>>>>>> with in C code and it's working pretty well. I did some indexing on
>>>>>>> the
>>>>>>> database and got the whole transaction down to about 150ms for the
>>>>>>> sequence.
>>>>>>> I guess that's as good as it's going to get.
>>>>>>>
>>>>>>>
>>>>>> Keep in mind that the INSERT [...] SELECT [...] is done server-side,
>>>>>> so the data never goes over the wire to the client. This is very
>>>>>> different than doing the select, accumulating the data, and then doing
>>>>>> the insert and is likely to be much faster, relatively. 150ms is
>>>>>> already pretty fast, but the principle of doing as much on the server
>>>>>> as possible is an important one when looking for efficiency,
>>>>>> especially when data sizes are large.
>>>>>>
>>>>>> Glad to hear that it is working.
>>>>>>
>>>>>> Sean
>>>>>>
>>>>>>
>>>>>>
>>>>>>> Sean Davis wrote:
>>>>>>>
>>>>>>>
>>>>>>>> On Thu, Sep 18, 2008 at 7:28 PM, Howard Eglowstein
>>>>>>>> <howard@yankeescientific.com> wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>> What confuses me is that I need to do the one select with all three
>>>>>>>>> tables
>>>>>>>>> and then do three inserts, no? The results is that the 150 fields I
>>>>>>>>> get
>>>>>>>>> back
>>>>>>>>> from the select have to be split into 3 groups of 50 fields each
>>>>>>>>> and
>>>>>>>>> then
>>>>>>>>> written into three tables.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>> You do the insert part of the command three times, once for each new
>>>>>>>> table, so three separate SQL statements. The select remains
>>>>>>>> basically
>>>>>>>> the same for all three, with only the column selection changing
>>>>>>>> (data_a.* when inserting into new_a, data_b.* when inserting into
>>>>>>>> new_b, etc.). Just leave the ids the same as in the first set of
>>>>>>>> tables. There isn't a need to change them in nearly every case. If
>>>>>>>> you need to add a new ID column, you can do that as a serial column
>>>>>>>> in
>>>>>>>> the new tables, but I would stick to the original IDs, if possible.
>>>>>>>>
>>>>>>>> Sean
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>> What you're suggesting is that there is some statement which could
>>>>>>>>> do
>>>>>>>>> the
>>>>>>>>> select and the three inserts at once?
>>>>>>>>>
>>>>>>>>> Howard
>>>>>>>>>
>>>>>>>>> Sean Davis wrote:
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>> You might want to look at insert into ... select ...
>>>>>>>>>>
>>>>>>>>>> You should be able to do this with 1 query per new table (+ the
>>>>>>>>>> deletes, obviously). For a few thousand records, I would expect
>>>>>>>>>> that
>>>>>>>>>> the entire process might take a few seconds.
>>>>>>>>>>
>>>>>>>>>> Sean
>>>>>>>>>>
>>>>>>>>>> On Thu, Sep 18, 2008 at 6:39 PM, Howard Eglowstein
>>>>>>>>>> <howard@yankeescientific.com> wrote:
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>> Somewhat empty, yes. The single set of 'data_' tables contains 3
>>>>>>>>>>> years
>>>>>>>>>>> worth
>>>>>>>>>>> of data. I want to move 2 years worth out into the 'new_' tables.
>>>>>>>>>>> When
>>>>>>>>>>> I'm
>>>>>>>>>>> done, there will still be 1 year's worth of data left in the
>>>>>>>>>>> original
>>>>>>>>>>> table.
>>>>>>>>>>>
>>>>>>>>>>> Howard
>>>>>>>>>>>
>>>>>>>>>>> Carol Walter wrote:
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>> What do you want for your end product? Are the old tables empty
>>>>>>>>>>>> after
>>>>>>>>>>>> you
>>>>>>>>>>>> put the data into the new tables?
>>>>>>>>>>>>
>>>>>>>>>>>> Carol
>>>>>>>>>>>>
>>>>>>>>>>>> On Sep 18, 2008, at 3:02 PM, Howard Eglowstein wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>> I have three tables called 'data_a', 'data_b' and 'data_c'
>>>>>>>>>>>>> which
>>>>>>>>>>>>> each
>>>>>>>>>>>>> have 50 columns. One of the columns in each is 'id' and is used
>>>>>>>>>>>>> to
>>>>>>>>>>>>> keep
>>>>>>>>>>>>> track of which data in data_b and data_c corresponds to a row
>>>>>>>>>>>>> in
>>>>>>>>>>>>> data_a. If
>>>>>>>>>>>>> I want to get all of the data in all 150 fields for this month
>>>>>>>>>>>>> (for
>>>>>>>>>>>>> example), I can get it with:
>>>>>>>>>>>>>
>>>>>>>>>>>>> select * from (data_a, data_b, data_c) where
>>>>>>>>>>>>> data_a.id=data_b.id
>>>>>>>>>>>>> AND
>>>>>>>>>>>>> data_a.id = data_c.id AND timestamp >= '2008-09-01 00:00:00'
>>>>>>>>>>>>> and
>>>>>>>>>>>>> timestamp
>>>>>>>>>>>>> <= '2008-09-30 23:59:59'
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>>>> What I need to do is execute this search which might return
>>>>>>>>>>>>> several
>>>>>>>>>>>>> thousand rows and write the same structure into 'new_a',
>>>>>>>>>>>>> 'new_b'
>>>>>>>>>>>>> and
>>>>>>>>>>>>> 'new_c'. What i'm doing now in a C program is executing the
>>>>>>>>>>>>> search
>>>>>>>>>>>>> above.
>>>>>>>>>>>>> Then I execute:
>>>>>>>>>>>>>
>>>>>>>>>>>>> INSERT INTO data_a (timestamp, field1, field2 ...[imagine 50 of
>>>>>>>>>>>>> them])
>>>>>>>>>>>>> VALUES ('2008-09-01 00:00:00', 'ABC', 'DEF', ...);
>>>>>>>>>>>>> Get the ID that was assigned to this row since 'id' is a serial
>>>>>>>>>>>>> field
>>>>>>>>>>>>> and
>>>>>>>>>>>>> the number is assigned sequentially. Say it comes back as '1'.
>>>>>>>>>>>>> INSERT INTO data_b (id, field1, field2 ...[imagine 50 of them])
>>>>>>>>>>>>> VALUES
>>>>>>>>>>>>> ('1', 'ABC', 'DEF', ...);
>>>>>>>>>>>>> INSERT INTO data_c (id, field1, field2 ...[imagine 50 of them])
>>>>>>>>>>>>> VALUES
>>>>>>>>>>>>> ('1', 'ABC', 'DEF', ...);
>>>>>>>>>>>>>
>>>>>>>>>>>>> That moves a copy of the three rows of data form the three
>>>>>>>>>>>>> tables
>>>>>>>>>>>>> into
>>>>>>>>>>>>> the three separate new tables.
>>>>>>>>>>>>> From the original group of tables, the id for these rows was,
>>>>>>>>>>>>> let's
>>>>>>>>>>>>> say,
>>>>>>>>>>>>> '1234'. Then I execute:
>>>>>>>>>>>>>
>>>>>>>>>>>>> DELETE FROM data_a where id='1234';
>>>>>>>>>>>>> DELETE FROM data_b where id='1234';
>>>>>>>>>>>>> DELETE FROM data_c where id='1234';
>>>>>>>>>>>>>
>>>>>>>>>>>>> That deletes the old data.
>>>>>>>>>>>>>
>>>>>>>>>>>>> This works fine and gives me exactly what I wanted, but is
>>>>>>>>>>>>> there a
>>>>>>>>>>>>> better
>>>>>>>>>>>>> way? This is 7 SQL calls and it takes about 3 seconds per moved
>>>>>>>>>>>>> record
>>>>>>>>>>>>> on
>>>>>>>>>>>>> our Linux box.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Any thoughts or suggestions would be appreciated.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>>
>>>>>>>>>>>>> Howard
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>
>>>>>>
>>>>
>>>
>>>
>>
>>
>
>
>

--
Sent via pgsql-novice mailing list (pgsql-novice@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-novice

Re: [GENERAL] setting Postgres client

YES! Done - my listen addresses was the default.

Thanks Richard!

Nina
-----Original Message-----
From: Richard Huxton [mailto:dev@archonet.com]
Sent: September 19, 2008 11:57
To: Markova, Nina
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] setting Postgres client

Markova, Nina wrote:
>
> Thanks Richard.
>
>
> I specified the host IP ( I use the default 5432 port), got error:
> psql: could not connect to server: Connection refused
> Is the server running on host "192.168.XX.XXX" and accepting
> TCP/IP connections on port 5432?
>
> The only tcp lines in my postgres.conf are
> #tcp_keepalives_idle = 0 # TCP_KEEPIDLE, in seconds;
> # 0 selects the system default
> #tcp_keepalives_interval = 0 # TCP_KEEPINTVL, in seconds;
> # 0 selects the system default
> #tcp_keepalives_count = 0 # TCP_KEEPCNT;
> # 0 selects the system default

> Should I change something here?

Check "listen_addresses" and "port" look OK. You're probably only
listening to localhost.

You can test by telnet-ing to port 5432 or using lsof / netstat to see
what connections you have open in that zone.
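For reference, the relevant postgresql.conf lines look roughly like this (a sketch only; the value shown is illustrative, and the server must be restarted after changing listen_addresses):

```
# postgresql.conf
listen_addresses = '*'   # default is 'localhost', which refuses remote TCP connections
port = 5432
```

From the client machine, `telnet 192.168.XX.XXX 5432` connecting (rather than being refused) confirms the server is reachable on that port.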

--
Richard Huxton
Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [HACKERS] Proposal of SE-PostgreSQL patches (for CommitFest:Sep)

Robert Haas wrote:
>> [2] Make a new implementation of OS-independent fine grained access control
>>
>> If it is really really necessary, I may try to implement a new separated
>> fine-grained access control mechanism due to the CommitFest:Nov.
>> However, we don't have enough days to develop one more new feature from
>> the scratch by the deadline.
>
> +1.
>
> ...Robert

It's too early to vote. :-)

The second and third options have a prerequisite.
Their purpose is to match the granularity of access controls
provided by SE-PostgreSQL and native PostgreSQL. However, I have
not seen a clear reason why these different security mechanisms
have to have the same granularity of access controls.

As I mentioned before, it is quite natural that different security
mechanisms provide their access controls at different granularities,
as is widely accepted in Linux.

The reason is still unclear to me.

Thanks,
--
KaiGai Kohei <kaigai@kaigai.gr.jp>


Re: [HACKERS] gsoc, oprrest function for text search take 2

Tom Lane wrote:
> Jan Urbański <j.urbanski@students.mimuw.edu.pl> writes:
>> ju219721@students.mimuw.edu.pl wrote:
>> Well whaddya know. It turned out that my new company has a
>> 'Fridays-are-for-any-opensource-hacking-you-like' policy, so I got a
>> full day to work on the patch.
>
> Hm, does their name start with G?

No ;) It's called Flumotion (http://www.flumotion.com/eng/).

>> Attached is a version that stores the minimal and maximal frequencies in
>> the Numbers array, has the aforementioned assertion and more nicely
>> ordered functions in ts_selfuncs.c.
>
> Excellent, I'll get to work on this version.

Great, thanks.

Jan

--
Jan Urbanski
GPG key ID: E583D7D2

ouden estin


Re: [HACKERS] gsoc, oprrest function for text search take 2

Jan Urbański <j.urbanski@students.mimuw.edu.pl> writes:
> ju219721@students.mimuw.edu.pl wrote:
> Well whaddya know. It turned out that my new company has a
> 'Fridays-are-for-any-opensource-hacking-you-like' policy, so I got a
> full day to work on the patch.

Hm, does their name start with G?

> Attached is a version that stores the minimal and maximal frequencies in
> the Numbers array, has the aforementioned assertion and more nicely
> ordered functions in ts_selfuncs.c.

Excellent, I'll get to work on this version.

regards, tom lane


[COMMITTERS] pgsql: Improve the recently-added libpq events code to provide more

Log Message:
-----------
Improve the recently-added libpq events code to provide more consistent
guarantees about whether event procedures will receive DESTROY events.
They no longer need to defend themselves against getting a DESTROY
without a successful prior CREATE.

Andrew Chernow

Modified Files:
--------------
pgsql/doc/src/sgml:
libpq.sgml (r1.261 -> r1.262)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/doc/src/sgml/libpq.sgml?r1=1.261&r2=1.262)
pgsql/src/interfaces/libpq:
fe-exec.c (r1.198 -> r1.199)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/interfaces/libpq/fe-exec.c?r1=1.198&r2=1.199)
libpq-events.c (r1.1 -> r1.2)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/interfaces/libpq/libpq-events.c?r1=1.1&r2=1.2)
libpq-int.h (r1.132 -> r1.133)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/interfaces/libpq/libpq-int.h?r1=1.132&r2=1.133)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

Re: [GENERAL] How to change log file language?

"Leif B. Kristensen" wrote:

>I don't know how this is handled in Windows, but on a Linux computer you
>can enter the directory /usr/local/share/locale/de/LC_MESSAGES/ and
>just rename or delete the file psql.mo.

Thanks for the tip: after renaming the folder
C:\Program Files\PostgreSQL\8.3\share\locale\de
to "de_" I get the English texts. But I can't imagine that the language
cannot be changed after the cluster has been initialized.

Any other suggestions?

Rainer

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [ADMIN] Regaining superuser access

On Fri, September 19, 2008 07:39, Scott Marlowe wrote:
> On Fri, Sep 19, 2008 at 2:26 AM, Bernt Drange <badrange@gmail.com> wrote:
>> On Sep 18, 7:03 pm, alvhe...@commandprompt.com (Alvaro Herrera) wrote:
>>> Bernt Drange wrote:
>>>
>>> > After a lot of fiddling with being able to enter single user mode on
>>> a
>>> > windows machine (I had to figure out how to run the command line as
>>> > the correct user, then for some reason -D didn't work, but SET
>>> > PGDATA=xxx worked), I finally managed to fix my problem.
>>>
>>> Hmm, the -D thing not working should probably be studied -- perhaps
>>> we're missing escaping something somewhere. Does the PGDATA path
>>> contain spaces or weird chars?
>>
>> From memory the path was something like: F:\Postgresql Database\data.
>> I quoted it with double quotes. Without -D postgres.exe complained
>> about not finding the data path, with it postgres.exe complained about
>> not finding the config file, stating that it looked in (from vague
>> memory) F:\Postgresql Database\data\postgres\somethingmore. Adding the
>> --config-file parameter didn't help.
>>
>> Is this enough information for you to start digging a bit more? If
>> not, I might find the exact messages, but I'm reluctant to do it on
>> this production database..
>
> I'm pretty sure the problem is with the space between Postgresql and
> Database. Not sure if it's fixed in later releases or not.
>
Part of the problem may be the embedded space, as was already mentioned
(even though it shouldn't be an issue).

A good test would be to use the 8.3 directory naming convention, which you
can get on Windows with the "dir /x" command. On my system,
"C:\Program Files\" is shortened to "C:\PROGRA~1\". Obviously, you'd have
to look at every directory in the fully qualified path to build the
complete 8.3 pathname.
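The quoting issue being discussed can be sketched portably (this does not
reproduce the Windows-specific postgres behaviour, and the directory name
below is invented): a path containing a space must reach the program as a
single argv element, while any naive whitespace split breaks it in two.

```python
# Portable illustration of the space-in-path quoting problem; the
# "Postgresql Database" directory name is invented for the demo.
import pathlib
import subprocess
import sys
import tempfile

root = pathlib.Path(tempfile.mkdtemp())
datadir = root / "Postgresql Database" / "data"
datadir.mkdir(parents=True)

# Passing the path as one list element keeps the space intact: the child
# process sees the full path as a single argument.
out = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.argv[1])", str(datadir)],
    capture_output=True, text=True,
).stdout.strip()
print(out == str(datadir))        # the child saw the complete path

# A naive whitespace split, by contrast, yields two fragments:
print(len(str(datadir).split()))  # 2 pieces, not 1
```

The same reasoning is why the 8.3 short name works around the problem: it
simply removes the space from the path altogether.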

And I'll go back to lurking and learning on the list.

Tim
KB0ODU

--
Timothy J. Bruce

visit my Website at: http://www.tbruce.com
Registered Linux User #325725

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin

Re: [PERFORM] RAID arrays and performance

Matthew Wakeling <matthew@flymine.org> writes:
> In order to improve the performance, I made the system look ahead in the
> source, in groups of a thousand entries, so instead of running
>
> SELECT * FROM table WHERE field = 'something';
>
> a thousand times, we now run
>
> SELECT * FROM table WHERE field IN ('something', 'something else', ...);
>
> with a thousand things in the IN. Very simple query. It does run faster
> than the individual queries, but it still takes quite a while. Here is an
> example query:

Have you considered temporary tables? Use COPY to put everything you want
to query into a temporary table, then SELECT to join against it, pulling
all of the results and doing additional processing (UPDATE) as you pull?
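A minimal sketch of that pattern, using SQLite as a stand-in for
PostgreSQL (the table and column names are invented; in PostgreSQL the key
load would use COPY rather than row-by-row INSERTs):

```python
# Temp-table join pattern: bulk-load the lookahead keys into a temporary
# table and join once, instead of a thousand point queries or a huge IN.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (field TEXT, payload TEXT)")
conn.executemany("INSERT INTO items VALUES (?, ?)",
                 [("k%d" % i, "value-%d" % i) for i in range(10000)])

wanted = ["k%d" % i for i in range(0, 10000, 10)]  # the lookahead batch

conn.execute("CREATE TEMP TABLE lookup (field TEXT)")
conn.executemany("INSERT INTO lookup VALUES (?)", [(w,) for w in wanted])

rows = conn.execute(
    "SELECT i.field, i.payload FROM items i JOIN lookup l USING (field)"
).fetchall()
print(len(rows))  # one row per matched key
```

The planner can then pick a hash or merge join over the whole batch, which
is exactly what a long IN list tends to prevent.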

Cheers,
mark
--
Mark Mielke <mark@mielke.cc>

Re: [HACKERS] [PATCHES] libpq events patch (with sgml docs)

Andrew Chernow <ac@esilo.com> writes:
>> To build on this analogy, PGEVT_CONNRESET is like a realloc. Meaning,
>> the initial malloc "PGEVT_REGISTER" worked but the realloc
>> "PGEVT_CONNRESET" didn't ... you still have to free "PGEVT_CONNDESTROY"
>> the initial. It's documented that way. Basically, if a register
>> succeeds, a destroy will always be sent regardless of what happens with
>> a reset.

> I attached the wrong patch. I'm sorry.

I had a further thought about this: after applying this patch, it is
essentially useless for the exposed PQmakeEmptyPGresult function to
copy events into the result. If it doesn't give them a RESULTCREATE
call, then they cannot receive RESULTCOPY or RESULTDESTROY either,
so they might as well not be there.

The argument for not having PQmakeEmptyPGresult fire RESULTCREATE still
makes sense, but I am thinking that maybe what we ought to do is expose
a new function named something like PQfireResultCreateEvents() that just
does that. This would allow an application to exactly emulate what
PQgetResult does: make an empty PGresult, fill it, then fire the create
events.

I'll go ahead and apply this patch in a little bit, but if you concur
with the above reasoning, please put together a follow-on patch to add
such a function.

regards, tom lane


Re: [HACKERS] gsoc, oprrest function for text search take 2

ju219721@students.mimuw.edu.pl wrote:
> Quoting Tom Lane <tgl@sss.pgh.pa.us>:
>
>> I wrote:
>>> ... One possibly
>>> performance-relevant point is to use DatumGetTextPP for detoasting;
>>> you've already paid the costs by using VARDATA_ANY etc, so you might
>>> as well get the benefit.
>>
>> Actually, wait a second. That code doesn't work at all on toasted data,
>> because it's trying to use VARSIZE_ANY_EXHDR() before detoasting.
>> That would give you the physical datum size (eg the size of the toast
>> pointer), not the number you need.
>>
>> However, this is actually not a problem because we know that the data
>> came from an array in pg_statistic, which means the individual members
>> *can't be toasted*. At least they can't be compressed or out-of-line.
>> We'd do that at the array level, it's not sensible to do it on an
>> individual array member.
>>
>> I think that right at the moment the array stuff doesn't permit short
>> headers either, but it would make sense to relax that someday. So I'd
>> recommend that your code allow either regular or short headers, but not
>> worry about compression or out-of-line storage.
>>
>> Which boils down to: keep using VARSIZE_ANY_EXHDR/VARDATA_ANY, but
>> forget the "detoasting" step. Maybe put in
>> Assert(!VARATT_IS_COMPRESSED(datum) && !VARATT_IS_EXTERNAL(datum))
>> instead.

Well whaddya know. It turned out that my new company has a
'Fridays-are-for-any-opensource-hacking-you-like' policy, so I got a
full day to work on the patch.
Attached is a version that stores the minimal and maximal frequencies in
the Numbers array, has the aforementioned assertion and more nicely
ordered functions in ts_selfuncs.c.

I tested it with oprofile and
pgbench -n -f tssel-bench.sql -t 1000 postgres
with tssel-bench.sql containing
select * from manuals where tsvector @@ to_tsquery('foo');

"manuals" has ~700 rows and 'foo' does not appear in any of the lexemes.

The results are:
=== CVS HEAD ===
scaling factor: 1
query mode: simple
number of clients: 1
number of transactions per client: 1000
number of transactions actually processed: 1000/1000
tps = 13.399584 (including connections establishing)
tps = 13.399972 (excluding connections establishing)

74069 34.7779 pglz_decompress
38560 18.1052 tsvectorout
7688 3.6098 pg_mblen
5366 2.5195 hash_search_with_hash_value
4833 2.2693 pg_utf_mblen
4718 2.2153 AllocSetAlloc
4041 1.8974 index_getnext
3100 1.4556 LWLockAcquire
3056 1.4349 hash_any
2843 1.3349 LWLockRelease
2611 1.2260 AllocSetFree
2126 0.9982 tsCompareString
2121 0.9959 _bt_compare
1830 0.8592 LockAcquire
1517 0.7123 toast_fetch_datum
1503 0.7057 .plt
1338 0.6282 _bt_checkkeys
1332 0.6254 FunctionCall2
1233 0.5789 ReadBuffer_common
1185 0.5564 slot_deform_tuple
1157 0.5433 TParserGet
1123 0.5273 LockRelease


=== PATCH ===
transaction type: Custom query
scaling factor: 1
query mode: simple
number of clients: 1
number of transactions per client: 1000
number of transactions actually processed: 1000/1000
tps = 13.309346 (including connections establishing)
tps = 13.309761 (excluding connections establishing)

171514 35.0802 pglz_decompress
87231 17.8416 tsvectorout
17107 3.4989 pg_mblen
12514 2.5595 hash_search_with_hash_value
11124 2.2752 pg_utf_mblen
10739 2.1965 AllocSetAlloc
8534 1.7455 index_getnext
7460 1.5258 LWLockAcquire
6876 1.4064 LWLockRelease
6622 1.3544 hash_any
5773 1.1808 AllocSetFree
5210 1.0656 _bt_compare
4849 0.9918 tsCompareString
4043 0.8269 LockAcquire
3535 0.7230 .plt
3246 0.6639 _bt_checkkeys
3170 0.6484 toast_fetch_datum
3057 0.6253 FunctionCall2
2815 0.5758 ReadBuffer_common
2767 0.5659 TParserGet
2605 0.5328 slot_deform_tuple
2567 0.5250 MemoryContextAlloc

Cheers,
Jan

--
Jan Urbanski
GPG key ID: E583D7D2

ouden estin