Friday, September 12, 2008

Re: [JDBC] German translation fixes

Hello Daniel,

Daniel Migowski wrote:
> While having a small look over the file, this typo hit me:
>
> msgstr "Beim Erstellen eines Stack-Traces für {0} trat eine Exceptin
> auf: {1}"
>
> Exception is missing the 'o'.

*ahrgh*

> [...]

I've pinned that one down, too (as well as the others still hidden in
the file) and attached a corrected version to this message.

André

(Reading the whole file through reminds me that I originally wanted to
nag Kris about some PGSQL terms whose meaning I don't know, so that I
could supply a 100% translation... There are still some uncertainties.
So much to do, so little time.)

Re: [PATCHES] hash index improving v3

On Fri, Sep 12, 2008 at 3:14 AM, Zdenek Kotala <Zdenek.Kotala@sun.com> wrote:
> Alex Hunsaker napsal(a):
>>
>> On Wed, Sep 10, 2008 at 10:27 AM, Alex Hunsaker <badalex@gmail.com> wrote:
>>>
>>> On Wed, Sep 10, 2008 at 8:47 AM, Zdenek Kotala <Zdenek.Kotala@sun.com>
>>> wrote:
>>>>
>>>> What locale did you use? It would be nice to also have a comparison
>>>> between C and any UTF8 locale. I think we should see a big benefit when a
>>>> non-C locale is used.
>>>
>>> Err, yes, this was UTF8. I'll see about doing a C locale.
>>>
>>
>> And here it is with a C locale:
>>
>> pgbench -c1 -n -t100000 -f bench.sql
>> cvs head: tps = 5142.784262
>> v5: tps = 6169.405965
>
>
> If I look at both results:
>
>              C      UTF8   difference
> -----------------------------------
> cvs head:    5140   5050   -2%
> v5:          6170   5750   -7%
> improvement: 20%    14%
>
> then I wonder a little bit. I personally expected a bigger difference under
> UTF8 when comparing CVS and v5. This test also shows that locale selection
> has a bigger impact on performance in the v5 case, but the result is still
> better than cvs head.

Right, I think part of it is that I need to try again with a larger
dataset... The reason I did three runs before was that the results were
variable (v5 between 5700 and 6200). Also, the query I'm doing might be too
simplistic because it runs too fast; that's part of why I get the variable
speed between runs (I think...).

Suggestions?

--
Sent via pgsql-patches mailing list (pgsql-patches@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-patches

Re: [HACKERS] Better auth errors from libpq

Joshua D. Drake wrote:
> Heikki Linnakangas wrote:
>
>>> The only thing I would say here is that you point the URL to current
>>> which will be wrong in one release. Perhaps something that pulls the
>>> pgversion macro?
>>
>> We don't put URLs in error messages. The hint needs to be a real sentence.
>
Which is exactly why it's the first thing we do when supporting people:
point them to the URL in the docs.

Let's add a new field in error reports: errurl()
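Purely as an illustration (errurl() is only the proposal here; ereport,
errcode and errmsg are the existing machinery), a call might then look like:

    ereport(FATAL,
            (errcode(ERRCODE_INVALID_PASSWORD),
             errmsg("password authentication failed for user \"%s\"", user),
             errurl("<auth-methods page for this server's version>")));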

--
Alvaro Herrera http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] Better auth errors from libpq

Heikki Linnakangas wrote:

>> The only thing I would say here is that you point the URL to current
>> which will be wrong in one release. Perhaps something that pulls the
>> pgversion macro?
>
> We don't put URLs in error messages. The hint needs to be a real sentence.
>

Which is exactly why it's the first thing we do when supporting people:
point them to the URL in the docs.

Perhaps the middle man isn't needed.

Joshua D. Drake


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] Synchronous Log Shipping Replication

Hi,

Andrew Dunstan wrote:
> If this were a nice pluggable library I'd agree, but AFAIK it's not, and
> I don't see great value in reinventing the wheel.

I certainly agree.

However, I thought of it more like the archive_command, as proposed by
Hannu. That way we don't need to reinvent any wheel and still the
standby could trigger the base data synchronization itself.

Regards

Markus Wanner


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] Synchronous Log Shipping Replication

Csaba Nagy wrote:
> Why not have a design where the slave is in control of its own data?
> I mean the slave could ask for the base files (possibly through a
> special function deployed on the master), then ask for the WAL stream
> and so on. That would easily let a slave cascade too, as it could relay
> the WAL stream and serve the base backup too... or have a special WAL
> repository software with the same interface as a normal master, but
> having a choice of base backups and WAL streams. Plus, a slave-in-control
> approach would also allow multiple slaves at the same time for a given
> master...

I totally agree with that.

> The only downside of this approach is that the slave machine needs a
> full postgres super user connection to the master. That could be a
> security problem in certain scenarios.

I think the master-slave protocol needs to be separate from the normal
FE/BE protocol, with commands like "send a new base backup", or
"subscribe to new WAL that's generated". A master-slave connection isn't
associated with any individual database, for example. We can keep the
permissions required for establishing a master-slave connection
different from super-userness. In particular, while the slave will be
able to read all data from the whole cluster, by receiving it in the WAL
and base backups, it doesn't need to be able to modify anything in the
master.
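To illustrate the shape of such a protocol (the names below are invented for
the sketch; nothing like this exists yet):

    /* Commands a standby might issue over a dedicated master-slave protocol. */
    typedef enum ReplicationCommand
    {
        REPL_BASE_BACKUP,    /* "send me a new base backup" */
        REPL_SUBSCRIBE_WAL,  /* "stream new WAL to me as it is generated" */
        REPL_REPORT_STATUS   /* "here is how far I have applied" */
    } ReplicationCommand;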

> The master-centric scenario needs
> a connection in the other direction, which might be seen as more secure,
> I don't know for sure...

Which one initiates the connection, the master or slave, is a different
question. I believe we've all assumed that it's the slave that connects
to the master, and I think that makes the most sense.

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [NOVICE] Flat File unload/export/copy

Regina,
I wasn't aware that you could!
I just tried and that syntax gave me an error:
rel_reinsure=# copy (select * from client where name like '%BERT%') to '/tmp/stevet.clients';
ERROR:  syntax error at or near "(" at character 6
LINE 1: copy (select * from client where name like '%BERT%') to '/tm...

- I can do though:
rel_reinsure=# copy client (recno,code,name) to '/tmp/stevet.clients';

... or select all columns, but that is 'too global' and I just want WHERE name LIKE '%BERT%' type syntax.
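For what it's worth, COPY (SELECT ...) was only added in 8.2, so that syntax
error suggests an older server. On such a version, a sketch along these lines
should produce the same file (client_bert is just a name made up for the
example):

rel_reinsure=# create temp table client_bert as
rel_reinsure-#   select * from client where name like '%BERT%';
rel_reinsure=# copy client_bert to '/tmp/stevet.clients';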


On Fri, 2008-09-12 at 11:31 -0400, Obe, Regina wrote:
> And why can't you use copy? Something like below -
>
> COPY (SELECT * FROM sometable WHERE somevalue LIKE '%') TO
> '/path/to/textfile.csv'
> WITH NULL As 'NULL' CSV HEADER QUOTE AS '"';
>
> ________________________________
> From: pgsql-novice-owner@postgresql.org
> [mailto:pgsql-novice-owner@postgresql.org] On Behalf Of Steve T
> Sent: Friday, September 12, 2008 11:16 AM
> To: PostGreSQL
> Subject: [NOVICE] Flat File unload/export/copy
>
> Is there a command that copies data from a table to a flat file for
> specified selection criteria? IE akin to a copy but not dumping a whole
> file? If I'm about to do a major change to data in a table, I'd like to
> be able to keep a copy of the data before the change (I know I can
> Begin...rollback should things go wrong).
>
> Is there a command that I've missed? (Informix had an 'UNLOAD TO blah
> SELECT blah..' type command which presumably was proprietary).
>
> -----------------------------------------
> The substance of this message, including any attachments, may be
> confidential, legally privileged and/or exempt from disclosure pursuant
> to Massachusetts law. It is intended solely for the addressee. If you
> received this in error, please contact the sender and delete the
> material from any computer.


Steve Tucknott
ReTSol Ltd

DDI:         01323 488548
Mobile:     0773 671 5772

Re: [JDBC] German translation fixes

Daniel Migowski wrote:
> Oh, another one:
>
> - msgstr "Die EInfügezeile kann nicht aufgefrischt werden."
> The second I must be a small i.
>
> Daniel Migowski wrote:
>> While having a small look over the file, this typo hit me:
>>
>> msgstr "Beim Erstellen eines Stack-Traces für {0} trat eine Exceptin
>> auf: {1}"
>>
>> Exception is missing the 'o'.
>>
>> With best regards,
>> Daniel Migowski
>>> [...]

There are two more I just found. Will supply a fixed version in a few
moments.

Thanks a lot, Daniel!

André

--
Sent via pgsql-jdbc mailing list (pgsql-jdbc@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-jdbc

[pgsql-jobs] [Fwd: I'm Interested In Posting On Your Site]

This post belongs on this list.

-------- Forwarded Message --------
> From: Jeremy Mollet <jeremy.mollet@Sapphire.com>
> To: pgsql-admin@postgresql.org
> Subject: [ADMIN] I'm Interested In Posting On Your Site
> Date: 11 Sep 2008 18:09:24 -0400
>
> Hello,
>
> My name is Jeremy Mollet and I'm very interested in posting a job on
> your site. It's with a great Company and located here in the San
> Francisco Bay Area. I have included the job description below. Let me
> know the process I need to follow in order to post. Thanks,
> Job Description
> Technology Firm in Walnut Creek, CA. is looking for a Postgres SQL
> DBA who can support a Perl/Postgres based financial billing system.
> Advanced Perl scripting, database design, and strong SQL skills are
> required. The position requires the ability to work and communicate
> effectively with other professionals in Information Systems, Finance
> and Legal disciplines. Previous financial system experience is a
> plus.
>
> Requirements:
>
> Postgres (or similar database experience): 5 years
> Linux/Unix Scripting: 3 years
> Perl: 5 years
>
>
>
> Jeremy Mollet
> Technical Recruiter
> Sapphire Technologies
> 27 Maiden Lane Suite 300
> San Francisco, Ca. 94108
> Office: 415-788-8488
> Fax: 1-415-788-2592
>
> Visit www.sapphire.com today to view almost 1000 active jobs
--
Devrim GÜNDÜZ, RHCE
devrim~gunduz.org, devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr
http://www.gunduz.org

Re: [JDBC] German translation fixes

Oh, another one:

- msgstr "Die EInfügezeile kann nicht aufgefrischt werden."
The second I must be a small i.

Daniel Migowski wrote:
> While having a small look over the file, this typo hit me:
>
> msgstr "Beim Erstellen eines Stack-Traces für {0} trat eine Exceptin
> auf: {1}"
>
> Exception is missing the 'o'.
>
> With best regards,
> Daniel Migowski
>
> Andre Bialojahn wrote:
>> Dear all,
>>
>> Peter Eisentraut wrote:
>>
>>> Here is an updated de.po based on the latest version in CVS with
>>> some spelling fixes.
>>> [...]
>>>
>>
>> And I've finally managed to incorporate those changes in the latest
>> revision available through the CVS repository.
>>
>> I attached the manually updated file to this mail for you to integrate
>> it into the repository.
>>
>> Thanks a lot to Peter Eisentraut for supplying the corrections and for
>> nudging me today to finally submit the updates to the list.
>>
>> Kind regards,
>> André
>
>


--

|¯¯|¯¯| *IKOffice GmbH Daniel Migowski*
| | |/| Mail: dmigowski@ikoffice.de <mailto:dmigowski@ikoffice.de>
| | // | Nordstr. 10 Tel.: 0441 21 98 89 52
| | \\ | 26135 Oldenburg Fax.: 0441 21 98 89 55
|__|__|\| http://www.ikoffice.de Mob.: 0176 22 31 20 76


--
Sent via pgsql-jdbc mailing list (pgsql-jdbc@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-jdbc

Re: [GENERAL] TSearch2: find a QUERY that does match a single document

On Fri, 12 Sep 2008, Dmitry Koterov wrote:

> Hello.
>
> TSearch2 allows searching a table of tsvectors with a single tsquery.
> I need to solve the reverse problem.
>
> I have a large table of tsquery. I need to find all tsqueries in that table
> that match a single document tsvector:
> CREATE TABLE "test"."test_tsq" (
> "id" SERIAL,
> "q" TSQUERY NOT NULL,
> CONSTRAINT "test_tsq_pkey" PRIMARY KEY("id")
> );
>
> insert into test.test_tsq(q)
> select to_tsquery(g || 'x' || g) from generate_series(100000, 900000) as g;
>
> explain analyze
> select * from test.test_tsq
> where to_tsvector('400000x400000') @@ q

Why do you need tsvector @@ q? It's much better to use tsquery = tsquery:

test=# explain analyze select * from test_tsq where q = '400000x400000'::tsquery;
QUERY PLAN
-----------------------------------------------------------------------------------------------------------
Seq Scan on test_tsq (cost=0.00..16667.01 rows=1 width=38) (actual time=129.208..341.111 rows=1 loops=1)
Filter: (q = '''400000x400000'''::tsquery)
Total runtime: 341.134 ms
(3 rows)

Time: 341.478 ms
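If the table is going to be queried like that a lot, tsquery also has btree
comparison operators (if I remember right), so a plain index should avoid the
seq scan entirely. An untested sketch:

test=# create index test_tsq_q_idx on test_tsq (q);
test=# select * from test_tsq where q = '400000x400000'::tsquery;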


>
> This gets a strange explain analyze:
>
> QUERY PLAN
> Seq Scan on test_tsq (cost=0.00..17477.01 rows=800 width=36) (actual
> time=68.698..181.458 rows=1 loops=1)
> Filter: ('''400000x400000'':1'::tsvector @@ q)
> Total runtime: 181.484 ms
>
> No matter if I use GIST index on test_tsq.q or not: the explain analyze
> result is the same.
> So, why "rows=800"? The table contains many more rows...

'800' is the number of estimated rows, which is not good, since you got only
1 row.

Regards,
Oleg
_____________________________________________________________
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [HACKERS] Synchronous Log Shipping Replication

Heikki Linnakangas wrote:
> Simon Riggs wrote:
>> Built-in? Why? I mean make base backup using rsync. That way only
>> changed data blocks need be migrated, so much faster.
>
> Yes, what I meant is that it would be cool to have that functionality
> built-in, so that you wouldn't need to configure extra rsync scripts
> and authentication etc.
>

If this were a nice pluggable library I'd agree, but AFAIK it's not, and
I don't see great value in reinventing the wheel.

cheers

andrew

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [JDBC] German translation fixes

While having a small look over the file, this typo hit me:

msgstr "Beim Erstellen eines Stack-Traces für {0} trat eine Exceptin
auf: {1}"

Exception is missing the 'o'.

With best regards,
Daniel Migowski

Andre Bialojahn wrote:
> Dear all,
>
> Peter Eisentraut wrote:
>
>> Here is an updated de.po based on the latest version in CVS with some spelling
>> fixes.
>> [...]
>>
>
> And I've finally managed to incorporate those changes in the latest
> revision available through the CVS repository.
>
> I attached the manually updated file to this mail for you to integrate
> it into the repository.
>
> Thanks a lot to Peter Eisentraut for supplying the corrections and for
> nudging me today to finally submit the updates to the list.
>
> Kind regards,
> André


--
Sent via pgsql-jdbc mailing list (pgsql-jdbc@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-jdbc

[GENERAL] Fastest way to restore a database

I know that PostgreSQL is slow at restoring databases, but there are
some tricks you can use to speed it up. Here is a brief list I compiled
from reading the docs and some forums. Is there a definitive list of
things to do?

* Turn off fsync
So it won't flush after every commit
* Turn off full_page_writes
Skip the full-page images normally written to the WAL after checkpoints.
We don't need that crash protection during a restore. Future versions of
postgres may let you turn off WAL entirely
* Turn off archive_mode
This disables the archiving of WAL logs
* Increase the checkpoint_segments parameter (the default is 3 – so...
maybe 10?)
Increases the number of transactions that can happen before a WAL checkpoint
* The --single-transaction parameter to pg_restore *might* speed it up
One transaction is more efficient, but an entire DB restore might be too
big of a transaction. So I'm not so sure about this option
* Increase the maintenance_work_mem setting to 512MB
Gives more memory to CREATE INDEX commands, which are part of the restore
process
* (PostgreSql 8.3 only) Turn off synchronous_commit
This makes it so that the database won't wait for each commit record to
be flushed to the WAL before moving on to the next operation. Again, we
don't need that durability during a restore anyway.
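Pulled together, the list above would look something like this as a
postgresql.conf fragment (a sketch for the duration of the restore only;
synchronous_commit exists in 8.3 and later):

fsync = off
full_page_writes = off
archive_mode = off
checkpoint_segments = 10
maintenance_work_mem = 512MB
#synchronous_commit = off     # 8.3 only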

Are any of the above items not going to help? Anything I'm missing? Is
there a way to disable foreign key constraints during the restore since
I know it is already a good database?


I am using postgreSQL 8.2.9 on Win32

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [GENERAL] Converting string to IN query

On Fri, Sep 12, 2008 at 04:04:18PM +0100, Richard Huxton wrote:
> Andrus wrote:
> > String contains list of document numbers (integers) like:
> >
> > '1,3,4'
> >
> > How to SELECT documents whose numbers are contained in this string.
>
> > Numbers should be passed as single string literal since FYIReporting
> > RDLEngine does not allow multivalue parameters.
>
> Hmm - might be worth bringing that to their attention.

I'm probably missing something, but does PG allow multivalue parameters either?

> Try string_to_array(). Example:
>
> SELECT * FROM generate_series(1,10) s
> WHERE s = ANY (string_to_array('1,3,5,7', ',')::int[]);

I don't think you need the string_to_array function call; an array
literal should do perfectly well here. I.e.

WHERE s = ANY ('{1,2,3}'::int[]);

the literal can of course be a parameter as well:

WHERE s = ANY ($1::int[]);
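Applied to the original question (borrowing the invoices/invoiceno names
from elsewhere in the thread), that becomes:

SELECT * FROM invoices WHERE invoiceno = ANY ('{1,3,4}'::int[]);

Note the braces: the parameter string then has to look like '{1,3,4}'
rather than '1,3,4'. If the reporting tool can only hand over '1,3,4',
the string_to_array() form above is the one to use.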


Sam

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [pgus-general] provision for donations

On Thursday 11 September 2008 21:33:41 Josh Berkus wrote:
> Josh,
>
> > "Postgresql" :)
>
> I'm just saying that the proposal seems to be to involve the various user
> groups in pg.us fundraising through distribution of shirts in return for
> pg.us donations. Or did I miss something?
>

ISTM if pg.us buys the shirts, then they would likely spread them around to
any .us based groups. If pg.eu buys them they spread them around to .eu
groups. I don't think any further link is needed between the groups and the
purchasing party, since the .us or .eu would be deciding what best meets
their organizational goals (~ spreading pg usage in the US for this group). I
suppose if you want an international distribution, you should propose pg.spi
buy shirts.

--
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL

--
Sent via pgus-general mailing list (pgus-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgus-general

Re: [JDBC] German translation fixes

Dear all,

Peter Eisentraut wrote:
> Here is an updated de.po based on the latest version in CVS with some spelling
> fixes.
> [...]

And I've finally managed to incorporate those changes in the latest
revision available through the CVS repository.

I attached the manually updated file to this mail for you to integrate
it into the repository.

Thanks a lot to Peter Eisentraut for supplying the corrections and for
nudging me today to finally submit the updates to the list.

Kind regards,
André

[ADMIN] Recommend dba maintenance tasks on a regular bases

Hello,

We are currently using Postgresql 8.3.3 on Red Hat 4 and our largest database
is around 8454 MB.

I have recommended the items below to my group, but I'm not sure whether
reindexing should be involved, since autovacuum is on.

How can I be sure autovacuuming is working fine? We haven't had any problems,
plus I can run a query that lists all the tables and shows the last run of
autovacuum and autoanalyze. Is that it?
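A check along those lines against 8.3's statistics views might look like
this sketch:

SELECT relname, last_autovacuum, last_autoanalyze
FROM pg_stat_user_tables
ORDER BY relname;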


1- pg_dump - binary dump every midday and nightly
2 - auto vacuum
autovacuum = on
log_autovacuum_min_duration = 0
autovacuum_max_workers = 3
autovacuum_naptime = 1min
autovacuum_vacuum_threshold = 50
autovacuum_vacuum_scale_factor = 0.2
autovacuum_analyze_scale_factor = 0.1
3- rotate data logs

--
Regards,

Barbara Stephenson
EDI Specialist/Programmer
Turbo, division of Ozburn-Hessey Logistics
2251 Jesse Jewell Pkwy NE
Gainesville, GA 30507
tel: (678)989-3020 fax: (404)935-6171
barbara@turbocorp.com
www.ohlogistics.com

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin

Re: [HACKERS] Move src/tools/backend/ to wiki

Peter Eisentraut wrote:
> Alvaro Herrera wrote:
>> However I wonder how much value there really is in the developer's FAQ,
>> considering that some answers seem rather poor. For example the
>> answer on ereport() was wrong, and nobody ever pointed it out. The
>> answer on palloc/pfree is very incomplete too.
>
> I think the developer's FAQ has essentially been unmaintained for many
> years. I think we should gradually migrate the content to other wiki
> pages and eventually drop the FAQ.

In a way, this has already started; some answers are now just
pointers to other wiki pages or to the docs.

I think getting rid of the FAQ completely is not necessarily a good
idea; it seems useful as a collection of interesting questions. Moving
the contents to new pages is probably OK. Also, as the answers mature
on the Wiki, perhaps it'd be possible to move them to the SGML docs (and
reduce the Wiki answer to just a pointer).

--
Alvaro Herrera http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [pgsql-es-ayuda] framework webtranslator

Alvaro Herrera wrote:
> Moises Galan wrote:
> > Hello list. Does anyone have news about the backup of the
> > webtranslator database?
>
> I spoke with Mario a couple of days ago and he told me he had received
> the most recent backup, that he was going to bring it up, and that he
> expected to have it ready this week ...

OK, it's up and running.

http://l10n.postgresql.cl/webtranslator/es

The DNS was updated recently, so if you still have the old IP address,
you can put the following line in /etc/hosts to get access:

74.53.238.210 l10n.postgresql.cl

--
Alvaro Herrera Developer, http://www.PostgreSQL.org/
"La experiencia nos dice que el hombre peló millones de veces las patatas,
pero era forzoso admitir la posibilidad de que en un caso entre millones,
las patatas pelarían al hombre" (Ijon Tichy)
--
TIP 4: Don't 'kill -9' the postmaster

[GENERAL] TSearch2: find a QUERY that does match a single document

Hello.

TSearch2 allows searching a table of tsvectors with a single tsquery.
I need to solve the reverse problem.

I have a large table of tsquery. I need to find all tsqueries in that table that match a single document tsvector:

CREATE TABLE "test"."test_tsq" (
  "id" SERIAL,
  "q" TSQUERY NOT NULL,
  CONSTRAINT "test_tsq_pkey" PRIMARY KEY("id")
);

insert into test.test_tsq(q)
select to_tsquery(g || 'x' || g) from generate_series(100000, 900000) as g;

explain analyze
select * from test.test_tsq
where to_tsvector('400000x400000') @@ q

This gets a strange explain analyze:

QUERY PLAN
Seq Scan on test_tsq  (cost=0.00..17477.01 rows=800 width=36) (actual time=68.698..181.458 rows=1 loops=1)
  Filter: ('''400000x400000'':1'::tsvector @@ q)
Total runtime: 181.484 ms

No matter if I use GIST index on test_tsq.q or not: the explain analyze result is the same.
So, why "rows=800"? The table contains many more rows...

Re: [HACKERS] Move src/tools/backend/ to wiki

On Fri, Sep 12, 2008 at 10:29:54AM +0300, Peter Eisentraut wrote:
> Alvaro Herrera wrote:
>> However I wonder how much value there really is in the developer's
>> FAQ, considering that some answers seem rather poor. For example
>> the answer on ereport() was wrong, and nobody ever pointed it out.
>> The answer on palloc/pfree is very incomplete too.
>
> I think the developer's FAQ has essentially been unmaintained for
> many years. I think we should gradually migrate the content to
> other wiki pages and eventually drop the FAQ.

+1 :)

Cheers,
David.
--
David Fetter <david@fetter.org> http://fetter.org/
Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter
Skype: davidfetter XMPP: david.fetter@gmail.com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] Synchronous Log Shipping Replication

Simon Riggs wrote:
> Built-in? Why? I mean make base backup using rsync. That way only
> changed data blocks need be migrated, so much faster.

Yes, what I meant is that it would be cool to have that functionality
built-in, so that you wouldn't need to configure extra rsync scripts and
authentication etc.

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [NOVICE] Flat File unload/export/copy

And why can't you use copy?

Something like below -

COPY (SELECT * FROM sometable WHERE somevalue LIKE '%') TO
'/path/to/textfile.csv'

WITH NULL As 'NULL' CSV HEADER QUOTE AS '"';




________________________________

From: pgsql-novice-owner@postgresql.org
[mailto:pgsql-novice-owner@postgresql.org] On Behalf Of Steve T
Sent: Friday, September 12, 2008 11:16 AM
To: PostGreSQL
Subject: [NOVICE] Flat File unload/export/copy


Is there a command that copies data from a table to a flat file for
specified selection criteria?
IE akin to a copy but not dumping a whole file?
If I'm about to do a major change to data in a table, I'd like to be
able to keep a copy of the data before the change (I know I can
Begin...rollback should things go wrong).

Is there a command that I've missed? (Informix had an 'UNLOAD TO blah
SELECT blah..' type command which presumably was proprietary).
-----------------------------------------
The substance of this message, including any attachments, may be
confidential, legally privileged and/or exempt from disclosure
pursuant to Massachusetts law. It is intended
solely for the addressee. If you received this in error, please
contact the sender and delete the material from any computer.


--
Sent via pgsql-novice mailing list (pgsql-novice@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-novice

Re: [HACKERS] New FSM patch

Tom Lane wrote:
> Heikki Linnakangas <heikki.linnakangas@enterprisedb.com> writes:
>> Let me describe this test case first:
>> - The test program calls RecordAndGetPageWithFreeSpace in a tight loop,
>> with random values.
>
> What's the distribution of the random values, exactly? In particular,
> how do the request sizes compare to available free space per-page?

The request and "old avail" sizes are in the range of 0-8100
(random()%8100).

> The design intent for FSM was that we'd not bother to record pages that
> have less free space than the average request size, so as to (usually)
> avoid the problem of uselessly searching a lot of entries. I can't tell
> whether your test case models that behavior at all. If it does then
> there may be something else that needs fixing.

Probably not. The test case starts with a table that's practically
empty, so all pages are put into the FSM.
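For reference, the loop described boils down to something like this sketch
(assuming the patch's RecordAndGetPageWithFreeSpace(rel, oldPage,
oldSpaceAvail, spaceNeeded) signature; the real test program may differ):

    BlockNumber page = InvalidBlockNumber;
    int         i;

    for (i = 0; i < niterations; i++)
    {
        Size oldSpaceAvail = random() % 8100;
        Size spaceNeeded   = random() % 8100;

        page = RecordAndGetPageWithFreeSpace(rel, page, oldSpaceAvail,
                                             spaceNeeded);
    }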

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] New FSM patch

Zdenek Kotala wrote:
> It looks likes that there are lot of lock issues on FSM pages. When
> number of FSM pages is increased then number of collisions is lower. It
> is probably why 2 clients significantly speed up between 33MB and 333MB.

Yes, that's what I thought as well. With table size under 33 MB, the FSM
consists of just one (bottom-level) FSM page, so every request contends
for that single page.

> I think it is time to take DTrace ;-).
> Do you have any machine with DTrace support?

No.

> If not send me your test
> suit and I will try it run on my machine.

Sure, here you are. tests.sh is the main script to run. You'll need to
adjust the paths there for your environment.

As it is, the tests will take many hours to run, so you'll probably want
to modify tests.sh and pgbenchtests.sh to reduce the number of
iterations. At least on my server, the variance in the numbers was very
small, so repeating the tests 4 times in tests.sh is probably overkill.

Thanks!

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com

[NOVICE] Flat File unload/export/copy

Is there a command that copies data from a table to a flat file for  specified selection criteria?
IE akin to a copy but not dumping a whole file?
If I'm about to do a major change to data in a table, I'd like to be able to keep a copy of the data before the change (I know I can Begin...rollback should things go wrong).

Is there a command that I've missed? (Informix had an 'UNLOAD TO blah SELECT blah..' type command which presumably was proprietary).

Re: [HACKERS] Synchronous Log Shipping Replication

On Fri, 2008-09-12 at 17:24 +0300, Hannu Krosing wrote:
> On Fri, 2008-09-12 at 17:08 +0300, Heikki Linnakangas wrote:
> > Hmm, built-in rsync capability would be cool. Probably not in the first
> > phase, though..
>
> We have it for WAL shipping, in form of GUC "archive_command" :)
>
> Why not add full_backup_command ?

I see the current design is all master-push centered, i.e. the master is
in control of everything WAL related. That makes it hard to create a
slave which is simply pointed at the server and takes all its data from
there...

Why not have a design where the slave is in control of its own data?
I mean the slave could ask for the base files (possibly through a
special function deployed on the master), then ask for the WAL stream
and so on. That would easily let a slave cascade too, as it could relay
the WAL stream and serve the base backup too... or have a special WAL
repository software with the same interface as a normal master, but
having a choice of base backups and WAL streams. Plus, a slave-in-control
approach would also allow multiple slaves at the same time for a given
master...

The way it would work would be something like:

* configure the slave with a postgres connection to the master;
* the slave will connect and set up some meta data on the master
identifying itself and telling the master to keep the WAL needed by this
slave, and also get some meta data about the master's details if needed;
* the slave will call a special function on the master and ask for the
base backup to be streamed (potentially compressed with special
knowledge of postgres internals);
* once the base backup is streamed, or possibly in parallel, ask for
streaming the WAL files;
* when the base backup is finished, start applying the WAL stream, which
was cached in the meantime, while its streaming continues;
* keep the master updated about the state of the slave, so the master
can know if it needs to keep the WAL files which were not yet streamed;
* in case of network error, the slave connects again and starts to
stream the WAL from where it was left;
* in case of extended network outage, the master could decide to
unsubscribe the slave when a certain time-out happened;
* when the slave finds itself unsubscribed after a longer disconnection,
it could ask for a new base backup based on differences only... some
kind of built-in rsync thingy;

The only downside of this approach is that the slave machine needs a
full postgres super user connection to the master. That could be a
security problem in certain scenarios. The master-centric scenario needs
a connection in the other direction, which might be seen as more secure,
I don't know for sure...

Cheers,
Csaba.

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [GENERAL] Converting string to IN query

Andrus wrote:
> String contains list of document numbers (integers) like:
>
> '1,3,4'
>
> How to SELECT documents whose numbers are contained in this string.

> Numbers should be passed as single string literal since FYIReporting
> RDLEngine does not allow multivalue parameters.

Hmm - might be worth bringing that to their attention.

Try string_to_array(). Example:

SELECT * FROM generate_series(1,10) s
WHERE s = ANY (string_to_array('1,3,5,7', ',')::int[]);

Note that I'm casting it to an array of integers so the "= ANY" knows
what types it will need to match.

--
Richard Huxton
Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

[COMMITTERS] stackbuilder - wizard: New icon

Log Message:
-----------
New icon

Modified Files:
--------------
wizard/include/images:
StackBuilder.xpm (r1.1 -> r1.2)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/stackbuilder/wizard/include/images/StackBuilder.xpm.diff?r1=1.1&r2=1.2)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

Re: [pgsql-es-ayuda] Organization of the PSDP-es

When it's mature enough (you all tell me when), let me know and I'll
gladly donate the domain. I can pay with my card.

Does the project have a mentor who takes care of its management? Someone
who looks out for its interests and its progress?

gb.-

On Fri, Sep 12, 2008 at 7:22 AM, Moises Galan <pgsqlcol@gmail.com> wrote:
> Thanks, Teofilo, for your help. We'll have to find someone to donate
> the domain, or failing that pay for it. For the site I suggest the
> domain (psdp-es.org). For the web page design we could do something
> simple that doesn't require Flash; I don't know if someone can help us
> with that. In any case, it also occurs to me that it could be a wiki.
>
> It's good that you have time; we'll need it,
> --
> TIP 4: Don't 'kill -9' the postmaster
>

--
Guido Barosio
-----------------------
http://www.globant.com
guido.barosio@globant.com
--
TIP 9: visit our IRC channel #postgresql-es on irc.freenode.net

[COMMITTERS] pgsql: Skip opfamily check in eclass_matches_any_index() when the index

Log Message:
-----------
Skip opfamily check in eclass_matches_any_index() when the index isn't a
btree. We can't easily tell whether clauses generated from the equivalence
class could be used with such an index, so just assume that they might be.
This bit of over-optimization prevented use of non-btree indexes for nestloop
inner indexscans, in any case where the join uses an equality operator that
is also a btree operator --- which in particular is typically true for hash
indexes. Noted while trying to test the current hash index patch.

Tags:
----
REL8_3_STABLE

Modified Files:
--------------
pgsql/src/backend/optimizer/path:
indxpath.c (r1.227 -> r1.227.2.1)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/backend/optimizer/path/indxpath.c?r1=1.227&r2=1.227.2.1)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

[COMMITTERS] pgsql: Skip opfamily check in eclass_matches_any_index() when the index

Log Message:
-----------
Skip opfamily check in eclass_matches_any_index() when the index isn't a
btree. We can't easily tell whether clauses generated from the equivalence
class could be used with such an index, so just assume that they might be.
This bit of over-optimization prevented use of non-btree indexes for nestloop
inner indexscans, in any case where the join uses an equality operator that
is also a btree operator --- which in particular is typically true for hash
indexes. Noted while trying to test the current hash index patch.

Modified Files:
--------------
pgsql/src/backend/optimizer/path:
indxpath.c (r1.232 -> r1.233)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/backend/optimizer/path/indxpath.c?r1=1.232&r2=1.233)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

Re: [HACKERS] Synchronous Log Shipping Replication

On Fri, 2008-09-12 at 17:08 +0300, Heikki Linnakangas wrote:
> Simon Riggs wrote:

> I think we'll need an option to specify a maximum for the number of WAL
> files to keep around. The DBA should set that to the size of the WAL
> drive, minus some safety factor.
>
> > It should be clear that to make this work you must run with a base
> > backup that was derived correctly on the current master. You can do that
> > by re-copying everything, or you can do that by just shipping changed
> > blocks (rsync etc). So I don't see a problem in the first place.
>
> Hmm, built-in rsync capability would be cool. Probably not in the first
> phase, though..

Built-in? Why? I mean make base backup using rsync. That way only
changed data blocks need be migrated, so much faster.
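For example, something like this on the standby (host and paths made up for
the example), bracketed by pg_start_backup()/pg_stop_backup() on the master
as with any base backup:

rsync -av --delete master:/var/lib/pgsql/data/ /var/lib/pgsql/data/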

--
Simon Riggs www.2ndQuadrant.com
PostgreSQL Training, Services and Support


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] Transaction Snapshots and Hot Standby

On Fri, 2008-09-12 at 15:08 +0300, Hannu Krosing wrote:
> On Fri, 2008-09-12 at 13:54 +0200, Csaba Nagy wrote:
> > > I think that enabling long-running queries this way is both
> > > low-hanging
> > > fruit (or at least medium-height-hanging ;) ) and also consistent with
> > > the PostgreSQL philosophy of not replicating effort. As an example, we
> > > trust the OS's file system cache and don't try to write our own.
> >
> > I again have questions (unfortunately, I usually only have questions):
> >
> > * how will the buffers keep 2 different versions of the same page ?
>
> As the FS snapshot is mounted as a different directory, it will have
> its own buffer pages.

RelFileNode has a spcNode which can be redirected to a temporary
filesystem snapshot. So it's relatively easy to imagine redirecting
access to a table from its normal tablespace to the snapshot one.

> To conserve RAM, one could go to FS snapshot files only in case main
> pages have LSN too big to be trusted.

That would mean you'd need to do two I/Os, one to get the newly changed
page to get its LSN and another to get the old COW copy. We might waste
buffer space with that technique also. Since we'd be trying to avoid
caching bigger tables anyway (since 8.3), it seems easier to just go
straight to the COW copy.

So I think it's fairly straightforward to support temporary snapshots in
Postgres, with creation/destruction handled in the way you say.
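For reference, the struct in question is just three OIDs (per
src/include/storage/relfilenode.h), so the redirection amounts to
substituting spcNode:

    typedef struct RelFileNode
    {
        Oid spcNode; /* tablespace */
        Oid dbNode;  /* database */
        Oid relNode; /* relation */
    } RelFileNode;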

--
Simon Riggs www.2ndQuadrant.com
PostgreSQL Training, Services and Support


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] pg_regress inputdir

Tom Lane wrote:
> Peter Eisentraut <peter_e@gmx.net> writes:

> > I think the appropriate interface would be adding another option to
> > pg_regress called --workdir or --tempdir, which defaults to PWD, and
> > write the converted sql files there, and then look for the sql files to
> > execute in workdir/sql and in inputdir/sql. In some way, this copies
> > the vpath search mechanism.
>
> That would be required to make pg_regress run as far as its own
> facilities are concerned. But I think Alvaro is worried about something
> at a higher level: the regression test process as a whole has some
> directory layout assumptions built into it, particularly in regards
> to where to find .so's. If we don't have a workable solution for that
> it's not really going to help to change pg_regress like this.

Maybe the same work dir can be used as a place to store the shared
objects. I think all it'd require is to change @abs_builddir@ to point
to workdir.

That should work fine as long as nobody attempts to put the workdir in
some mount point that's marked noexec (which is somewhat common with
/tmp)

--
Alvaro Herrera http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[COMMITTERS] stackbuilder - wizard: New icon

Log Message:
-----------
New icon

Modified Files:
--------------
wizard/include/images:
StackBuilder.icns (r1.1 -> r1.2)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/stackbuilder/wizard/include/images/StackBuilder.icns.diff?r1=1.1&r2=1.2)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

[COMMITTERS] stackbuilder - wizard: New icon

Log Message:
-----------
New icon

Modified Files:
--------------
wizard/include/images:
StackBuilder.ico (r1.3 -> r1.4)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/stackbuilder/wizard/include/images/StackBuilder.ico.diff?r1=1.3&r2=1.4)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

Re: [pgsql-es-ayuda] Subtract from all the numbers in a field

Pablo Braulio wrote:

>
> But if I try to apply this in an update: update tabla set
> serie=(select serie-129 from ..), it gives me an error because the
> query returns more than one row.

update tabla set serie = serie - 129

Someday you should get around to reading a book on SQL

--
Alvaro Herrera http://www.amazon.com/gp/registry/DXLWNGRJD34J
Oh, oh, the Galacian girls will do it for pearls,
And those of Arrakis for water! But if you seek ladies
Who burn up like flames, try a daughter of Caladan! (Gurney Halleck)
--
TIP 2: you can unsubscribe from all the lists at once
(send "unregister YourEmailAddress" to majordomo@postgresql.org)

Re: [PERFORM] Choosing a filesystem

On Fri, Sep 12, 2008 at 5:11 AM, Greg Smith <gsmith@gregsmith.com> wrote:
> On Fri, 12 Sep 2008, Guillaume Cottenceau wrote:
>
> That's the main thing, and nothing else you can do will accelerate that.
> Without a useful write cache (which usually means RAM with a BBU), you'll at
> best get about 100-200 write transactions per second for any one client, and
> something like 500/second even with lots of clients (queued up transaction
> fsyncs do get combined). Those numbers increase to several thousand per
> second the minute there's a good caching controller in the mix.

While this is correct, if heavy writing is sustained, especially on
large databases, you will eventually outrun the write cache on the
controller and things will start to degrade towards the slow case. So
it's fairer to say that caching raid controllers burst up to several
thousand per second, with a sustained write rate somewhat better than
write-through but much worse than the burst rate.

How fast things degrade from the burst rate depends on certain
factors... how big the database is relative to the o/s read cache and
the controller write cache, and how random the i/o generally is. One
thing raid controllers are great at is smoothing bursty i/o, during
checkpoints for example.

Unfortunately when you outrun cache on raid controllers the behavior
is not always very pleasant...in at least one case I've experienced
(perc 5/i) when the cache fills up the card decides to clear it before
continuing. This means that if fsync is on, you get unpredictable
random freezing pauses while the cache is clearing.

merlin

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

Re: [HACKERS] Parsing of pg_hba.conf and authentication inconsistencies

On Fri, 12 Sep 2008 06:53:55 +1000
"Brendan Jurd" <direvus@gmail.com> wrote:
> Josh has assigned your patch to me for an initial review.

And me.

> First up I'd like to say that this is a really nice upgrade.
> Shielding a running server from reloading a bogus conf file makes a
> whole lot of sense.

Yes.

> The patch applied cleanly to HEAD, compiled fine on amd64 gentoo and

I had a small problem compiling. I'm not sure why it would be
different for me. I run NetBSD -current. Here is the error:

../../../src/include/libpq/hba.h:51: error: field 'addr' has incomplete
type

I was able to fix this by adding the following line to hba.h:

#include "libpq/pqcomm.h" /* needed for struct sockaddr_storage */

--
D'Arcy J.M. Cain <darcy@druid.net> | Democracy is three wolves
http://www.druid.net/darcy/ | and a sheep voting on
+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [PATCHES] hash index improving v3

On Thu, Sep 11, 2008 at 08:51:53PM -0600, Alex Hunsaker wrote:
> On Thu, Sep 11, 2008 at 9:24 AM, Kenneth Marshall <ktm@rice.edu> wrote:
> > Alex,
> >
> > I meant to check the performance with increasing numbers of collisions,
> > not increasing size of the hashed item. In other words, something like
> > this:
> >
> > for ($coll=500; $coll<=1000000; $coll=$coll*2) {
> > for ($i=0; $i<=1000000; $i++) {
> > hash(int8 $i);
> > }
> > # add the appropriate number of collisions, distributed evenly to
> > # minimize the packing overrun problem
> > for ($dup=0; $dup<=$coll; $dup++) {
> > hash(int8 MAX_INT + $dup * 1000000/$coll);
> > }
> > }
> >
> > Ken
>
> *doh* right something like this...
>
> create or replace function create_test_hash() returns bool as $$
> declare
> coll integer default 500;
> -- tweak this to where create index gets really slow
> max_coll integer default 1000000;
> begin
> loop
> execute 'create table test_hash_'|| coll ||'(num int8);';
> execute 'insert into test_hash_'|| coll ||' (num) select n
> from generate_series(0, '|| max_coll ||') as n;';
> execute 'insert into test_hash_'|| coll ||' (num) select
> (n+4294967296) * '|| max_coll ||'/'|| coll ||'::int from
> generate_series(0, '|| coll ||') as n;';
>
> coll := coll * 2;
>
> exit when coll >= max_coll;
> end loop;
> return true;
> end;
> $$ language 'plpgsql';
>
> And then benchmark each table, and for extra credit cluster the table
> on the index and benchmark that.
>
> Also obviously with the hashint8 which just ignores the top 32 bits.
>
> Right?
>
Yes, that is exactly right.

Ken

--
Sent via pgsql-patches mailing list (pgsql-patches@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-patches

Re: [HACKERS] Synchronous Log Shipping Replication

On Fri, 2008-09-12 at 17:08 +0300, Heikki Linnakangas wrote:
> Simon Riggs wrote:
>
> > It should be clear that to make this work you must run with a base
> > backup that was derived correctly on the current master. You can do that
> > by re-copying everything, or you can do that by just shipping changed
> > blocks (rsync etc). So I don't see a problem in the first place.
>
> Hmm, built-in rsync capability would be cool. Probably not in the first
> phase, though..

We have it for WAL shipping, in form of GUC "archive_command" :)

Why not add full_backup_command ?

--------------
Hannu

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [ADMIN] open source ERD for postgresql database

I've been using GNU ferret for a while, it's OK
for simple tasks, and can produce table graphs and
even output rudimentary PostgreSQL DDL in Version 0.6,
but it doesn't support PostgreSQL's full range of types yet
and the handling is somewhat awkward.

Version 0.7 looks much more promising, at least from
the screen shots, but that hasn't been released yet:

http://www.gnuferret.org/

On Fri, 2008-09-12 at 09:59 -0400, Barbara Stephenson wrote:
> I would like to use an ERD tool for postgres and it be open source. Any
> suggestions?
> --
> Regards,
>
> Barbara Stephenson
> EDI Specialist/Programmer
> Turbo, division of Ozburn-Hessey Logistics
> 2251 Jesse Jewell Pkwy NE
> Gainesville, GA 30507
> tel: (678)989-3020 fax: (404)935-6171
> barbara@turbocorp.com
> www.ohlogistics.com
>

Re: [pgsql-es-ayuda] Organization of the PSDP-es

Thanks, Teofilo, for your help. We'll have to find someone to donate the
domain, or failing that pay for it. For the site I suggest the domain
(psdp-es.org). For the web page design we could do something simple that
doesn't require Flash; I don't know if someone can help us with that. In
any case, it also occurs to me that it could be a wiki.

It's good that you have time; we'll need it,
--
TIP 4: Don't 'kill -9' the postmaster

[PERFORM] Postgres Performance on CPU limited Platforms

I'm trying to optimize postgres performance on a headless solid state
hardware platform (no fans or disks). I have the database stored on a
USB 2.0 flash drive (hdparm benchmarks reads at 10 MB/s). Performance is
limited by the 533Mhz CPU.

Hardware:
IXP425 XScale (big endian) 533Mhz 64MB RAM
USB 2.0 Flash Drive

Software:
Linux 2.6.21.4
postgres 8.2.5

I created a fresh database using initdb, then added one table.

Here is the create table:
CREATE TABLE archivetbl
(
"DateTime" timestamp without time zone,
"StationNum" smallint,
"DeviceDateTime" timestamp without time zone,
"DeviceNum" smallint,
"Tagname" character(64),
"Value" double precision,
"Online" boolean
)
WITH (OIDS=FALSE);
ALTER TABLE archivetbl OWNER TO novatech;

I've attached my postgresql.conf

I populated the table with 38098 rows.

I'm doing this simple query:
select * from archivetbl;

It takes 79 seconds to complete the query (when postgres is compiled
with -O2). I'm running the query from pgadmin3 over TCP/IP.

top shows CPU usage is at 100% with 95% being in userspace. oprofile
shows memset is using 58% of the CPU cycles!

CPU: ARM/XScale PMU2, speed 0 MHz (estimated)
Counted CPU_CYCLES events (clock cycles counter) with a unit mask of
0x00 (No unit mask) count 100000
samples % app name symbol name
288445 57.9263 libc-2.5.so memset
33273 6.6820 vmlinux default_idle
27910 5.6050 vmlinux cpu_idle
12611 2.5326 vmlinux schedule
8803 1.7678 libc-2.5.so __printf_fp
7448 1.4957 postgres dopr
6404 1.2861 libc-2.5.so vfprintf
6398 1.2849 oprofiled (no symbols)
4992 1.0025 postgres __udivdi3
4818 0.9676 vmlinux run_timer_softirq


I was having trouble getting oprofile to give a back trace for memset
(probably because my libc is optimized). So I redefined MemSet to call this:
void * gmm_memset(void *s, int c, size_t n)
{
    /* naive byte-at-a-time memset replacement, so oprofile can
     * attribute callers to an unoptimized symbol */
    size_t i;
    unsigned char *p = (unsigned char *)s;
    for (i = 0; i < n; i++)
    {
        p[i] = (unsigned char) c;
    }
    return s;
}

Here are the oprofile results for the same select query.

CPU: ARM/XScale PMU2, speed 0 MHz (estimated)
Counted CPU_CYCLES events (clock cycles counter) with a unit mask of
0x00 (No unit mask) count 100000
samples  %        image name  app name   symbol name
-------------------------------------------------------------------------------
1        5.2e-04  postgres    postgres   LockAcquire
1        5.2e-04  postgres    postgres   set_ps_display
20       0.0103   postgres    postgres   pg_vsprintf
116695   60.2947  postgres    postgres   dopr
116717   60.3061  postgres    postgres   gmm_memset
116717   60.3061  postgres    postgres   gmm_memset [self]
-------------------------------------------------------------------------------
20304    10.4908  oprofiled   oprofiled  (no symbols)
20304    10.4908  oprofiled   oprofiled  (no symbols) [self]
-------------------------------------------------------------------------------
4587     2.3700   vmlinux     vmlinux    rest_init
6627     3.4241   vmlinux     vmlinux    cpu_idle
11214    5.7941   vmlinux     vmlinux    default_idle
11214    5.7941   vmlinux     vmlinux    default_idle [self]
-------------------------------------------------------------------------------
16151    8.3450   vmlinux     vmlinux    rest_init
9524     4.9209   vmlinux     vmlinux    cpu_idle
9524     4.9209   vmlinux     vmlinux    cpu_idle [self]
6627     3.4241   vmlinux     vmlinux    default_idle
-------------------------------------------------------------------------------
5111     2.6408   oprofile    oprofile   (no symbols)
5111     2.6408   oprofile    oprofile   (no symbols) [self]

oprofile shows dopr is making most of the calls to memset.

Are these results typical? If memset is indeed using over 50% of the CPU
something seems seriously wrong.

Should I be expecting more performance from this hardware than what I'm
getting in these tests?

Regards,
George McCollister

Re: [HACKERS] Synchronous Log Shipping Replication

Simon Riggs wrote:
> If we were going to recover from failed-over standby back to original
> master just via WAL logs we would need all of the WAL files from the
> point of failover. So you'd need to be storing all WAL files just in case
> the old master recovers. I can't believe doing that would be the common
> case, because its so impractical and most people would run out of disk
> space and need to delete WAL files.

Depends on the transaction volume and database size of course. It's
actually not any different from the scenario where the slave goes
offline for some reason. You have the same decision there of how
long to keep the WAL files in the master, in case the slave wakes up.

I think we'll need an option to specify a maximum for the number of WAL
files to keep around. The DBA should set that to the size of the WAL
drive, minus some safety factor.

> It should be clear that to make this work you must run with a base
> backup that was derived correctly on the current master. You can do that
> by re-copying everything, or you can do that by just shipping changed
> blocks (rsync etc). So I don't see a problem in the first place.

Hmm, built-in rsync capability would be cool. Probably not in the first
phase, though..

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[COMMITTERS] stackbuilder - wizard: update Japanese.

Log Message:
-----------
update Japanese.

Modified Files:
--------------
wizard/i18n/ja_JP:
StackBuilder.po (r1.4 -> r1.5)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/stackbuilder/wizard/i18n/ja_JP/StackBuilder.po.diff?r1=1.4&r2=1.5)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

Re: [GENERAL] Converting string to IN query

Andrus wrote:
> <snip>
> SELECT * FROM invoices WHERE invoiceno IN ( '1,3,4' );
>
> but this causes error.
> <snip>

change it to ( '1','3','4' ) or ( 1,3,4 )

--

ACV


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

[ADMIN] open source ERD for postgresql database

I would like to use an ERD tool for postgres and it be open source. Any
suggestions?
--
Regards,

Barbara Stephenson
EDI Specialist/Programmer
Turbo, division of Ozburn-Hessey Logistics
2251 Jesse Jewell Pkwy NE
Gainesville, GA 30507
tel: (678)989-3020 fax: (404)935-6171
barbara@turbocorp.com
www.ohlogistics.com

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin