Sunday, June 22, 2008

Re: [JDBC] PreparedStatement implementation needed

zz_11@mail.bg wrote:
> Hi all,
>
> I cannot find the source code of PreparedStatement for PostgreSQL.
> I need to transfer some data from an Oracle DB to PG and need to check the
> implementation of setString in PG.
>
> Please help me find the source of setString in the PG JDBC source.

http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/jdbc/pgjdbc/org/postgresql/jdbc2/AbstractJdbc2Statement.java



[pgsql-jobs] PostgreSQL DBA required in Japan

eSynapse are a startup based in Kojimachi, Tokyo, providing a
PostgreSQL-based ASP Ecommerce platform.

We are looking for an experienced PostgreSQL DBA to manage our database
environments, and provide database design skills to assist developers
with schema creation.

Skills and Experience:
5 Years DB Development
3 Years DBA
1 Year PostgreSQL
Experience with HA Linux and PostgreSQL Clustering
Good English
Business Level Japanese

We are able to provide Visa sponsorship, but no relocation assistance.

Please reply by email with CV, availability and salary expectations.

--
Rob Cawte
CTO
eSynapse Co., Ltd.
Commercial Blogging and Ecommerce
http://www.esynapse.co.jp/


Re: [HACKERS] pg_stat_statements

I wrote:
> I will try to measure the overhead of logging in several implementations:
> 1. Log statements and dump them into the server logs.
> 2. Log statements and filter them before they are written.
> 3. Store statements in shared memory.
> I know 1 is slow, but I don't know which part of it is really slow.

I tested the overhead of SQL logging with pgbench:
$ pgbench -s10 -c10 -t10000 -n -S -M prepared

 logging type          |  tps  |    %
-----------------------+-------+--------
 0. no logging         | 10651 | 100.0%
 1. log to pg_log/*    |  6535 |  61.4%
 2. log to /dev/null   |  8347 |  78.4%
 3. store in memory    | 10280 |  96.5%

As expected, 1 is nearly 40% slower than the no-logging setting. Filtering
logs before they are written to files also seems not to be a perfect
solution: redirecting logs to /dev/null is the *fastest* possible filter,
yet it still showed more than 20% overhead. On the other hand, 3 has only
3.5% overhead.

I think storing SQL statements in server memory is worth trying even if
there are some difficulties, for example memory management. We could use
either hooks or DTrace for the purpose, but I'm working on the hook
method because of its portability.

I'll send a core patch and an extension module to -patches. I hope only
the patch will be applied to core; the extension module is better
developed separately from core.

Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center


[PATCHES] WIP: executor_hook for pg_stat_statements

I'm working on light-weight SQL logging for PostgreSQL.
http://archives.postgresql.org/pgsql-hackers/2008-06/msg00601.php

I have divided the SQL logging feature into a core patch and an extension
module. I hope only the patch will be applied to core; the extension
module is better developed separately from core.


The attached patch (executor_hook.patch) modifies HEAD as follows.

- Add "tag" field (uint32) into PlannedStmt.
- Add executor_hook to replace ExecutePlan().
- Move ExecutePlan() to a global function.


The archive file (pg_stat_statements.tar.gz) is a sample extension module.
It uses the existing planner_hook and the new executor_hook to record
statements as they are planned and executed. You can see all executed
statements through the following view:

View "public.pg_stat_statements"
Column | Type | Description
------------+--------+------------------------------------
userid | oid | user id who execute the statement
datid | oid | target database
query | text | query's SQL text
planned | bigint | number of planned
calls | bigint | number of executed
total_time | bigint | total executing time in msec

Here is a sample output of the view.

postgres=# SELECT pg_stat_statements_reset();
$ pgbench -c10 -t1000 -M prepared
postgres=# SELECT * FROM pg_stat_statements ORDER BY query;
userid | datid | query | planned | calls | total_time
--------+-------+-----------------------------------------------------------------------------------------------+---------+-------+------------
10 | 11505 | INSERT INTO history (tid, bid, aid, delta, mtime) VALUES ($1, $2, $3, $4, CURRENT_TIMESTAMP); | 10 | 10000 | 196
10 | 11505 | SELECT * FROM pg_stat_statements ORDER BY query; | 1 | 0 | 0
10 | 11505 | SELECT abalance FROM accounts WHERE aid = $1; | 10 | 10000 | 288
10 | 11505 | UPDATE accounts SET abalance = abalance + $1 WHERE aid = $2; | 10 | 10000 | 1269
10 | 11505 | UPDATE branches SET bbalance = bbalance + $1 WHERE bid = $2; | 10 | 10000 | 21737
10 | 11505 | UPDATE tellers SET tbalance = tbalance + $1 WHERE tid = $2; | 10 | 10000 | 6950
10 | 11505 | delete from history | 1 | 1 | 0
10 | 11505 | select count(*) from branches | 1 | 1 | 0
(8 rows)

You need to add the options below to postgresql.conf.
shared_preload_libraries = 'pg_stat_statements'
custom_variable_classes = 'statspack'
statspack.max_statements = 1000 # max number of distinct statements
statspack.statement_buffer = 1024 # buffer to record SQL text

This module is WIP and far from complete. It allocates a fixed amount of
shared memory and records SQL statements there, but doesn't yet handle
out-of-memory situations. Also, it can handle statements issued via the
extended protocol or prepared statements, but not simple-protocol queries.
And every user can view other users' queries.

Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center

Re: [HACKERS] Postgres + Window manager

Josh,

Can I hope for your interest?

>>> I stated the details of my proposal on pages 2-13 and 67-75 of the PDF
>>> document http://sql50.euro.ru/sql5.16.4.pdf , and I ask for it to be
>>> implemented. All my proposals are public domain.
JB>> I'm confused. You're planning to develop this
DT> I can't do this alone.
JB>> or you're looking for someone else to?
DT> Yes, I'm looking for programmers who are willing to support this
DT> initiative.


Dmitry Turin
SQL5 (5.16.4)

http://sql50.narod.ru/sql5.16.4.pdf


[BUGS] BUG #4259: I don't understand this query

The following bug has been logged online:

Bug reference: 4259
Logged by: kannan
Email address: kannan_siva70@yahoo.com
PostgreSQL version: 2005
Operating system: Windows 2003 Server
Description: I don't understand this query
Details:

create table [dbo].[olapobjects] (
[id] [varchar] (36) collate SQL_Latin1_General_CP1_CI_AS not null,
[parent id] [varchar] (36) collate SQL_Latin1_General_CP1_CI_AS not
null,
[objectname] [varchar] (150) collate SQL_Latin1_General_CP1_CI_AS not
null,
[classtype] [int] not null,
[object definition] [ntext] collate SQL_Latin1_General_CP1_CI_AS null,
[lastupdated] [datetime] null,
[changed] [bit] null,
[version] [int] null
) on [primary] textimage_on [primary]

GO

create table [dbo].[server] (
[objectdefinition] [ntext] collate SQL_Latin1_General_CP1_CI_AS null
) on [primary] textimage_on [primary]


Re: [GENERAL] partitioned table insert triggers

On Mon, 2008-06-23 at 10:18 +0900, ITAGAKI Takahiro wrote:
You need to do it this way in 8.3:
    EXECUTE 'INSERT INTO ' || ctable ||
            ' VALUES( (' || quote_literal(new) || '::' || ptable || ').*)';

In 8.4, the next release, you will be able to use a better way:
    EXECUTE 'INSERT INTO ' || ctable || ' VALUES( ($1).* )' USING new;

Domo arigato, Itagaki-san. (I think I got that right.)

The 8.3 method works like a charm.

-Reece

--
Reece Hart, http://harts.net/reece/, GPG:0x25EC91A0
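
For reference, a minimal self-contained sketch of the 8.3 approach above
(the parent table "measurements", the date-based routing, and the child
naming scheme here are hypothetical):

    CREATE OR REPLACE FUNCTION measurements_insert_trigger()
    RETURNS trigger AS $$
    DECLARE
        ctable text;
    BEGIN
        ctable := 'measurements_' || to_char(NEW.logdate, 'YYYY_MM');
        -- the 8.3 idiom from the thread: pass the whole row as a quoted
        -- literal and re-expand it using the parent's row type
        EXECUTE 'INSERT INTO ' || ctable ||
                ' VALUES( (' || quote_literal(NEW) || '::measurements).*)';
        RETURN NULL;  -- suppress the insert into the parent table
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER measurements_insert
        BEFORE INSERT ON measurements
        FOR EACH ROW EXECUTE PROCEDURE measurements_insert_trigger();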

Re: [PATCHES] Simplify formatting.c

Bruce Momjian wrote:
> > Actually it seems like the hard part is not so much the input
> > representation as the output representation --- what should the
> > base-level initcap routine return, to be reasonably efficient for
> > both cases?
>
> I hadn't gotten to trying it out yet, but I can see the output being a
> problem. You can't even really pre-allocate the storage before passing
> it because you don't know the length after case change. You could pass
> back a char* and repalloc to get the varlena header in there but that is
> very messy.
>
> Add to that that the multi-byte case also has to be converted to wide
> characters, so you have text -> char * -> wide chars -> char * -> text
> for the most complex case.
>
> I am starting to think that the simplest case is to keep the single-copy
> version in there for single-byte encodings and not worry about the
> overhead of the multi-byte case.

My new idea is that if we pass the length to str_initcap, we can eliminate
the string copy from text to char *. That leaves us with just one extra
string copy, from char * to text, which seems acceptable. We still have
the wide-char copy, but I don't see any easy way to eliminate that
because the multi-byte code is complex and not something we want to
duplicate.

--
Bruce Momjian <bruce@momjian.us>    http://momjian.us
EnterpriseDB                        http://enterprisedb.com

+ If your life is a hard drive, Christ can be your backup. +


[ANNOUNCE] == PostgreSQL Weekly News - June 22 2008 ==

== PostgreSQL Weekly News - June 22 2008 ==

New Survey: What TODO would you most like?
http://www.postgresql.org/community/

== PostgreSQL Jobs for June ==

http://archives.postgresql.org/pgsql-jobs/2008-06/threads.php

== PostgreSQL Local ==

TorontoPUG's first meeting will be June 23 at The Rhino.
http://pugs.postgresql.org/blog/159

OKPUG's first meeting will be June 23 at 7:00pm at Coach's in Norman, OK.
http://pugs.postgresql.org/node/408

PgDay.IT's planning meeting will be Wednesday, June 25 at 2130 CET via IRC.
irc://irc.freenode.net/pgday-it

pgDay Portland is July 20, just before OSCON.
http://pugs.postgresql.org/node/400

PGCon Brazil 2008 will be on September 26-27 at Unicamp in Campinas.
http://pgcon.postgresql.org.br/index.en.html

PGDay.IT 2008 will be October 17 and 18 in Prato.
http://www.pgday.org/it/

== PostgreSQL in the News ==

Planet PostgreSQL: http://www.planetpostgresql.org/

General Bits, Archives and occasional new articles:
http://www.varlena.com/GeneralBits/

PostgreSQL Weekly News is brought to you this week by David Fetter and
Josh Berkus.

Submit news and announcements by Sunday at 3:00pm Pacific time.
Please send English language ones to david@fetter.org, German language
to pwn@pgug.de, Italian language to pwn@itpug.org.

== Applied Patches ==

Tom Lane committed:

- Clean up a number of bogosities around pltcl's handling of the Tcl
"result":
1. Directly reading interp->result is deprecated in Tcl 8.0 and later;
you're supposed to use Tcl_GetStringResult. This code finally broke with
Tcl 8.5, because Tcl_GetVar can now have side-effects on interp->result
even though it preserves the logical state of the result. (There's
arguably a Tcl issue here, because Tcl_GetVar could invalidate the
pointer result of a just-preceding Tcl_GetStringResult, but I doubt the
Tcl guys will see it as a bug.)
2. We were being sloppy about the encoding of the result: some places
would push database-encoding data into the Tcl result, which should not
happen, and we were assuming that any error result coming back from Tcl
was in the database encoding, which is not a good assumption.
3. There were a lot of calls of Tcl_SetResult that uselessly specified
TCL_VOLATILE for constant strings. This is only a minor performance
issue, but I fixed it in passing since I had to look at all the calls
anyway.
#2 is a live bug regardless of which Tcl version you are interested in,
so back-patch even to branches that are unlikely to be used with Tcl 8.5.
I went back as far as 8.0, which is as far as the patch applied easily;
7.4 was using a different error processing scheme that has got its own
problems :-(

- In pgsql/src/backend/optimizer/plan/setrefs.c, fix the code that
adds regclass constants to a plan's list of relation OIDs that it
depends on for replan-forcing purposes. We need to consider plain
OID constants too, because eval_const_expressions folds a
RelabelType atop a Const to just a Const. This change could result
in OID values that aren't really for tables getting added to the
dependency list, but the worst-case consequence would be occasional
useless replans. Per report from Gabriele Messineo.

- Clean up some problems with redundant cross-type arithmetic
operators. Add int2-and-int8 implementations of the basic
arithmetic operators +, -, *, /. This doesn't really add any new
functionality, but it avoids "operator is not unique" failures that
formerly occurred in these cases because the parser couldn't decide
whether to promote the int2 to int4 or int8. We could alternatively
have removed the existing cross-type operators, but experimentation
shows that the cost of an additional type coercion expression node
is noticeable compared to such cheap operators; so let's not give up
any performance here. On the other hand, I removed the
int2-and-int4 modulo (%) operators since they didn't seem as
important from a performance standpoint. Per a complaint last
January from ykhuang.
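
  For illustration (a sketch; per the commit message, expressions of this
  shape formerly failed because the int2 could be promoted to either int4
  or int8):

    SELECT 1::int2 + 1::int8;  -- formerly "operator is not unique"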

- In pgsql/src/backend/storage/ipc/sinvaladt.c, remove freeBackends
counter from the sinval shared memory area. We used to use it to
help enforce superuser_reserved_backends, but since 8.1 it's just
been dead weight.

- Improve error reporting for problems in text search configuration
files by installing an error context subroutine that will provide
the file name and line number for all errors detected while reading
a config file. Some of the reader routines were already doing that
in an ad-hoc way for errors detected directly in the reader, but it
didn't help for problems detected in subroutines, such as encoding
violations. Back-patch to 8.3 because 8.3 is where people will be
trying to debug configuration files.

- In pgsql/src/backend/utils/mb/mbutils.c, fix compiler warning
introduced by recent patch. Tsk tsk.

- Fix a few places that were non-multibyte-safe in tsearch
configuration file parsing. Per bug #4253 from Giorgio Valoti.

- Rewrite the sinval messaging mechanism to reduce contention and
avoid unnecessary cache resets. The major changes are:
1. When the queue overflows, we only issue a cache reset to the specific
backend or backends that still haven't read the oldest message, rather
than resetting everyone as in the original coding.
2. When we observe backend(s) falling well behind, we signal SIGUSR1 to
only one backend, the one that is furthest behind and doesn't already
have a signal outstanding for it. When it finishes catching up, it will
in turn signal SIGUSR1 to the next-furthest-back guy, if there is one
that is far enough behind to justify a signal. The
PMSIGNAL_WAKEN_CHILDREN mechanism is removed.
3. We don't attempt to clean out dead messages after every
message-receipt operation; rather, we do it on the insertion side, and
only when the queue fullness passes certain thresholds.
4. Split SInvalLock into SInvalReadLock and SInvalWriteLock so that
readers don't block writers nor vice versa (except during the infrequent
queue cleanout operations).
5. Transfer multiple sinval messages for each acquisition of a read or
write lock.

- In pgsql/src/backend/storage/ipc/sinvaladt.c, seems I was too
optimistic in supposing that sinval's maxMsgNum could be read and
written without a lock. The value itself is atomic, sure, but on
processors with weak memory ordering it's possible for a reader to
see the value change before it sees the associated message written
into the buffer array. Fix by introducing a spinlock that's used
just to read and write maxMsgNum. (We could do this with less
overhead if we recognized a concept of "memory access barrier"; is
it worth introducing such a thing? At the moment probably not --- I
can't measure any clear slowdown from adding the spinlock, so this
solution is probably fine.) Per buildfarm results.

Bruce Momjian committed:

- Move USE_WIDE_UPPER_LOWER define to c.h, and remove TS_USE_WIDE and
use USE_WIDE_UPPER_LOWER instead.

- In pgsql/src/backend/utils/mb/README, add URL for introduction to
multibyte programming in C.

- Move wchar2char() and char2wchar() from tsearch into /mb to be
easier to use for other modules; also move pnstrdup(). Clean up
code slightly.

- Add URL for TODO: "Allow pg_hba.conf to specify host names along
with IP addresses."

Neil Conway committed:

- In pgsql/doc/src/sgml/monitoring.sgml, fix a few typos in the DTrace
docs. Patch from Euler Taveira de Oliveira, along with an
additional typo I noticed along the way.

Alvaro Herrera committed:

- Improve our #include situation by moving pointer types away from the
corresponding struct definitions. This allows other headers to
avoid including certain highly-loaded headers such as rel.h and
relscan.h, instead using just relcache.h, heapam.h or genam.h, which
are more lightweight and thus cause less unnecessary dependencies.

== Rejected Patches (for now) ==

No one was disappointed this week :-)

== Pending Patches ==

Zoltan Boszormenyi sent in another revision of his POSIX fadvise
patch.

Simon Riggs sent in a patch to improve performance via hint bits for
write I/O.



Re: [pgsql-www] archives.postgresql.org not responding

Tom Lane wrote:
> "Marc G. Fournier" <scrappy@hub.org> writes:
> > If it involves web infrastructure (ie. archives.postgresql.org), it
> > should go to pgsql-www ... if it involves non-web infrastructure
> > (ie. ftp) it should go to sysadmins ...
>
> Well, the point to me is that planned outages should be announced
> somewhere where non-admins can see them. Which service is involved
> shouldn't determine that.

Agreed. There should be a join-able list that will tell us about such
things. I can't join sysadmins to find out about outages, and www seems
to fit that role fine.

--
Bruce Momjian <bruce@momjian.us>    http://momjian.us
EnterpriseDB                        http://enterprisedb.com

+ If your life is a hard drive, Christ can be your backup. +


[pgsql-www] Mail VPS

Is now live ... I've already sent one test through to pgsql-core and it went
through fine, this is a 'second test' ...

now, right now, postgresql.org points to developer.postgresql.org, which, for
the web, redirects to www.postgresql.org ... this means that anyone reading
mail over IMAP who has their pop3/imap host set as postgresql.org, instead of
mail.postgresql.org, will need to change their mail readers ...

I have both mail and imap shut down on postgresql.org at this time ... if there
are no reports of issues by tomorrow evening, I'm going to make a backup of
developer.postgresql.org *as it is*, and then start removing everything that
was moved to the separate VPS (majordomo & email) to clean the old VPS of
the 'extras' ...

Stefan has access to the new VPS to set up munin/nagios monitoring ... I haven't
set up any extra accounts at this time .. Stefan, can I leave setting that up
with you also? Or, can someone send me a list of who all needs access, and I
can add them on?

I'm installing Apache 2.2 over there now so that the majordomo admin interfaces
work ... knew I'd forgotten something while setting things up ...

--
Marc G. Fournier Hub.Org Hosting Solutions S.A. (http://www.hub.org)
Email . scrappy@hub.org MSN . scrappy@hub.org
Yahoo . yscrappy Skype: hub.org ICQ . 7615664



[pgsql-advocacy] Talking at New York PHP meeting

FYI, I will be presenting at this Tuesday evening's New York PHP
meeting. I will be mostly talking about EnterpriseDB's Postgres Plus:

http://www.nyphp.org/index.php

--
Bruce Momjian <bruce@momjian.us>    http://momjian.us
EnterpriseDB                        http://enterprisedb.com

+ If your life is a hard drive, Christ can be your backup. +


Re: [GENERAL] partitioned table insert triggers

Reece Hart <reece@harts.net> wrote:

> c := 'INSERT INTO ' || ctable || ' VALUES (NEW.*)';

> Now that my only idea is gone, the question for y'all is: how?
>
> (Okay, I actually do have one more idea: construct the insert values
> list with quote_literal. I'm hoping there's a better way.)

You need to do it this way in 8.3:
    EXECUTE 'INSERT INTO ' || ctable ||
            ' VALUES( (' || quote_literal(new) || '::' || ptable || ').*)';

In 8.4, the next release, you will be able to use a better way:
    EXECUTE 'INSERT INTO ' || ctable || ' VALUES( ($1).* )' USING new;

Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center


Re: [HACKERS] ...Roll Back issue in PGSQL..

On Friday 20 June 2008 12:19:03 Joshua D. Drake wrote:
> On Fri, 2008-06-20 at 13:10 +0530, Deepak wrote:
> > Hi
> >
> > I am trying to create a TRIGGER function that populates values in
> > another table upon inserting a value into a table. The issue is, if
> > there is an exception (in Table 2), everything gets rolled back,
> > including Table 1. I don't want this to happen; I want the value in
> > Table 1 to stay. I tried "try....except" and introduced a SAVEPOINT,
> > but that resulted in a runtime error.
> >
> > Can anyone please help me out
>
> The function executed by the trigger will be executed as a single
> transaction. If any part fails, they all fail.
>

Well, wrapping the bits that touch table2 in a begin....exception block
would allow him to do what he wants.
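
A minimal sketch of what that would look like (the table and column names
here are hypothetical); a failure in the inner block rolls back only that
block, so the original insert into table1 survives:

    CREATE OR REPLACE FUNCTION table1_populate_table2()
    RETURNS trigger AS $$
    BEGIN
        BEGIN
            INSERT INTO table2 (val) VALUES (NEW.val);
        EXCEPTION WHEN others THEN
            RAISE NOTICE 'insert into table2 failed: %', SQLERRM;
        END;
        RETURN NEW;  -- the row still goes into table1
    END;
    $$ LANGUAGE plpgsql;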

--
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL


Re: [pgsql-es-ayuda] From Oracle Forms and PL/SQL to PostgreSQL

2008/6/22 Luis D. García <ldgarc@gmail.com>:
> Hello list.
>
> At the company where I work I have been asked to investigate the
> possibilities of migrating Oracle databases that use Oracle Forms and
> functions written in PL/SQL to PostgreSQL.
[...]
> Now, looking through the list archives I saw that there is a script
> called Ora2Pg that can migrate Oracle databases to PostgreSQL. Will this
> tool be enough to migrate these Oracle database elements to PostgreSQL?

Oracle Forms is a visual development tool (and the forms are not
objects in the database).

Also, I don't know whether Ora2Pg can migrate the PL/SQL functions,
although PL/pgSQL is mostly compatible with Oracle's PL/SQL...

--
Regards,
Jaime Casanova
PostgreSQL support and training
Guayaquil - Ecuador
Cel. (593) 87171157

Re: [pdxpug] Meeting recap - Logic and Databases with Jeff Davis

On Sun, 2008-06-22 at 11:31 -0700, Randal L. Schwartz wrote:
> I'd argue that this is also wrong. You should not include "unknown" in your
> sum, even with this trick. Just filter it out with WHERE where you can,
> and rely on this "trick" only when filtering the rows is not an option.
>

In SQL, NULL doesn't always mean "unknown". See: OUTER JOIN and any
aggregate function (except COUNT).

I provided a link to the slides for my talk, and in there is a complete
example, in which we start out with no NULLs at all in our data, and yet
still run into precisely this problem. In that particular example, a
WHERE will *not* solve the problem, because aggregates (other than
COUNT) return NULL when there are no input rows (which I'd like to point
out is not an "unknown").

http://www.pgcon.org/2008/schedule/events/83.en.html
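
Concretely (an illustrative query, with no NULLs anywhere in the data):

    SELECT sum(x), count(x) FROM (VALUES (1), (2)) t(x) WHERE x > 10;
    -- sum is NULL ("no rows", not "unknown"); count is 0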

Regards,
Jeff Davis



[pgsql-es-ayuda] From Oracle Forms and PL/SQL to PostgreSQL

Hello list.

At the company where I work I have been asked to investigate the
possibilities of migrating Oracle databases that use Oracle Forms and
functions written in PL/SQL to PostgreSQL. The truth is I don't know much
about this; until now I have had no contact with Oracle databases, and I
have only just been able to read a little about Oracle Forms
(http://en.wikipedia.org/wiki/Oracle_Forms).

Now, looking through the list archives I saw that there is a script called
Ora2Pg that can migrate Oracle databases to PostgreSQL. Will this tool be
enough to migrate these Oracle database elements to PostgreSQL? As I said,
I have read only a little so far, but surely with some recommendations from
those who have had to deal with this kind of situation I will be a little
clearer in my investigation.

Thanks to everyone in advance. Regards, Luis.

--
Luis D. García M.

Tel.: (+58) 2418662663
Cell: (+58) 4143482018

[pgadmin-support] Backup/Restore problem (Windows) - newbie

I have looked at prior messages on this topic, but haven't been able to get
my restore to work.

I am doing a backup on my client's server (I've tried PLAIN, COMPRESSED,
and SCHEMA/DATA only).

I then copy the file to my PC and want to restore it here. I've tried the
restore in pgAdmin, but the restore option is disabled. Or I try to restore
the PLAIN versions, and again it is disabled. If I rename the .backup file
to .sql, I can load the SQL script, but it fails immediately.

I guess my questions are....

1) Obviously, what are the steps to do a successful restore on a different
PC?

2) If I restore over my existing (old) DB, do I need to empty the tables, or
delete and recreate the DB first?

I have pgAdmin 1.8.2 at the client site, and 1.8.4 here. Both sites are
using PostgreSQL 8.3.0.

Thanks,
Rocky
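
For reference, a hedged sketch of the usual command-line round trip (host,
user, and database names here are hypothetical). pg_restore reads the
custom-format (COMPRESSED) dumps; PLAIN dumps are SQL scripts meant for
psql or the Query tool, not for the Restore dialog:

    $ pg_dump -h clienthost -U postgres -Fc -f mydb.backup mydb
    $ createdb -h localhost -U postgres mydb
    $ pg_restore -h localhost -U postgres -d mydb mydb.backup

On question 2: restoring into a freshly created database is simplest;
alternatively, pg_restore --clean drops the old objects before recreating
them.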



Re: [BUGS] Abnormal termination of PostgreSQL

On Sun, Jun 22, 2008 at 06:47:14PM -0400, Tom Lane wrote:
> Hmm. Unless you have reason to think that your hardware is flaky,
> I dislike writing this off as "just a glitch". I think most likely
> you got bit by a bug somewhere. Still, if we can't reproduce it
> it's gonna be mighty hard to find.
>
> If you have the time and interest, it might be worth repeating the
> whole setup sequence starting from initdb. I'm speculating that
> installing PostGIS might have somehow left corruption in server memory
> that later manifested as the visible problem.
>

I'm going to have a lot riding on this project if it flies, so if you think
there might be something lurking in the background, I'll wipe the database
and rebuild it and we'll see. The data load is all scripted anyway, so it
shouldn't take more than an hour or so. I'll let you know either way
tomorrow morning.


[pgadmin-hackers] Options request...

... Per Help Doc...

Maybe this is there and I just can't find it, but I would like to be able
to designate a default directory for loading and saving scripts in the
query tool.

Thanks!


Re: [BUGS] Abnormal termination of PostgreSQL

Bill Thoen <bthoen@gisnet.com> writes:
> On Sun, Jun 22, 2008 at 02:17:47PM -0400, Tom Lane wrote:
>> Can you reproduce the problem if you import the same data into a new
>> table?

> Arrg! Looks like the first load was corrupted, as you suspected. When I
> tried it a second time, it seems to have worked just fine.

Hmm. Unless you have reason to think that your hardware is flaky,
I dislike writing this off as "just a glitch". I think most likely
you got bit by a bug somewhere. Still, if we can't reproduce it
it's gonna be mighty hard to find.

If you have the time and interest, it might be worth repeating the
whole setup sequence starting from initdb. I'm speculating that
installing PostGIS might have somehow left corruption in server memory
that later manifested as the visible problem.

regards, tom lane


Re: [pgsql-www] archives.postgresql.org not responding

"Marc G. Fournier" <scrappy@hub.org> writes:
> If it involves web infrastructure (ie. archives.postgresql.org), it
> should go to pgsql-www ... if it involves non-web infrastructure
> (ie. ftp) it should go to sysadmins ...

Well, the point to me is that planned outages should be announced
somewhere where non-admins can see them. Which service is involved
shouldn't determine that.

regards, tom lane


Re: [pgsql-www] archives.postgresql.org not responding

Marc G. Fournier wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
>
>
> If it involves web infrastructure (ie. archives.postgresql.org), it should go
> to pgsql-www ... if it involves non-web infrastructure (ie. ftp) it should go
> to sysadmins ...

Yeah, like I said, next time I will just email both. There are too many
"other" facets to what you just said.

Joshua D. Drake


Re: [BUGS] Abnormal termination of PostgreSQL

On Sun, Jun 22, 2008 at 02:17:47PM -0400, Tom Lane wrote:
> Hmm ... interesting but not conclusive. The most likely explanation
> for this is corrupt data on-disk, but how it got that way is unclear.
> Can you reproduce the problem if you import the same data into a new
> table?

Arrg! Looks like the first load was corrupted, as you suspected. When I
tried it a second time, it seems to have worked just fine.

Well, at least I learned how to use GDB, so the time spent wasn't a total
waste. Searching Google for Postgres error messages like the one I had, I
noticed that one of the main things you have to teach people is how to
use GDB. ;-)

Thanks, Tom! I really appreciate your help.

- Bill Thoen


Re: [pgsql-www] archives.postgresql.org not responding


If it involves web infrastructure (ie. archives.postgresql.org), it should go
to pgsql-www ... if it involves non-web infrastructure (ie. ftp) it should go
to sysadmins ...


--On Sunday, June 22, 2008 17:43:13 -0400 Robert Treat
<xzilla@users.sourceforge.net> wrote:

> On Sunday 22 June 2008 02:45:07 Tom Lane wrote:
>> "Joshua D. Drake" <jd@commandprompt.com> writes:
>> > Stefan Kaltenbrunner wrote:
>> >> Tom Lane wrote:
>> >>> I've been seeing connection timeouts for the last little while ...
>> >>
>> >> AFAIK - currently there is maintainance work going on at the CMD hosting
>> >> facility besides archives also search,planet and the buildfarm are down
>> >> currently ...
>> >
>> > Longer than expected, but everything should be up but buildfarm. I am
>> > out of caffeine at this point but will look at buildfarm first thing in
>> > the morning.
>>
>> Fully understand about lack of caffeine --- but if this was a planned
>> service outage, some advance notice to pgsql-www would've been
>> appropriate, no? Or did I just miss it?
>>
>
> He sent notice to sysadmins, but not pgsql-www. I'm not sure what the policy
> is, but it seems like it might be good to send these outage notices to a more
> public list as well. (For example, there is currently a notice from Marc
> about doing maintenance on mail.postgresql.org, which I believe was supposed
> to happen last night, though we haven't received a followup mail yet.)
>
> --
> Robert Treat
> Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL
>

--
Marc G. Fournier Hub.Org Hosting Solutions S.A. (http://www.hub.org)
Email . scrappy@hub.org MSN . scrappy@hub.org
Yahoo . yscrappy Skype: hub.org ICQ . 7615664



Re: [pgsql-www] archives.postgresql.org not responding

On Sunday 22 June 2008 02:45:07 Tom Lane wrote:
> "Joshua D. Drake" <jd@commandprompt.com> writes:
> > Stefan Kaltenbrunner wrote:
> >> Tom Lane wrote:
> >>> I've been seeing connection timeouts for the last little while ...
> >>
> >> AFAIK - currently there is maintainance work going on at the CMD hosting
> >> facility besides archives also search,planet and the buildfarm are down
> >> currently ...
> >
> > Longer than expected, but everything should be up but buildfarm. I am
> > out of caffeine at this point but will look at buildfarm first thing in
> > the morning.
>
> Fully understand about lack of caffeine --- but if this was a planned
> service outage, some advance notice to pgsql-www would've been
> appropriate, no? Or did I just miss it?
>

He sent notice to sysadmins, but not pgsql-www. I'm not sure what the policy
is, but it seems like it might be good to send these outage notices to a more
public list as well. (For example, there is currently a notice from Marc
about doing maintenance on mail.postgresql.org, which I believe was supposed
to happen last night, though we haven't received a followup mail yet.)

--
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL


Re: [BUGS] Abnormal termination of PostgreSQL

On Sun, Jun 22, 2008 at 02:17:47PM -0400, Tom Lane wrote:
> Hmm ... interesting but not conclusive. The most likely explanation
> for this is corrupt data on-disk, but how it got that way is unclear.
> Can you reproduce the problem if you import the same data into a new
> table?

I'll try that and let you know.

> Can we see the exact table declaration? I'm wondering if the table has
> any columns of datatypes that are defined by the add-on modules.

Sure. It's just char(), integer and double precision; nothing fancy. Here
it is:


drop table corn0715 ;
create table corn0715 (
record_type char(2),
reporting_org char(2),
fips_st_cd char(2),
company char(3),
policy_num char(7),
crop_year char(4),
crop_cd char(4),
insurance_plan_cd char(2),
fips_cou_cd char(3),
unit_num char(5),
type_cd char(3),
practice_cd char(3),
coverage_flag char(1),
record_number char(3),
t_yield_map_area char(3),
ncs_yield_factor double precision,
written_agreement_type char(2),
written_agreement_num char(8),
written_agreement_proc_flag char(2),
yield_indicator char(2),
transitional_yield double precision,
fsa_yield double precision,
aqpproved_yield double precision,
prev_approved_yield double precision,
yield_year_1 char(4),
yield_type_1 char(2),
annual_yield_1 double precision,
yield_acres_1 double precision,
revenue_yield_code_1 integer,
yield_year_2 char(4),
yield_type_2 char(2),
annual_yield_2 double precision,
yield_acres_2 double precision,
revenue_yield_code_2 integer,
yield_year_3 char(4),
yield_type_3 char(2),
annual_yield_3 double precision,
yield_acres_3 double precision,
revenue_yield_code_3 integer,
yield_year_4 char(4),
yield_type_4 char(2),
annual_yield_4 double precision,
yield_acres_4 double precision,
revenue_yield_code_4 integer,
yield_year_5 char(4),
yield_type_5 char(2),
annual_yield_5 double precision,
yield_acres_5 double precision,
revenue_yield_code_5 integer,
yield_year_6 char(4),
yield_type_6 char(2),
annual_yield_6 double precision,
yield_acres_6 double precision,
revenue_yield_code_6 integer,
yield_year_7 char(4),
yield_type_7 char(2),
annual_yield_7 double precision,
yield_acres_7 double precision,
revenue_yield_code_7 integer,
yield_year_8 char(4),
yield_type_8 char(2),
annual_yield_8 double precision,
yield_acres_8 double precision,
revenue_yield_code_8 integer,
yield_year_9 char(4),
yield_type_9 char(2),
annual_yield_9 double precision,
yield_acres_9 double precision,
revenue_yield_code_9 integer,
yield_year_10 char(4),
yield_type_10 char(2),
annual_yield_10 double precision,
yield_acres_10 double precision,
revenue_yield_code_10 integer,
rate_st char(2),
rate_cou char(3),
farm_nbr char(7),
yield_limitation_flag char(2),
excessive_yield_edit_bypass char(1),
year_with_yield_refs integer,
applicable_option_codes char(16),
rate_yield double precision,
average_yield double precision,
prev_yield_limitation_flag char(2),
yield_index char(5),
perennial_year_of_set_out char(6),
perennial_leaf_year char(2),
perennial_density integer,
perennial_block_num integer,
perennial_transitional_yield_factor double precision,
perennial_special_cases char(3),
perennial_other char(1),
perennial_year_grafting char(6),
year_1_skip_row_code char(5),
year_2_skip_row_code char(5),
year_3_skip_row_code char(5),
year_4_skip_row_code char(5),
year_5_skip_row_code char(5),
year_6_skip_row_code char(5),
year_7_skip_row_code char(5),
year_8_skip_row_code char(5),
year_9_skip_row_code char(5),
year_10_skip_row_code char(5)
);



Re: [pgsql-www] archives.postgresql.org not responding

Tom Lane wrote:
> "Joshua D. Drake" <jd@commandprompt.com> writes:
>> Stefan Kaltenbrunner wrote:
>>> Tom Lane wrote:

> Fully understand about lack of caffeine --- but if this was a planned
> service outage, some advance notice to pgsql-www would've been
> appropriate, no? Or did I just miss it?

I sent it to sysadmins, but your point is taken. I will send to both in
the future.

Joshua D. Drake


Re: [JDBC] Query size?

Paul Tomblin <ptomblin@gmail.com> writes:
> What I'm asking though is if there is some way to know before I start
> returning results, short of doing a "COUNT(*)" first.

No. Postgres generates query results on the fly, so the server doesn't
know the number of rows that will be returned either, until the query
completes.

You can get an estimate by running EXPLAIN, but those estimates are
frequently far off the mark.

If you're really intent on having an accurate count before you fetch
the results, you can set up the query as a scrollable cursor, do MOVE
FORWARD ALL and note the move count, then MOVE BACKWARD ALL and start
fetching. This amounts to forcing the server to materialize the whole
result set before you start to fetch it, which is exceedingly expensive,
especially if the result set is very large.

Usually the best bet is to just limit how much you will fetch, using
LIMIT or a cursor as suggested by imad.

regards, tom lane
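
A sketch of that cursor recipe in SQL (table and cursor names here are
hypothetical):

    BEGIN;
    DECLARE c SCROLL CURSOR FOR SELECT * FROM big_table;
    MOVE FORWARD ALL IN c;   -- the command tag, e.g. "MOVE 123456",
                             -- carries the exact row count
    MOVE BACKWARD ALL IN c;  -- rewind to the start
    FETCH 100 FROM c;        -- then fetch as usual
    CLOSE c;
    COMMIT;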


Re: [BUGS] Abnormal termination of PostgreSQL

Tom Lane wrote:

> This seems pretty suspicious to me. gcc 4.3 shouldn't have resulted in
> any major changes in system header layout. What I am wondering is if
> GEOS and GDAL have any direct dependencies on Postgres, and if so
> whether they've been updated to work with 8.3.

FWIW I compile with gcc 4.3 without any problem, and there are buildfarm
members that do so as well. Not with these add-ons, though.

--
Alvaro Herrera                        http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.


Re: [HACKERS] Backend Stats Enhancement Request

Thomas Lee wrote:
> Thanks for the feedback Tom. An initial patch for this has been posted
> to pgsql-patches.

Thanks for the patch. I have added it to the current Commitfest wiki
page. If you plan to continue sending patches, please make sure you get
an account to do that yourself. Thanks.

http://wiki.postgresql.org/wiki/CommitFest:2008-07

--
Alvaro Herrera                        http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support


Re: [pdxpug] Meeting recap - Logic and Databases with Jeff Davis

>>>>> "David" == David E Wheeler <david@kineticode.com> writes:

David> try=# select sum(coalesce(column1, 0)) FROM (values (1), (NULL)) t;
David> sum
David> -----
David> 1
David> (1 row)

David> Even if the above example works, I'd certainly recommend this version (unless
David> column1 is NOT NULL).

I'd argue that this is also wrong. You should not include "unknown" in your
sum, even with this trick. Just filter it out with WHERE where you can,
and rely on this "trick" only when filtering the rows is not an option.

If NULL should mean 0 for sums, then it should be a 0, not a NULL, in the
table.

--
Randal L. Schwartz - Stonehenge Consulting Services, Inc. - +1 503 777 0095
<merlyn@stonehenge.com> <URL:http://www.stonehenge.com/merlyn/>
Smalltalk/Perl/Unix consulting, Technical writing, Comedy, etc. etc.
See http://methodsandmessages.vox.com/ for Smalltalk and Seaside discussion


Re: [GENERAL] Importing undelimited files (Flat Files or Fixed-Length records)

In article <485AE3AA.3030700@gisnet.com>, Bill Thoen <bthoen@gisnet.com> wrote:
% I've got to load some large fixed-legnth ASCII records into PG and I was
% wondering how this is done. The Copy command looks like it works only
% with delimited files, and I would hate to have to convert these files to
% INSERT-type SQL to run them through psql.. Is there a way one can
% specify a table structure with raw field widths and then just pass it a
% flat file?

pgloader is supposed to handle this.

http://pgfoundry.org/projects/pgloader

--

Patrick TJ McPhee
North York Canada
ptjm@interlog.com
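
A common SQL-only workaround, sketched here with hypothetical names and
field widths (and assuming the data contains no tab or backslash
characters, which would trip COPY's default text format): COPY each raw
line into a one-column staging table, then slice it with substr().

    CREATE TABLE raw_lines (line text);
    COPY raw_lines FROM '/tmp/fixed.dat';
    INSERT INTO target (field_a, field_b)
    SELECT substr(line, 1, 2), trim(substr(line, 3, 10))
    FROM raw_lines;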


Re: [BUGS] Abnormal termination of PostgreSQL

Bill Thoen <bthoen@gisnet.com> writes:
> Is this what you need to see? (I attached GDB to the running postgres
> process and let it continue until it crashed, and then entered 'bt' and
> this is what I got.)

Hmm ... interesting but not conclusive. The most likely explanation
for this is corrupt data on-disk, but how it got that way is unclear.
Can you reproduce the problem if you import the same data into a new
table?

> Here's the backstory. The process that's crashing is an SQL update
> query on the table (corn0715). There are 104 fields in the table and the
> raw record length is about 648 characters. I have just imported it as a
> delimited file with COPY and there are about 1.7 million records.

Can we see the exact table declaration? I'm wondering if the table has
any columns of datatypes that are defined by the add-on modules.

> I compiled PostgreSQL 8.3.3, PostGIS 1.3.3, GEOS 3.0.0, Proj4 4.6.0 and
> GDAL 1.5.2 from source using gcc 4.3.0 on a newly installed Fedora Core
> 9 system. All software is the latest version and the disc drive was
> wiped clean before I started, so there are no old libraries or
> configurations lurking in the background.

> I did have some issues compiling GEOS, PostGIS and GDAL. Apparently,
> there have been some changes in gcc, particularly in the include files,
> and I was getting errors like 'memcpy' not declared, and the like. I
> fixed that (I think) with a diff patch, and everything seemed to compile
> and install OK.

This seems pretty suspicious to me. gcc 4.3 shouldn't have resulted in
any major changes in system header layout. What I am wondering is if
GEOS and GDAL have any direct dependencies on Postgres, and if so
whether they've been updated to work with 8.3.

regards, tom lane


Re: [HACKERS] Backend Stats Enhancement Request

Thanks for the feedback Tom. An initial patch for this has been posted
to pgsql-patches.

Cheers,
T

Tom Lane wrote:
> Thomas Lee <tom@vector-seven.com> writes:
>
>> How does this sound:
>>
>
>
>> * A new GUC variable -- "activity_message_size" -- will be introduced
>>
>
> Well, "message" doesn't seem quite le mot juste to me for a column that
> is displaying a SQL command. Usually we'd use "statement", "command",
> or "query" to refer to one of those things. Since the relevant column
> of pg_stat_activity is already named "current_query", perhaps the
> best choice is "activity_query_size". Or "activity_query_length"?
>
> Another consideration is that it might be a good idea to name it to
> be obviously related to the controlling "track_activities" boolean.
> That would lead to "track_activity_query_size", or
> "track_activity_max_length", or some such.
>
>
>> * Minimum value of PGBE_DEFAULT_ACTIVITY_SIZE, maximum value of INT_MAX?
>>
>
> I was thinking about a range of 100 to 100K or thereabouts. INT_MAX
> is just silly...
>
>
>> I'm struggling a little to come up with a decent description of the GUC
>> variable -- something along the lines of "Sets the maximum length of
>> backend status messages". Any suggestions?
>>
>
> Be specific:
> "Sets the maximum length of pg_stat_activity.current_query."
>
>
>> Also: how should we allocate the memory for PgBackendStatus.st_activity?
>> I'm guessing it's going to be necessary to keep this in shmem ...
>>
>
> Yup. Look at existing variable-size shmem allocations.
> max_prepared_transactions might be a good reference, since it's not
> used in very many places.
>
> regards, tom lane
>
>



[PATCHES] A GUC variable to replace PGBE_ACTIVITY_SIZE

Attached is a patch providing a new GUC variable called
"track_activity_query_size", as previously discussed on pgsql-hackers here:

http://archives.postgresql.org/pgsql-hackers/2008-06/msg00814.php

This is my first patch for Postgres, so I'm sure it may need some love.
In particular:

* Should it be possible to set this new variable via a command-line
option ala shared_buffers?
* I'm not exactly sure about the best way to add unit/regression tests
for this change.
* I'm also not sure what's required in the way of updates to the
documentation.

Any comments or suggestions would be most appreciated.

Cheers,
T

diff --git a/src/backend/postmaster/pgstat.c b/src/backend/postmaster/pgstat.c
index f178fe3..912c1cf 100644
--- a/src/backend/postmaster/pgstat.c
+++ b/src/backend/postmaster/pgstat.c
@@ -101,6 +101,7 @@
bool pgstat_track_activities = false;
bool pgstat_track_counts = false;
int pgstat_track_functions = TRACK_FUNC_OFF;
+int pgstat_track_activity_query_size = PGBE_DEFAULT_ACTIVITY_SIZE;

/*
* BgWriter global statistics counters (unused in other processes).
@@ -2010,6 +2011,7 @@ pgstat_fetch_global(void)

static PgBackendStatus *BackendStatusArray = NULL;
static PgBackendStatus *MyBEEntry = NULL;
+static char* BackendActivityBuffer = NULL;


/*
@@ -2025,7 +2027,26 @@ BackendStatusShmemSize(void)
}

/*
- * Initialize the shared status array during postmaster startup.
+ * Ensures that every element of BackendStatusArray has a valid st_activity
+ * pointer.
+ */
+static void
+pgstat_initialize_activity_pointers(void)
+{
+ Size i;
+ PgBackendStatus* beentry = BackendStatusArray;
+ char* buffer = BackendActivityBuffer;
+
+ for (i = 0; i < MaxBackends; i++) {
+ beentry->st_activity = buffer;
+ buffer += pgstat_track_activity_query_size;
+ beentry++;
+ }
+}
+
+/*
+ * Initialize the shared status array & activity buffer during postmaster
+ * startup.
*/
void
CreateSharedBackendStatus(void)
@@ -2044,6 +2065,17 @@ CreateSharedBackendStatus(void)
*/
MemSet(BackendStatusArray, 0, size);
}
+
+ size = mul_size(pgstat_track_activity_query_size, MaxBackends);
+ BackendActivityBuffer = (char*)
+ ShmemInitStruct("Backend Activity Buffer", size, &found);
+
+ if (!found)
+ {
+ MemSet(BackendActivityBuffer, 0, size);
+ }
+
+ pgstat_initialize_activity_pointers();
}


@@ -2128,7 +2160,7 @@ pgstat_bestart(void)
beentry->st_waiting = false;
beentry->st_activity[0] = '\0';
/* Also make sure the last byte in the string area is always 0 */
- beentry->st_activity[PGBE_ACTIVITY_SIZE - 1] = '\0';
+ beentry->st_activity[pgstat_track_activity_query_size - 1] = '\0';

beentry->st_changecount++;
Assert((beentry->st_changecount & 1) == 0);
@@ -2188,7 +2220,7 @@ pgstat_report_activity(const char *cmd_str)
start_timestamp = GetCurrentStatementStartTimestamp();

len = strlen(cmd_str);
- len = pg_mbcliplen(cmd_str, len, PGBE_ACTIVITY_SIZE - 1);
+ len = pg_mbcliplen(cmd_str, len, pgstat_track_activity_query_size - 1);

/*
* Update my status entry, following the protocol of bumping
@@ -2267,6 +2299,7 @@ pgstat_read_current_status(void)
volatile PgBackendStatus *beentry;
PgBackendStatus *localtable;
PgBackendStatus *localentry;
+ char *localactivity;
int i;

Assert(!pgStatRunningInCollector);
@@ -2278,6 +2311,9 @@ pgstat_read_current_status(void)
localtable = (PgBackendStatus *)
MemoryContextAlloc(pgStatLocalContext,
sizeof(PgBackendStatus) * MaxBackends);
+ localactivity = (char *)
+ MemoryContextAlloc(pgStatLocalContext,
+ pgstat_track_activity_query_size * MaxBackends);
localNumBackends = 0;

beentry = BackendStatusArray;
@@ -2296,10 +2332,14 @@ pgstat_read_current_status(void)
int save_changecount = beentry->st_changecount;

/*
- * XXX if PGBE_ACTIVITY_SIZE is really large, it might be best to
- * use strcpy not memcpy for copying the activity string?
+ * XXX if pgstat_track_activity_query_size is really large,
+ * it might be best to use strcpy not memcpy for copying the
+ * activity string?
*/
memcpy(localentry, (char *) beentry, sizeof(PgBackendStatus));
+ memcpy(localactivity, (char *) beentry->st_activity,
+ pgstat_track_activity_query_size);
+ localentry->st_activity = localactivity;

if (save_changecount == beentry->st_changecount &&
(save_changecount & 1) == 0)
@@ -2314,9 +2354,10 @@ pgstat_read_current_status(void)
if (localentry->st_procpid > 0)
{
localentry++;
+ localactivity += pgstat_track_activity_query_size;
localNumBackends++;
}
- }
+ }

/* Set the pointer only after completion of a valid table */
localBackendStatusTable = localtable;
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index fa96437..b7f1302 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -1311,6 +1311,15 @@ static struct config_int ConfigureNamesInt[] =
},

{
+ {"track_activity_query_size", PGC_POSTMASTER, RESOURCES_MEM,
+ gettext_noop("Sets the maximum number of characters that will be displayed for pg_stat_activity.current_query."),
+ NULL,
+ },
+ &pgstat_track_activity_query_size,
+ PGBE_DEFAULT_ACTIVITY_SIZE, 100, 102400, NULL, NULL
+ },
+
+ {
{"temp_buffers", PGC_USERSET, RESOURCES_MEM,
gettext_noop("Sets the maximum number of temporary buffers used by each session."),
NULL,
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index a9f5501..03b9465 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -509,8 +509,8 @@ typedef struct PgStat_GlobalStats
* ----------
*/

-/* Max length of st_activity string ... perhaps replace with a GUC var? */
-#define PGBE_ACTIVITY_SIZE 1024
+/* Default length of st_activity string (see backend_activity_size GUC) */
+#define PGBE_DEFAULT_ACTIVITY_SIZE 1024

/* ----------
* PgBackendStatus
@@ -551,7 +551,7 @@ typedef struct PgBackendStatus
bool st_waiting;

/* current command string; MUST be null-terminated */
- char st_activity[PGBE_ACTIVITY_SIZE];
+ char *st_activity;
} PgBackendStatus;

/*
@@ -578,6 +578,7 @@ typedef struct PgStat_FunctionCallUsage
extern bool pgstat_track_activities;
extern bool pgstat_track_counts;
extern int pgstat_track_functions;
+extern int pgstat_track_activity_query_size;

/*
* BgWriter statistics counters are updated directly by bgwriter and bufmgr
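
With the patch applied, usage would look roughly like this (a hedged
sketch based on the patch above; the GUC is PGC_POSTMASTER, so changing
it requires a server restart, and the patch allows values from 100 to
102400 with a default of 1024):

    -- in postgresql.conf (server restart required):
    --   track_activity_query_size = 4096

    SELECT procpid, current_query FROM pg_stat_activity;
    -- current_query is now clipped at 4096 bytes instead of the old 1024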

Re: [pdxpug] Meeting recap - Logic and Databases with Jeff Davis

On Sat, 2008-06-21 at 15:29 -0700, David E. Wheeler wrote:
> Congrats on the new job, Jeff!
>

Thanks!

> I think it's more useful, but if you're basing the schema on an
> existing terminology used by your purchasing department, you can't go
> too far wrong.
>

I disagree. Using status codes has several disadvantages:

(1) They tie you to a specific business process. A business process is
just an algorithm implemented by people, and therefore the status codes
they use are just implementation details of an imperative process.

(2) States can be mutually exclusive, overlapping, or one state can
imply another. This is just confusing.

(3) They inherently require branching, which is complicated from a
logical standpoint. In SQL terms, this means using the CASE statement
more often, and requiring exclusions and other special treatment in
queries.

(4) It also means that any almost-correct query will provide an answer
that *looks* correct, even if it has subtle bugs because some state
should have been excluded, but wasn't.

Compare all those problems with something simple like just having the
tables: approval, purchase, and shipment_arrival. In that case, it's
simple to draw arbitrary implications from the data without awkward
exclusions and CASE statements (or any of the problems above).

Someone querying the data only needs to understand the predicates of the
relations; they don't need to understand what the states mean, nor do
they need to work backward from the meaning of the states to some kind
of query that properly handles the various states.

The problems with states stem from the fact that relational expressions
(and SQL) are declarative, but states imply some kind of state machine.
The states pull us away from declarative language and force us into
imperative language.

Maybe I should write a blog entry about this.

> Excellent points. I've never been anywhere where we had a data
> dictionary. Still, "pending" isn't too bad -- especially if you've
> ever looked at schemas in commercial products, where there can
> sometimes seem to be a use of security through obscurity -- that is,
> some vendors seem to try to protect their IP by giving their database
> tables and columns completely meaningless names.

I would argue that your data dictionary should match your relations. If
you have useful definitions in your data dictionary, why not make the
relations match, so that you can easily make logical inferences using
the relational operators?

Even a CSV file can have a data dictionary. I don't think a data
dictionary is a justification for a weakness in a database design
(although it's certainly better than nothing).

> Boy, that sure seems like a bug. This should be legal, though:

It's not just a bug, it's a standardized bug.

SQL is confusing because it uses NULLs in at least two senses:
(1) Unknown. This is a value: the third truth value. Operators,
functions, and IN all think this is what NULL means.
(2) Nothingness. This is not a value. Aggregates and outer joins think
this is what NULL means.

COALESCE can obviously help you switch between those two senses of NULL,
but why should you have to? Why should you get a result that *looks*
correct from a query that *looks* correct when it's actually wrong?

My talk slides have a full example that illustrates such a query:
http://www.pgcon.org/2008/schedule/events/83.en.html

Any language can do anything, so I don't think of a workaround like
COALESCE as a justification for the bad standard behavior.
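
The two senses are easy to see side by side (a small illustration):

SELECT NULL = NULL;       -- NULL: sense (1), the unknown truth value
SELECT 1 IN (2, NULL);    -- NULL: membership is unknown, not false

-- sense (2): the aggregate treats NULL as nothing and skips it
SELECT sum(x) FROM (VALUES (1), (NULL::int)) AS t(x);   -- 1, not NULL

-- and COALESCE is the workaround that shuttles between the senses:
SELECT COALESCE(sum(x), 0)
  FROM (VALUES (1)) AS t(x) WHERE false;                -- 0, not NULL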

> Your explanation makes perfect sense. It's just that the first case
> seems wrong (to me, at least).

It is wrong -- or at least horribly inconsistent. If SQL really wanted
to have both unknown and nothingness, it should have called them two
separate things.

> Yeah. COALESCE() is your friend. This is also why I try to make
> columns NOT NULL as often as possible. NULLs are pretty evil.

I'll take this opportunity to point out that COALESCE has similar
problems to using states: it requires special cases and the result
always looks right even when it's wrong.

Compare to nil in Ruby. Every operator in SQL is defined for NULL input,
but virtually no operators are defined for nil input in Ruby. This means
that mishandling the special value nil in Ruby will almost always result
in an error, whereas the same mistake in SQL will produce a
correct-looking result.

Of course, using special cases (like nil) is still imperative
programming, but at least it's slightly less error prone.

When NULL is used in two very different senses like that, SQL
effectively becomes an untyped system, like assembly.

Regards,
Jeff Davis


--
Sent via pdxpug mailing list (pdxpug@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pdxpug

[COMMITTERS] npgsql - Npgsql2: Added tests for parameter casts.

Log Message:
-----------

Added tests for parameter casts.

Modified Files:
--------------
Npgsql2/testsuite/noninteractive/NUnit20:
CommandTests.cs (r1.11 -> r1.12)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/npgsql/Npgsql2/testsuite/noninteractive/NUnit20/CommandTests.cs.diff?r1=1.11&r2=1.12)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

[COMMITTERS] npgsql - Npgsql2: [#1003400] Later type binding for DbType.Object

Log Message:
-----------


[#1003400] Later type binding for DbType.Object

Fixed problems with explicit parameter datatype casting. Now, whenever a parameter has a DbType.Object data type, the cast will be turned off. This restores the flexibility of leaving the type specification to PostgreSQL itself.

If a user wants to turn the cast on, just specify a DbType value other than Object, or specify an NpgsqlDbType. See http://pgfoundry.org/forum/message.php?msg_id=1003583, http://pgfoundry.org/forum/message.php?msg_id=1003620 and http://pgfoundry.org/tracker/index.php?func=detail&aid=1003400&group_id=1000140&atid=592 for discussions about that. Also, DbType.Object is mapped in NpgsqlTypesHelper as an alias for the text datatype in the internal mapping.

Thanks Waldemar Bergstreiser and Andrus Moor for discussions about that.

Modified Files:
--------------
Npgsql2/src/Npgsql:
NpgsqlCommand.cs (r1.22 -> r1.23)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/npgsql/Npgsql2/src/Npgsql/NpgsqlCommand.cs.diff?r1=1.22&r2=1.23)
NpgsqlParameter.cs (r1.14 -> r1.15)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/npgsql/Npgsql2/src/Npgsql/NpgsqlParameter.cs.diff?r1=1.14&r2=1.15)
Npgsql2/src/NpgsqlTypes:
NpgsqlTypesHelper.cs (r1.15 -> r1.16)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/npgsql/Npgsql2/src/NpgsqlTypes/NpgsqlTypesHelper.cs.diff?r1=1.15&r2=1.16)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

[BUGS] Abnormal termination of PostgreSQL

Tom Lane wrote:

> Bill Thoen <bthoen@gisnet.com> writes:
>
>> Does anyone know what happened and how I can fix it?
>>
> Well, you evidently hit a bug, but there's not nearly enough info here
> to guess whether the bug is in Postgres, PostGIS, GEOS, or GDAL, or
> perhaps in your custom build process for one of these.
>
> ...
>
> Then send the trace to pgsql-bugs. You'll need to provide some details
> about the table and the query, too.
>
Is this what you need to see? (I attached GDB to the running postgres
process and let it continue until it crashed, and then entered 'bt' and
this is what I got.)

Program received signal SIGSEGV, Segmentation fault.
0x08081357 in slot_deform_tuple (slot=0x92d3618, natts=4) at
heaptuple.c:1502
1502 off = att_align_pointer(off,
thisatt->attalign, -1,
(gdb) bt
#0 0x08081357 in slot_deform_tuple (slot=0x92d3618, natts=4) at
heaptuple.c:1502
#1 0x08081739 in slot_getattr (slot=0x92d3618, attnum=4,
isnull=0x92dd753 "") at heaptuple.c:1625
#2 0x08168525 in ExecProject (projInfo=0x92dd470, isDone=0xbfed09e8) at
execQual.c:4601
#3 0x0816e5f6 in ExecScan (node=0x92d3798, accessMtd=0x8179a70
<SeqNext>) at execScan.c:143
#4 0x08179a69 in ExecSeqScan (node=0x92d3798) at nodeSeqscan.c:130
#5 0x08167888 in ExecProcNode (node=0x92d3798) at execProcnode.c:334
#6 0x08165b23 in ExecutorRun (queryDesc=0x92a3b50,
direction=ForwardScanDirection, count=0) at execMain.c:1248
#7 0x082024ab in ProcessQuery (plan=0x929c798, params=<value optimized
out>, dest=0x92be820,
completionTag=0xbfed0c8a "") at pquery.c:179
#8 0x082026ac in PortalRunMulti (portal=0x92ca8b0, isTopLevel=<value
optimized out>, dest=0x92be820,
altdest=0x92be820, completionTag=0xbfed0c8a "") at pquery.c:1242
#9 0x08202e2c in PortalRun (portal=0x92ca8b0, count=2147483647,
isTopLevel=-45 '�', dest=0x92be820,
altdest=0x92be820, completionTag=0xbfed0c8a "") at pquery.c:813
#10 0x081fdcf7 in exec_simple_query (
query_string=0x9272d18 "update corn0715 set
ncs_yield_factor=ncs_yield_factor/10000,
transitional_yield=transitional_yield/100, fsa_yield=fsa_yield/100,
aqpproved_yield=aqpproved_yield/100, prev_approved_yield=prev_approved_"...)
at postgres.c:986
#11 0x081ff44c in PostgresMain (argc=4, argv=0x921b590,
username=0x921b570 "bthoen") at postgres.c:3572
#12 0x081d253f in ServerLoop () at postmaster.c:3207
#13 0x081d3492 in PostmasterMain (argc=3, argv=0x9218378) at
postmaster.c:1029
#14 0x08188126 in main (argc=3, argv=0x9218378) at main.c:188
(gdb)

Here's the backstory. The process that's crashing is an SQL update
query on the table (corn0715). There are 104 fields in the table and the
raw record length is about 648 characters. I have just imported it as a
delimited file with COPY and there are about 1.7 million records. There
are no indexes yet, and even though the database has been "spatialized"
by PostGIS this particular table has no spatial component. The database
contains only this table and the two reference tables installed by PostGIS.

Also, this crash occurs not only when running the update query, but
also when trying to create an index or selecting data from all rows or
even when running pg_dump. It takes about 10 minutes to occur, which is
about the same time that it took to SELECT count(*) FROM corn0715;
(which did work, BTW). So I suspect it's an "end-of-table" problem.

Here's the query:
UPDATE corn0715 SET ncs_yield_factor=ncs_yield_factor/10000,
transitional_yield=transitional_yield/100, fsa_yield=fsa_yield/100,
aqpproved_yield=aqpproved_yield/100,
prev_approved_yield=prev_approved_yield/100,
annual_yield_1=annual_yield_1/100,
yield_acres_1=yield_acres_1/100,annual_yield_2=annual_yield_2/100,
yield_acres_2=yield_acres_2/100,annual_yield_3=annual_yield_3/100,
yield_acres_3=yield_acres_3/100, annual_yield_4=annual_yield_4/100,
yield_acres_4=yield_acres_4/100, annual_yield_5=annual_yield_5/100,
yield_acres_5=yield_acres_5/100, annual_yield_6=annual_yield_6/100,
yield_acres_6=yield_acres_6/100, annual_yield_7=annual_yield_7/100,
yield_acres_7=yield_acres_7/100, annual_yield_8=annual_yield_8/100,
yield_acres_8=yield_acres_8/100, annual_yield_9=annual_yield_9/100,
yield_acres_9=yield_acres_9/100, annual_yield_10=annual_yield_10/100,
yield_acres_10=yield_acres_10/100, rate_yield=rate_yield/100,
average_yield=average_yield/100,
perennial_transitional_yield_factor=perennial_transitional_yield_factor/100;

I compiled PostgreSQL 8.3.3, PostGIS 1.3.3, GEOS 3.0.0, Proj4 4.6.0 and
GDAL 1.5.2 from source using gcc 4.3.0 on a newly installed Fedora Core
9 system. All software is the latest version and the disc drive was
wiped clean before I started, so there are no old libraries or
configurations lurking in the background.

I did have some issues compiling GEOS, PostGIS and GDAL. Apparently,
there have been some changes in gcc, particularly in the include files,
and I was getting errors like 'memcpy' not declared, and the like. I
fixed that (I think) with a diff patch, and everything seemed to compile
and install OK.

I don't really understand how to interpret the results from GDB, but if
somebody who knows could take a look at this and give me some
suggestions, I'd appreciate it. If you need samples of the table, or the
SQL scripts I used to load it, I can provide those too. If you want to
see the entire table you'll need at least about 1.1 GB of disc space.

TIA,
- Bill Thoen


--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs

Re: [NOVICE] Server Programming Interface - spi.exec() overheads issue

On Wed, Jun 18, 2008 at 11:35 AM, João Gonçalves <joaofgo@gmail.com> wrote:
> Hi!
> I'm not quite sure if this is the right place to discuss this issue but
> here goes. I've been testing the PL/R language coupled with SPI and postgis to
> produce Voronoi tiles; the following function pushes a set of polygon
> vertexes into an R array and inserts the Voronoi tiles into the database.
> Since my current working dataset has something like 1.5M vertexes, the
> overheads built up are huge and the process fails due to insufficient
> memory. Also, I can only see the results after all the data is processed.
>
> What is the best way to handle this? Can I flush/purge pg buffers to better
> handle memory issues?
> Should I adopt a block processing strategy to narrow down the initial
> dataset through a LIMIT statement or something along this line?
> Is spi.execute() in read-only mode usable / effective in this context?
> Are spi.freetuple or spi.freetuptable usable? How?
> Any ideas?
>
> CREATE OR REPLACE FUNCTION voronoi_tiles(TEXT, TEXT, INTEGER) RETURNS
> void AS '
> library(deldir)
>
> gids<-pg.spi.exec(sprintf("SELECT DISTINCT a.poly_gid AS gid FROM
> %1$s AS a ORDER BY gid;",arg1))
>
> for (i in 1:length(gids$gid)){
>
> # Retrieve points from the auxiliary geometry
> points <- pg.spi.exec(sprintf("SELECT st_x(a.the_geom) AS x,
> st_y(a.the_geom) AS y FROM %1$s AS a WHERE a.poly_gid = %2$i;", arg1,
> gids$gid[[i]]))
>
> # External envelope
> xmin<-min(points$x)-abs(min(points$x)-max(points$x))
> xmax<-max(points$x)+abs(min(points$x)-max(points$x))
> ymin<-min(points$y)-abs(min(points$y)-max(points$y))
> ymax<-max(points$y)+abs(min(points$y)-max(points$y))
>
> # Generate the voronoi object
> voro = deldir(points$x, points$y, digits=6, frac=1e-3,
> list(ndx=2,ndy=2), rw=c(xmin,xmax,ymin,ymax))
>
> # Get the individual tiles/polygons for the Voronoi diagram
> tiles = tile.list(voro)
>
> for(j in 1:length(tiles)){
>
> tile<-tiles[[j]]
> geom = "GeomFromText(''LINESTRING("
>
> for(k in 1:length(tile$x)){
> geom = sprintf("%s %.6f %.6f,", geom, tile$x[[k]],
> tile$y[[k]])
> }
>
> # Close the tile by appending the first vertex
> geom = sprintf("%s %.6f %.6f)'' , %i)", geom, tile$x[[1]],
> tile$y[[1]], arg3)
>
> # Insert into the database
> pg.spi.exec(sprintf("INSERT INTO %1$s (gid, the_geom) VALUES
> (%2$i, %3$s)", arg2, gids$gid[[i]], geom))
> }
> }
> ' LANGUAGE 'plr';
>
> Example:
> SELECT voronoi_tiles('test_set', 'output_test_table', <SRID>);
>
>
>
>
>
>
>
> --
> Sent via pgsql-novice mailing list (pgsql-novice@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-novice
>

Try using a cursor. See pg.spi.cursor_open in the pl/r documentation.
It will allow you to issue the query once, but fetch and process
results a little at a time (like your LIMIT idea, but easier).
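
For reference, the batching idea behind pg.spi.cursor_open is the same
as a plain SQL cursor; a rough sketch in SQL terms, using the tables
from the example above (the gid value is hypothetical):

BEGIN;
DECLARE pts CURSOR FOR
    SELECT st_x(the_geom) AS x, st_y(the_geom) AS y
      FROM test_set
     WHERE poly_gid = 42;    -- hypothetical gid
FETCH 10000 FROM pts;        -- process this batch, then FETCH again
CLOSE pts;
COMMIT;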

- Josh / eggyknap

--
Sent via pgsql-novice mailing list (pgsql-novice@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-novice

Re: [GENERAL] postgres generates too much processes per minute

On Thu, Jun 19, 2008 at 8:17 AM, Bill Moran
<wmoran@collaborativefusion.com> wrote:
> In response to "coliban@gmail.com" <coliban@gmail.com>:
>
>> Hello,
>>
>> I have the problem that postgres is starting and stopping several (up
>> to 4) processes per minute, so that the error log in Windows is
>> filling up, with more than 14 entries every minute.
>>
>> Does someone know how to reduce the starting and stopping of so many
>> processes? Is there a variable or something?
>
> A process is started for every connection and ends when the connection
> is closed. If you're opening/closing connections frequently, you'll
> see this. It's by design.

You might consider using a connection pooler as well. The point of
such a thing is to allow applications requiring lots of connections to
share a smaller set of real connections to the database by proxying
connections. Look for pgBouncer and pgPool as example connection
pooling applications.
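
For example, a minimal pgBouncer configuration might look something
like this (a sketch only; the values are illustrative, so check the
pgBouncer documentation for the real details):

; pgbouncer.ini
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = userlist.txt
pool_mode = session
max_client_conn = 200
default_pool_size = 20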

- Josh / eggyknap

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [JDBC] Query size?

Check out this documentation. It does what you want, and in an efficient way.

http://doc.postgresintl.com/jdbc/ch04.html#jdbc-query-with-cursor

--Imad

On Sun, Jun 22, 2008 at 10:37 PM, Paul Tomblin <ptomblin@gmail.com> wrote:
> imad wrote:
>>
>> Why don't you adopt a proactive approach here and append a LIMIT to
>> every query you send to the server?
>> You can run OFFSET/LIMIT combinations of the query until you get
>> fewer results than LIMIT.
>
> What I'm asking though is if there is some way to know before I start
> returning results, short of doing a "COUNT(*)" first.
>
>
> --
> Paul Tomblin <ptomblin@xcski.com> http://blog.xcski.com/
> We're the technical experts. We were hired so that management could
> ignore our recommendations and tell us how to do our jobs.
> -- Mike Andrews
>

--
Sent via pgsql-jdbc mailing list (pgsql-jdbc@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-jdbc

Re: [JDBC] Query size?

imad wrote:
> Why don't you adopt a proactive approach here and append a LIMIT to
> every query you send to the server?
> You can run OFFSET/LIMIT combinations of the query until you get
> fewer results than LIMIT.

What I'm asking though is if there is some way to know before I start
returning results, short of doing a "COUNT(*)" first.


--
Paul Tomblin <ptomblin@xcski.com> http://blog.xcski.com/
We're the technical experts. We were hired so that management could
ignore our recommendations and tell us how to do our jobs.
-- Mike Andrews

--
Sent via pgsql-jdbc mailing list (pgsql-jdbc@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-jdbc

Re: [JDBC] Query size?

Why don't you adopt a proactive approach here and append a LIMIT to
every query you send to the server?
You can run OFFSET/LIMIT combinations of the query until you get
fewer results than LIMIT.
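
One refinement of the LIMIT idea (a sketch, against a hypothetical
table): ask for one row more than the page size, so a full result
tells you more rows exist without a separate COUNT(*):

-- page size 100: fetch 101 rows; getting all 101 means "more available"
SELECT *
  FROM results            -- hypothetical table
 ORDER BY id
 LIMIT 101 OFFSET 0;      -- next page: OFFSET 100, and so on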


--Imad


On Sun, Jun 22, 2008 at 9:38 PM, Paul Tomblin <ptomblin@gmail.com> wrote:
> I'm working with another developer to produce an iPhone app - he's writing
> the app and I'm writing the server that provides the data to it. One of the
> queries he's doing might produce hundreds or thousands of results, and he
> doesn't want me sending that much data.
>
> So I'm wondering if there is a simple way to determine if a query is going
> to produce a result of more than N rows? Is the only way to do a "select
> count(*)" query first, or there something in the ResultSet that tells me how
> many results are queued up?
>
> --
> Paul Tomblin <ptomblin@xcski.com> http://blog.xcski.com/
> God is real, unless declared as an integer.
>
> --
> Sent via pgsql-jdbc mailing list (pgsql-jdbc@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-jdbc
>

--
Sent via pgsql-jdbc mailing list (pgsql-jdbc@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-jdbc

[JDBC] Query size?

I'm working with another developer to produce an iPhone app - he's writing the
app and I'm writing the server that provides the data to it. One of the
queries he's doing might produce hundreds or thousands of results, and he
doesn't want me sending that much data.

So I'm wondering if there is a simple way to determine if a query is going to
produce a result of more than N rows? Is the only way to do a "select
count(*)" query first, or there something in the ResultSet that tells me how
many results are queued up?

--
Paul Tomblin <ptomblin@xcski.com> http://blog.xcski.com/
God is real, unless declared as an integer.

--
Sent via pgsql-jdbc mailing list (pgsql-jdbc@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-jdbc

Re: [pgsql-es-ayuda] about the floating-point decimal separator

Thanks Alejandro, I'll give it a try.

2008/6/22 Alejandro D. Burne <alejandro.dburne@gmail.com>:
On June 21, 2008 at 20:43, Sandro Martínez <smf2502@gmail.com> wrote:
> Hello list members:
>
> I have this problem and wanted to see if anyone can give me some ideas
> about it:
>
> My client application is configured to use Spanish, which means the
> decimal separator is the comma. The client application uses ODBC and
> some Visual Studio things called Typed DataSets; the data is handed to
> those, and they take care of inserting it into the Postgres DB. All of
> that works fine until a floating-point value is passed. Postgres
> raises an error saying that the number 1,234234 is not a valid double
> precision value; apparently Postgres only accepts the point (.) for
> doubles.
>
> In short, I want to know how I can configure Postgres to accept them.
>
> Parsing the value and changing the comma to a point is no good to me,
> because in the end that is done internally by Visual Studio.
>
> Regards, and thanks in advance.

You will probably have to adjust the regional and language settings of
your Windows (I'm inferring Windows from Visual Studio), specifically
the decimal separator.

Regards, Alejandro
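
For what it's worth, the server-side behavior is easy to verify;
PostgreSQL's input syntax for floating-point values always uses the
point, regardless of locale settings (a quick check):

SELECT 1.234234::double precision;    -- accepted
SELECT '1,234234'::double precision;
-- ERROR:  invalid input syntax for type double precision: "1,234234"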

Re: [GENERAL] Dump and restore problem

On Sun, Jun 22, 2008 at 3:11 AM, Stuart Luppescu <slu@ccsr.uchicago.edu> wrote:
> On 木, 2008-06-19 at 11:57 +0200, David wrote:
>> > pg_restore: [tar archiver] could not open TOC file for input: No
>> such
>> > file or directory
>>
>> It sounds like the tar file is no longer being created.
>>
>> Try manually running the commands, and verify that the dump, restore,
>> and rsync still work correctly at each step.
>
> It looks like it's working:
> file postgresql_database-db-backup
> postgresql_database-db-backup: tar archive
>
> tar tfv postgresql_database-db-backup
> -rw------- 2048/1024 62764 2008-06-20 03:00 toc.dat
> -rw------- 2048/1024 4590795 2008-06-20 03:00 1765.dat
> -rw------- 2048/1024 391040 2008-06-20 03:00 1771.dat
> -rw------- 2048/1024 1262932 2008-06-20 03:00 1770.dat
> -rw------- 2048/1024 4472 2008-06-20 03:00 1769.dat
> -rw------- 2048/1024 5 2008-06-20 03:00 1796.dat
> -rw------- 2048/1024 5 2008-06-20 03:00 1787.dat
> -rw------- 2048/1024 5456 2008-06-20 03:00 1762.dat
> -rw------- 2048/1024 435981 2008-06-20 03:00 1767.dat
> -rw------- 2048/1024 54185 2008-06-20 03:00 1774.dat
> -rw------- 2048/1024 109 2008-06-20 03:00 1764.dat
>
> etc.
>
> The only weird thing is the ownership. Where did that
> 2048/1024 come from?
>

I'm not sure about that. But if your manual process is working, then
your script should work fine too. Try adding more logging and error
checking/capturing to your mirroring scripts to pinpoint where the
problem is occurring. Debug lines like 'echo "TESTING ($LINENO) -
<Description>"', which trace which parts of the scripts run and when,
can also help.

David.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general