Wednesday, August 6, 2008

Re: [BUGS] Oder by not working

Blanco, Jose wrote:
> Which as you can see is not really the desired behavior. I created a
> test table and loaded these values into a field of type text, and then
> issued the following query:
>
> select * from test order by 1;
> name
> --------
> Ta, A
> Tab, A
> Ta, Z
> (3 rows)

In PostgreSQL "ORDER BY 1" does refer to the first output column, so that is
not the problem here. The ordering you see comes from your locale's collation
rules: most non-C locales ignore punctuation such as the comma during the
primary comparison, which is why 'Ta, A' sorts before 'Tab, A'. With the C
locale you would get the byte-order result you probably expected.

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com

--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs

Re: [GENERAL] bytea encode performance issues

I ran the update, but now (obviously) it wants to vacuum again and
vacuum on that table took 9 hours yesterday.

Do the statistics change when changing the storage type? Meaning does it
really need to vacuum?

Thank you
Sim

Tom Lane wrote:
> Sim Zacks <sim@compulab.co.il> writes:
>> After the alter table, then I have to update each row with an
>> update dbmail_messageblks set messageblk=messageblk;
>> so that it uses the new storage.
>
> I wouldn't actually bet on that changing anything at all ...
> I'd try something like messageblk = messageblk || '' to make
> completely sure the value gets detoasted.
>
> regards, tom lane

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [SQL] How to creat tables using record ID in for loop

The function to create the tables is:

CREATE OR REPLACE FUNCTION cr_tbls_by_staid() RETURNS INTEGER AS
$$
DECLARE
  stid INTEGER;
  q TEXT;
BEGIN
  FOR stid IN SELECT staid FROM mytest LOOP
    q := 'CREATE TABLE "s' || stid || '" (staid varchar(50), val real, dt date);';
    RAISE NOTICE 'query is: %', q; -- for debugging
    EXECUTE q;
  END LOOP;
  RETURN 1;
END;
$$ LANGUAGE plpgsql;
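
If some station IDs ever need quoting as identifiers, it may be safer to build
the name with quote_ident() instead of hard-coded double quotes; a minimal
sketch of that variant of the assignment (quote_ident() adds quotes only when
they are actually needed):

q := 'CREATE TABLE ' || quote_ident('s' || stid::text)
     || ' (staid varchar(50), val real, dt date);';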

However, here is some advice regarding your question. First,
as Rangar noted, you could run into scalability issues when creating a
couple of million tables. You should probably re-examine the data
design of your DB. For example, you could create just one table like
this:

CREATE TABLE sta_descs (staid varchar(50), val real, dt date) WITHOUT OIDS;

and store all the records you want there. Such a table can easily be
joined with mytest on staid. Obviously, sta_descs.staid has to be
indexed.
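
For example, a minimal sketch based on the tables above (the WHERE value is
just an illustration):

CREATE INDEX sta_descs_staid_idx ON sta_descs (staid);

SELECT m.staid, s.val, s.dt
  FROM mytest m
  JOIN sta_descs s ON s.staid = m.staid
 WHERE m.staid = '123';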

Second, take a look at the SQL syntax documentation to figure out how tables
and fields can be named:
http://www.postgresql.org/docs/8.3/interactive/sql-syntax-lexical.html#SQL-SYNTAX-IDENTIFIERS

PS. I didn't test whether the function works properly, as I have no access to
a Postgres DB right now.

HTH
--
Best regards. Yuri.

--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql

Re: [GENERAL] looking for psql without server-installation

On 2008-08-06 17:38, Christian.Strobl@dlr.de wrote:
> what is the best way to get a psql client on a Linux system without
> the need for a server?

On Fedora/RedHat/CentOS or other yum based distribution:
# yum install postgresql

For Debian/Ubuntu or other apt/dpkg based distributions I think it will be:
# apt-get install postgresql-client
But I do not use Debian, so I may be wrong.

It does not have to be the same version as the server, so there is no need to
compile it yourself.

Regards
Tometzky
--
...although Eating Honey was a very good thing to do, there was a
moment just before you began to eat it which was better than when you
were...
Winnie the Pooh

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

[PERFORM] Unexpectedly Long DELETE Wait

Hi,

The command below has been running for ~700 minutes on one of our
PostgreSQL servers.

DELETE FROM mugpsreglog
WHERE NOT EXISTS (SELECT 1
FROM mueventlog
WHERE mueventlog.eventlogid = mugpsreglog.eventlogid);

Seq Scan on mugpsreglog (cost=0.00..57184031821394.73 rows=6590986 width=6)
Filter: (NOT (subplan))
SubPlan
-> Seq Scan on mueventlog (cost=0.00..4338048.00 rows=1 width=0)
Filter: (eventlogid = $0)

Here is some information about related tables:

# SELECT pg_relation_size('emove.mueventlog') / pow(1024, 2);
?column?
----------
11440
(1 row)

# SELECT pg_relation_size('emove.mugpsreglog') / pow(1024, 2);
?column?
-------------
631.8046875
(1 row)

And there aren't any constraints (FK/PK), triggers, indexes, etc. on either
of the tables. (We're in the middle of a migration; many DELETE commands
similar to the one above get executed before the constraints are introduced.)

Here are related postgresql.conf lines:

shared_buffers = 512MB
max_prepared_transactions = 0
work_mem = 8MB
maintenance_work_mem = 512MB
max_fsm_pages = 204800
max_fsm_relations = 8192
vacuum_cost_delay = 10
wal_buffers = 2MB
checkpoint_segments = 128
checkpoint_timeout = 1h
checkpoint_completion_target = 0.5
checkpoint_warning = 1min
effective_cache_size = 5GB
autovacuum = off

And system hardware & software profile is:

OS : Red Hat Enterprise Linux ES release 4 (Nahant Update 5)
PostgreSQL: 8.3.1
Filesystem: GFS (IBM DS4700 SAN)
CPU : 4 x Quad Core Intel(R) Xeon(TM) CPU 3.00GHz
Memory : 8GB

Does anybody have an idea what might be causing the problem? Any
suggestions to improve the performance during such bulk DELETEs?


Regards.

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

Re: [GENERAL] bytea encode performance issues

> I don't quite follow that...the whole point of utf8 encoded database
> is so that you can use text functions and operators without the bytea
> treatment. As long as your client encoding is set up properly (so
> that data coming in and out is computed to utf8), then you should be
> ok. Dropping to ascii is usually not the solution. Your data
> inputting application should set the client encoding properly and
> coerce data into the unicode text type...it's really the only
> solution.
>
Email does not always follow a specific character set. I have tried
converting incoming data to UTF-8 and it does not always work.
We receive Hebrew emails which come mostly in two flavors, UTF-8 and
windows-1255. Unfortunately, they are not compatible with one another.
SQL_ASCII and ASCII are different, as someone on the list pointed out to
me. According to the documentation, SQL_ASCII makes no assumption about
encoding, so you can throw in any encoding you want.
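
For what it's worth, keeping the raw bytes in bytea lets you pick the encoding
per message when reading it back into text; a minimal sketch with made-up
table and column names (assumes 8.3's convert_from() and a database encoding,
such as UTF8, that can represent the result):

SELECT convert_from(rawmail, 'UTF8')    FROM mails WHERE id = 1;  -- a UTF-8 message
SELECT convert_from(rawmail, 'WIN1255') FROM mails WHERE id = 2;  -- a windows-1255 message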

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

[GENERAL] compiling Xpath functions in PostgreSQL 8.3.3

Hi,

I was trying to install PostgreSQL with XML data type support. My OS is
Ubuntu, kernel 2.6.15. The PostgreSQL version is 8.3.3. I have libxml2 and
libxslt installed. I configured using
./configure --with-libxml --with-libxslt
configure went fine,
'make all' went fine,
'make install' went fine.

When I check the functions available inside PostgreSQL using '\df x*', I
can only see the xpath() function, which I was able to use
properly. However, no other functions like xpath_bool() are
available.

When I run '\i /usr/local/pgsql/share/pgxml.sql' in Postgres, I get an error
saying that a file or directory named 'MODULE_PATHNAME' doesn't exist. How do
I get this MODULE_PATHNAME to exist?

What is the easiest way to get the XML functionality (mainly the xpath part,
like xpath_bool etc.) in PostgreSQL on Linux? I mean, which Linux flavours or
PostgreSQL versions are recommended? I couldn't find documentation
online for getting XML working in PostgreSQL (other than configuring with
libxml).

Thanks in advance,
SagAr.
--
View this message in context: http://www.nabble.com/compiling-Xpath-functions-in-PostgreSQL-8.3.3-tp18858857p18858857.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

[GENERAL] problem using a xpath function

Hello everyone,
I'm working with the PostgreSQL XML extension.
My system is Windows XP Professional SP2 with
PostgreSQL 8.3.3, and I am sure the xml2 module is installed.

I have a two-column table:

(table creation statement: CREATE TABLE etc. etc.)

I have a problem using an xpath function.
My query is:

SELECT id_autori , xpath ('/Authority/Nome', testo)
FROM autori
WHERE xpath_bool('/Authority[Nome="ABELARDO"]', testo) ;

I get this error:
ERROR: function xpath_nodeset(unknown, xml) does not exist at character
69
HINT: No function matches the given name and argument types. You might need
to add explicit type casts.


Any Idea ?
thanks
--
View this message in context: http://www.nabble.com/problem-using-a-xpath-function-tp18853909p18853909.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [NOVICE] COPY TO Question?

On Aug 6, 2008, at 7:59 AM, Marc Abbott wrote:

> I am currently running PostgreSQL 8.1.3 on i686-pc-linux-gnu,
> compiled by GCC gcc (GCC) 3.2.3 20030502 (Red Hat Linux 3.2.3-47).
> What I need to do is COPY FROM a file into a temp table (This I know
> how to do), then select and manipulate the data and write it back
> out to groups of files. I can insert the manipulated data into
> another table and then use COPY TO a file but what I would like to
> do is open a file with a name and write pipe delimited lines to it
> and close it and reopen the next file, write and so on ….. pretty
> similar to Oracle's utl_file.fopen, utl_file.fclose,
> utl_file.put_line etc. Is this available in postgres?

This was added in 8.2, so it is not available in 8.1, but recent versions of
PostgreSQL support COPY from an arbitrary SELECT query, so you don't need
to create more tables.

For example:

COPY (select * from my_table) TO 'path/to/file';
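
COPY also takes options, so the pipe-delimited output you described could look
roughly like this (a minimal sketch; the path is only an example, and COPY ...
TO writes the file on the server, so it needs appropriate permissions there):

COPY (SELECT * FROM my_table)
  TO '/tmp/my_table.txt'
  WITH DELIMITER '|' NULL '';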

I don't think there are any other file manipulation capabilities in
PostgreSQL without using an untrusted procedural language.


>
> Secondly, is it possible to FTP the same files to a different
> server? Would I need to write some form of shell script and execute
> it? If so how would I go about doing this from the DB?
>

You would need to use one of the untrusted procedural languages in
order to do this directly from the database.

John DeSoi, Ph.D.

--
Sent via pgsql-novice mailing list (pgsql-novice@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-novice

Re: [pgadmin-support] Bug/Feature Request: pgAdmin should show "context" part of error message for problems with COPY command

On Tue, Aug 5, 2008 at 2:08 PM, kaeptn haddock <kaeptn.haddock@gmail.com> wrote:
> When issuing a COPY command like this one
>
> --------------------
> COPY tablename FROM '/var/www/hdberat.tab' WITH NULL AS '';
> --------------------
>
>
> The error-message in pgadminIII is quite short
>
> ********** Error ***********************************************
> FEHLER: fehlende Daten für Spalte »termin_dauer_eingabe«
> SQL state: 22P04
> -----------------------------------------------------------------
>
> (For those who don't understand German: it says: missing data for column
> xy). In PHP-pgadmin the errormessage is much more meaningful, because it
> adds the CONTEXT-line, where it actually tells me the number of the line
> ("Zeile") in the source-file, where the error occurred:
>
>
> ****************************************************************
> FEHLER: fehlende Daten für Spalte »termin_dauer_eingabe«
> CONTEXT: COPY fm_hdberat, Zeile 10421: »admin1 Gründler Pius archiviert "Texteingabe" "Texteingabe" "Texteingabe" "Texteingabe" "Textein...«
> -------------------------------------------------------------------
>
> It'd be a big help, if pgadmin was as verbose as php-pgadmin.

pgAdmin does include the context line, if the server gives us one:

-- Executing query:

select foo();

ERROR: division by zero
CONTEXT: PL/pgSQL function "foo" line 4 at assignment

********** Error **********

ERROR: division by zero
SQL state: 22012
Context: PL/pgSQL function "foo" line 4 at assignment


--
Dave Page
EnterpriseDB UK: http://www.enterprisedb.com

--
Sent via pgadmin-support mailing list (pgadmin-support@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgadmin-support

Re: [HACKERS] plan invalidation vs stored procedures

2008/8/6 Hannu Krosing <hannu@krosing.net>:
> On Wed, 2008-08-06 at 12:13 +0200, Pavel Stehule wrote:
>> 2008/8/6 Hannu Krosing <hannu@krosing.net>:
>> > On Tue, 2008-08-05 at 16:17 +0200, Pavel Stehule wrote:
> ..
>> >> you cannot change header of function. It's same as change C header of
>> >> function without complete recompilation.
>> >
>> > SQL is not C.
>> >
>> > You don't have to recompile the whole SQL database when you add columns
>> > to tables, so why should you need to do it, when you add a column to
>> > table-returning function ?
>> >
>>
>> I think it's possible - but it needs a completely new dictionary
>> of dependencies (some dependencies are dynamic - polymorphic
>> functions), so it's a difficult task.
>
> I think that you can safely err on the side of caution, that is, save
> more dependencies than are actually affected.
>

> Or you even add dependencies from inside the pl, either at compile/check
> or run time (cached of course), so that you hit the exact right function
> oid and can reuse the function lookup already done.
>
Actually, functions can't see into SQL statements - but I thought there
could be a hook on new items in the plan cache, so there could be some
registration step that tries to analyze all functions called from the plan
and adds some info to a buffer. There is a lot of work there. Someone has to
write it :)

Pavel


> -----------------
> Hannu
>
>
>

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [GENERAL] bytea encode performance issues

Tom Lane wrote:

> We've seen complaints about toast fetch time before. I don't think
> there's any really simple solution. You could experiment with disabling
> compression (SET STORAGE external) but I'd bet on that being a net loss
> unless the data is only poorly compressible.

I am trying it with External and then I'll try it with Plain.
However, this is a hugely long process.
After the alter table, then I have to update each row with an
update dbmail_messageblks set messageblk=messageblk;
so that it uses the new storage.
After that I have to vacuum analyze (which is taking over an hour so
far, most of it on toast).
After this is complete, I'll test the examples again and go through the
process using Plain.
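
For reference, the whole sequence looks roughly like this (a minimal sketch;
appending the empty string, as suggested elsewhere in the thread, forces each
value to be detoasted and stored again under the new storage setting):

ALTER TABLE dbmail_messageblks ALTER COLUMN messageblk SET STORAGE EXTERNAL;
UPDATE dbmail_messageblks SET messageblk = messageblk || '';
VACUUM ANALYZE dbmail_messageblks;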

> If the table is not updated very often, it's possible that doing a
> CLUSTER every so often would help. I'm not 100% sure but I think that
> would result in the toast table being rewritten in the same order as the
> newly-built main table, which ought to cut down on the cost of fetching.

This is an email table; rows are never updated, only constantly inserted.
Deletes only happen once a week. I'll try CLUSTER after I try the
storage changes.


> Also, what database encoding are you using? I note from the CVS logs
> that some post-8.2 work was done to make LIKE faster in multibyte
> encodings. (Though if you were doing the LIKE directly in bytea, that
> wouldn't matter ... what was the reason for the encode() call again?)
>

We are using UTF-8, and I am testing SQL-ASCII at the moment. DBMail is
a pre-built application, so until I am ready to start playing with its
internals I don't really have a choice about a number of its features.
The reason for the bytea is the multiple encodings; I have
suggested SQL_ASCII to them, and then it would be possible to use a
text datatype.
I don't know the reason for using the encode, I assumed that it was
because bytea wouldn't take a LIKE, but I see that I was mistaken. It
could be that in an earlier release LIKE was not supported against
bytea, but I don't know that for sure.


Thanks for your help
Sim

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [HACKERS] Status of DISTINCT-by-hashing work

Tom Lane wrote:
> I've pretty much finished the project I got a bee in my bonnet about
> last week, which is to teach SELECT DISTINCT how to (optionally) use
> hashing for grouping in the same way that GROUP BY has been able to do
> for awhile.
>
> There are still two places in the system that hard-wire the use of
> sorting for duplicate elimination:
>
> * Set operations (UNION/INTERSECT/EXCEPT)
>
> * Aggregate functions with DISTINCT
>
> I'm thinking of trying to fix set operations before I leave this topic,
> but I'm not sure it's worth the trouble to change DISTINCT aggregates.
> They'd be a lot more work (since there's no executor infrastructure
> in place that could be used) and the return on investment seems low.
>
> Comments?
>
> regards, tom lane
>
>

I feel exactly the same way.
DISTINCT has been something people have wanted to see fixed for a while, but
set operations are nothing I would really worry about;
what we have now is absolutely fine.

Given the list of more important issues, I'd vote for something else.

best regards,

hans

--
Cybertec Schönig & Schönig GmbH
PostgreSQL Solutions and Support
Gröhrmühlgasse 26, A-2700 Wiener Neustadt
Tel: +43/1/205 10 35 / 340
www.postgresql-support.de, www.postgresql-support.com


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[ADMIN] Re: [HACKERS] pg_restore -d cipa /cipa/RAJASTHAN/RAJASTHAN/CIPABACKUP01_08_2008.TAR pg_restore: [archiver] out of memory

Amit jain wrote:
> Dear All,
>
> I am having a backup file in tar format. While restoring it through
> pg_restore instantly i am getting an following error.
> *Command - pg_restore -d cipa
> /cipa/RAJASTHAN/RAJASTHAN/CIPABACKUP01_08_2008.TAR
> Error - pg_restore: [archiver] out of memory
> *I have increased shmmax , maintenace work memory but still i am
> getting the same error.
>
> 2. I have make all log_parameter to debug5 but still getting a single
> error -line while restoring not getting any log or debug info about it.
>
> Kindly help me out from this absurd behaviour of postgreSQL
> restoration process.
> Any help would be highly appreciated. Thanx in advance
>
>

Please do not cross-post to multiple lists like this. (Also, adding
addresses of individual hackers is not good netiquette). The -hackers
list is quite inappropriate for usage questions.

It looks like the problem you have is that the client is running out of
memory, so changing the server's memory settings seems unlikely to have
any beneficial effect. Maybe you need to look at your ulimit settings.

cheers

andrew

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin

[GENERAL] Invocation overhead for procedural languages

Hi all, I think I've read somewhere in the documentation that the
invocation of functions written in procedural languages (with the
exception of plpgsql) incurs a performance hit due to the call into the
language interpreter. Is that correct, or am I completely off track?

Thank you in advance
--
Giorgio Valoti
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [SQL] Case Insensitive searches

On Wednesday 06 August 2008 07:59, Rafael Domiciano wrote:
> I have read the article... tnks, very helpful.
>
> But, can I create a index using function like "substring"? I would like to
> create something like this:

Actually, Richard Broersma is the one who commented on that approach. I have
never done this but I have read about it. I'm sure it can be done.
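
For what it's worth, a minimal sketch of that expression index (the extra
parentheses around the expression are required, '200801' is only an example
value, and the query has to repeat the same expression for the index to be
usable):

CREATE INDEX indtest_01 ON table_01
    ((SUBSTRING(month_year, 3, 4) || SUBSTRING(month_year, 1, 2)));

SELECT *
  FROM table_01
 WHERE SUBSTRING(month_year, 3, 4) || SUBSTRING(month_year, 1, 2) = '200801';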

>
> CREATE INDEX indtest_01 ON table_01
> ((SUBSTRING(month_year, 3, 4) || SUBSTRING(month_year, 1, 2))
>
> 2008/8/4 Terry Lee Tucker <terry@chosen-ones.org>
>
> > On Monday 04 August 2008 11:09, Frank Bax wrote:
> > > Terry Lee Tucker wrote:
> > > > On Monday 04 August 2008 10:05, Richard Broersma wrote:
> > > >> On Mon, Aug 4, 2008 at 6:54 AM, Mike Gould <mgould@allcoast.net>
> >
> > wrote:
> > > >>> In some db's if you
> > > >>> use a lower() or upr() it will always do a table scan instead of
> >
> > using
> >
> > > >>> a index
> > > >>
> > > >> True, this would also happen in PostgreSQL. However, you can
> > > >> overcome this by creating a "functional" index:
> >
> > http://www.postgresql.org/docs/8.3/interactive/indexes-expressional.html
> >
> > > >> This way all expression using where lower( column ) = 'a'. will
> > > >> always use an index scan.
> > > >
> > > > What about using the operator, ~* ?
> > > >
> > > > Does that cause a table scan as well?
> > >
> > > Whether or not any query uses an index scan or seq scan depends on many
> > > factors and is not always easily predictable.
> > >
> > > Richard's statement about "will always use an index scan" is not
> > > universally true. If the table is very small; a index scan is NOT
> > > used. Table statistics could also indicate a seq scan is more efficient
> > > (suppose 99% of rows had column='a').
> > >
> > > The ~* operator is very likely to scan the entire table because it will
> > > look for 'A' anywhere in the column (and will therefore match 'Joanne';
> > > and I doubt that there is special code to handle case where length of
> > > argument is exactly the same as column. However; ~* '^a' which anchors
> > > search to first character is perhaps more likely to use an index scan.
> > >
> > > Frank
> >
> > Frank,
> >
> > Thanks for the response. Actually, from within the applicaion, we use ~*
> > and
> > it is anchored with whatever they've typed in the widget as search
> > criteria.
> >
> > Anyway, thanks for the helpful response...
> > --
> > Terry Lee Tucker
> > Turbo's IT Manager
> > Turbo, division of Ozburn-Hessey Logistics
> > 2251 Jesse Jewell Pkwy NE
> > Gainesville, GA 30501
> > Tel: (336) 372-6812 Fax: (336) 372-6812 Cell: (336) 404-6987
> > terry@turbocorp.com
> > www.turbocorp.com
> >
> > --
> > Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
> > To make changes to your subscription:
> > http://www.postgresql.org/mailpref/pgsql-sql

--
Terry Lee Tucker
Turbo's IT Manager
Turbo, division of Ozburn-Hessey Logistics
2251 Jesse Jewell Pkwy NE
Gainesville, GA 30501
Tel: (336) 372-6812 Fax: (336) 372-6812 Cell: (336) 404-6987
terry@turbocorp.com
www.turbocorp.com

--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql

[HACKERS] pg_restore -d cipa /cipa/RAJASTHAN/RAJASTHAN/CIPABACKUP01_08_2008.TAR pg_restore: [archiver] out of memory

Dear All,

 I have a backup file in tar format. While restoring it through pg_restore, I instantly get the following error.
Command - pg_restore -d cipa /cipa/RAJASTHAN/RAJASTHAN/CIPABACKUP01_08_2008.TAR
Error  - pg_restore: [archiver] out of memory
I have increased shmmax and maintenance work memory, but I am still getting the same error.

2. I have set all log parameters to debug5, but I still only get a single error line while restoring, and no log or debug info about it.

Kindly help me out of this absurd behaviour of the postgreSQL restoration process.
Any help would be highly appreciated. Thanks in advance

Thanks n Regards
Amit jain
9818450022

Re: [PERFORM] pg_dump error - out of memory, Failed on request of size 536870912

Marcin Citowicki wrote:
> Hello,
>
> I forgot to add - all those 'out of memory' errors happen when backup db
> is trying to create index. Every 'CREATE INDEX' operation is followed by
> 'out of memory' error.

Are you sure that your OS (or ulimit) is able to support a
maintenance_work_mem setting that large? Try reducing it to, say, 128MB for
a start and try again.
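
For example, just for the session that builds the indexes (a minimal sketch;
128MB is only a starting value to test with):

SET maintenance_work_mem = '128MB';
SHOW maintenance_work_mem;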


Stefan

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

Re: [PERFORM] pg_dump error - out of memory, Failed on request of size 536870912

Hello

You have some broken data files.

Try using http://svana.org/kleptog/pgsql/pgfsck.html - but DO A
BACKUP OF THE DATABASE CLUSTER FILES FIRST. pgfsck doesn't support 8.3, but
it can help you find corrupted rows. Then you have to remove those
rows, that's all. Afterwards check your hardware - your server probably has
problems with its memory or controller.

regards
Pavel Stehule


2008/8/6 Marcin Citowicki <marcin.citowicki@m4n.nl>:
> Hello,
>
> I forgot to add - all those 'out of memory' errors happen when backup db is
> trying to create index. Every 'CREATE INDEX' operation is followed by 'out
> of memory' error.
> Thanks!
>
> Marcin
>
>
> Marcin Citowicki wrote:
>>
>> Hello,
>>
>> We have a bit of a problem with the daily db backup. At the moment the
>> backup db size is half of the main db, and we have the following errors in
>> the log file:
>>
>> ERROR: out of memory
>> DETAIL: Failed on request of size 536870912
>>
>> (There's a bunch of those.)
>>
>> There were some 'permission denied for language c' errors earlier in the
>> log, but then the backup continued for a while.
>>
>> This is the command we're running:
>>
>> pg_dump -c -h $DB_HOST -U $DB_USER $DB_NAME | PGOPTIONS='-c
>> maintenance_work_mem=1500MB -c sort_mem=64MB'
>> psql --quiet -U $DB_DUMP_USER -h $DB_DUMP_HOST $DB_DUMP_NAME;
>>
>> Both databases (main and backup) are Postgres 8.3.
>>
>> I'd appreciate any suggestions on how to fix this problem. Also, if you
>> have a better idea on running a daily backup please let me know.
>> Thanks!
>>
>> Marcin
>
>
> --
> Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-performance
>
>

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

Re: [HACKERS] plan invalidation vs stored procedures

On Wed, 2008-08-06 at 12:13 +0200, Pavel Stehule wrote:
> 2008/8/6 Hannu Krosing <hannu@krosing.net>:
> > On Tue, 2008-08-05 at 16:17 +0200, Pavel Stehule wrote:
..
> >> you cannot change header of function. It's same as change C header of
> >> function without complete recompilation.
> >
> > SQL is not C.
> >
> > You don't have to recompile the whole SQL database when you add columns
> > to tables, so why should you need to do it, when you add a column to
> > table-returning function ?
> >
>
> I think it's possible - but it needs a completely new dictionary
> of dependencies (some dependencies are dynamic - polymorphic
> functions), so it's a difficult task.

I think that you can safely err on the side of caution, that is, save
more dependencies than are actually affected.

Or you could even add dependencies from inside the PL, either at compile/check
or run time (cached, of course), so that you hit exactly the right function
oid and can reuse the function lookup already done.

-----------------
Hannu

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [NOVICE] How to copy a schema under another name in same database

Just to close the loop on my initial request: in the end we decided to go
with the approach of renaming the schema to the new name, dumping it, renaming
the schema back to its original name, and then restoring the dump, which
recreates the schema under the new name.
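
For anyone who wants to script the same approach, it boils down to something
like this (a minimal sketch; schema, database and file names are examples, and
the pg_dump/psql calls are shell steps shown here as comments):

ALTER SCHEMA myschema RENAME TO myschema_copy;
-- shell step: pg_dump -n myschema_copy -f myschema_copy.sql mydb
ALTER SCHEMA myschema_copy RENAME TO myschema;
-- shell step: psql -f myschema_copy.sql mydb   (recreates everything as myschema_copy)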

Thanks to all for your suggestions, I will definitely keep the proposed
solution below in my notes!

---------------------------------
Daniel Gour
Adacel inc.

-----Original Message-----
From: Obe, Regina [mailto:robe.dnd@cityofboston.gov]
Sent: Wednesday, July 30, 2008 9:25 AM
To: Daniel Gour
Cc: pgsql-novice@postgresql.org
Subject: RE: [NOVICE] How to copy a schema under another name in same
database

Daniel,

I had a typo in the code below that I just noticed. It should, I think, be as
follows - I'm sure I screwed up in other ways, but hopefully you get the idea.


FOR tbltocopy IN(SELECT table_name
FROM information_schema.tables
WHERE table_type = 'BASE TABLE' AND
table_schema = param_source_schema ORDER BY table_name)

EXECUTE('CREATE TABLE ' || param_dest_schema || '.' || tbltocopy
|| '( LIKE ' || param_source_schema || '.' || tbltocopy || ' INCLUDING
DEFAULTS INCLUDING CONSTRAINTS INCLUDING INDEXES );
INSERT INTO ' || param_dest_schema || '.' || tbltocopy || ' SELECT *
FROM ' || param_source_schema || '.' || tbltocopy || ';');

If you need to copy views as well, that's a bit trickier since I assume
you would want your views to reference the specific schema objects
within the destination schema not the source schema.

For that take a look at information_schema.views -- it has a field
called view_definition.

You could use that to build the view and then macro replace the
references to the schema with the new schema I guess.

Something like
DECLARE vwdef information_schema.views --this part is iffy not sure if
a type is created for information_schema stuff

FOR vwdef IN(SELECT *
FROM information_schema.views
WHERE
table_schema = param_source_schema ORDER BY table_name)

EXECUTE('CREATE VIEW ' || param_dest_schema || '.' ||
vwdef.table_name
|| ' AS ' || REPLACE(vwdef.view_definition, param_source_schema || '.',
param_dest_schema || '.')
|| ';');

Hope that helps,
Regina

-----Original Message-----
From: Daniel Gour [mailto:Daniel.Gour@adacel.com]
Sent: Wednesday, July 30, 2008 9:12 AM
To: Obe, Regina
Cc: pgsql-novice@postgresql.org
Subject: RE: [NOVICE] How to copy a schema under another name in same
database

Thanks for the information! That looks promising, I will attempt to
implement it this week and let you know.

---------------------------------
Daniel Gour
Adacel Inc.

-----Original Message-----
From: Obe, Regina [mailto:robe.dnd@cityofboston.gov]
Sent: Wednesday, July 30, 2008 7:14 AM
To: Daniel Gour; pgsql-novice@postgresql.org
Subject: RE: [NOVICE] How to copy a schema under another name in same
database

Never had a need for this. One thought that comes to mind write a
plpgsql function that takes in name of new schema and old schema and
does something like below

--Create new tables in new schema

FOR tbltocopy IN(SELECT table_name
FROM information_schema.tables
WHERE table_type = 'BASE TABLE' AND
table_schema = param_source_schema ORDER BY table_name)

EXECUTE('CREATE TABLE ' || param_dest_schema || '.' || tbltocopy
|| '( LIKE ' || param_source_schema || '.' || tbltocopy || ' INCLUDING
DEFAULT CONSTRAINTS INDEXES ');
INSERT INTO ' || param_dest_schema || '.' || tbltocopy || ' SELECT *
FROM ' param_source_schema || '.' || tbltocopy || ';');

NEXT;


Hope that helps,
Regina

-----Original Message-----
From: pgsql-novice-owner@postgresql.org on behalf of Daniel Gour
Sent: Tue 7/29/2008 9:20 AM
To: pgsql-novice@postgresql.org
Subject: [NOVICE] How to copy a schema under another name in same
database

Hello! I have a PostgreSQL (8.3.3, WinXP + linux) database with
multiple
schemas. I would like, on a regular basis, to be able to copy the
structure
and data of one schema under a new schema, using of course a different
name.
What would be the easiest way?

- I have seen the TODO entry "Add CREATE SCHEMA ... LIKE that
copies
a schema", so I guess an easy solution (a single pgsql command) is not
yet
available...

- I guess the alternative is to use pg_dump to dump a single
schema
and use pg_restore, but how can I restore the dumped information under
another schema? I would like to avoid dumping it as an SQL script and
having to modify it manually, because this will be a regular operation that I
would like to automate in my application.

Thanks in advance for any insight you can provide!

---------------------------------
Daniel Gour
Adacel Inc.




--
Sent via pgsql-novice mailing list (pgsql-novice@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-novice

Re: [BUGS] BUG #4339: The postgreSQL service stops abnormally

Hi,

Bhaskar Sirohi wrote:
> Also there is no OOM killer application on the windows 2003 server.

I also didn't think so, but who told the server to shut down, then?

> Can there be any another reason why the service failed ?

Well, yes, sure. But if you don't answer the important questions, it's
hard to figure out what's wrong.

Regards

Markus Wanner


--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs

[NOVICE] COPY TO Question?

Hi

 

I am currently running PostgreSQL 8.1.3 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.2.3 20030502 (Red Hat Linux 3.2.3-47). What I need to do is COPY FROM a file into a temp table (This I know how to do), then select and manipulate the data and write it back out to groups of files. I can insert the manipulated data into another table and then use COPY TO a file but what I would like to do is open a file with a name and write pipe delimited lines to it and close it and reopen the next file, write and so on ….. pretty similar to Oracle’s utl_file.fopen, utl_file.fclose, utl_file.put_line etc. Is this available in postgres?

 

Secondly, is it possible to FTP the same files to a different server? Would I need to write some form of shell script and execute it? If so how would I go about doing this from the DB?

 

Your help is appreciated.

 

Regards

Marc

Re: [SQL] Case Insensitive searches

I have read the article... tnks, very helpful.

But, can I create a index using function like "substring"? I would like to create something like this:

CREATE INDEX indtest_01 ON table_01
((SUBSTRING(month_year, 3, 4) || SUBSTRING(month_year, 1, 2))

2008/8/4 Terry Lee Tucker <terry@chosen-ones.org>
On Monday 04 August 2008 11:09, Frank Bax wrote:
> Terry Lee Tucker wrote:
> > On Monday 04 August 2008 10:05, Richard Broersma wrote:
> >> On Mon, Aug 4, 2008 at 6:54 AM, Mike Gould <mgould@allcoast.net> wrote:
> >>> In some db's if you
> >>> use a lower() or upr() it will always do a table scan instead of using
> >>> a index
> >>
> >> True, this would also happen in PostgreSQL.  However, you can overcome
> >> this by creating a "functional" index:
> >>
> >> http://www.postgresql.org/docs/8.3/interactive/indexes-expressional.html
> >>
> >> This way all expression using where lower( column ) = 'a'. will always
> >> use an index scan.
> >
> > What about using the operator, ~*  ?
> >
> > Does that cause a table scan as well?
>
> Whether or not any query uses an index scan or seq scan depends on many
> factors and is not always easily predictable.
>
> Richard's statement about "will always use an index scan" is not
> universally true.  If the table is very small; a index scan is NOT used.
>   Table statistics could also indicate a seq scan is more efficient
> (suppose 99% of rows had column='a').
>
> The ~* operator is very likely to scan the entire table because it will
> look for 'A' anywhere in the column (and will therefore match 'Joanne';
> and I doubt that there is special code to handle case where length of
> argument is exactly the same as column.  However; ~* '^a' which anchors
> search to first character is perhaps more likely to use an index scan.
>
> Frank

Frank,

Thanks for the response. Actually, from within the applicaion, we use ~* and
it is anchored with whatever they've typed in the widget as search criteria.

Anyway, thanks for the helpful response...
--
Terry Lee Tucker
Turbo's IT Manager
Turbo, division of Ozburn-Hessey Logistics
2251 Jesse Jewell Pkwy NE
Gainesville, GA 30501
Tel: (336) 372-6812  Fax: (336) 372-6812  Cell: (336) 404-6987
terry@turbocorp.com
www.turbocorp.com

--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql

Re: [BUGS] BUG #4339: The postgreSQL service stops abnormally

Hello Markus,

Thanks for the reply.

For the past week the PostgreSQL service has been running absolutely fine :-)

Also there is no OOM killer application on the Windows 2003 server. I haven't made any changes on the machine since the service first failed.

Nothing in the event logs also.

Can there be any another reason why the service failed ?

 --
Thanks and Regards

Bhaskar Sirohi
bhaskar.sirohi@druvaa.com
www.druvaa.com
(C) +91 986 094 6619


On Tue, Aug 5, 2008 at 5:50 PM, Markus Wanner <markus@bluegap.ch> wrote:
Hi,

Bhaskar Sirohi wrote:
> ...

2008-07-30 15:05:01 EDT LOG:  checkpoints are occurring too frequently (28
seconds apart)
2008-07-30 15:05:01 EDT HINT:  Consider increasing the configuration
parameter "checkpoint_segments".
2008-07-30 15:13:34 EDT LOG:  checkpoints are occurring too frequently (29
seconds apart)
2008-07-30 15:13:34 EDT HINT:  Consider increasing the configuration
parameter "checkpoint_segments".
2008-07-30 15:18:50 EDT LOG:  checkpoints are occurring too frequently (28
seconds apart)
2008-07-30 15:18:50 EDT HINT:  Consider increasing the configuration
parameter "checkpoint_segments".
2008-07-30 15:19:21 EDT LOG:  received fast shutdown request

These log lines look like your database is touching lots of tuples, requiring it to checkpoint frequently. Then it receives a fast shutdown request - from whoever.

What causes the workload? Did you check memory usage? (And uh.. does Windows 2003 Server have an OOM Killer or some such?)

Regards

Markus Wanner




[pgsql-de-allgemein] Re: [pgsql-de-allgemein] Re: [pgsql-de-allgemein] Re: [pgsql-de-allgemein] RE: [pgsql-de-allgemein] Checking in a function whether a row exists

Hi,

Bernd Helmle wrote:
> Row locks in PostgreSQL are implemented via the XID and stored directly
> in the tuple header. All of this is very lightweight, and it works the
> same way in SERIALIZABLE mode, except that at that moment it doesn't
> block but throws a transaction error if XMAX has already been modified
> elsewhere.

Exactly.

> The differences here mainly concern the way snapshots are created;
> PostgreSQL is implemented very intelligently here, but the actual
> performance differences between READ COMMITTED and SERIALIZABLE arise
> from the way the application has to deal with each of them.

Correct.

A READ COMMITTED application should(!), however, use SELECT FOR UPDATE
more often in order to lock correctly. This also modifies the tuple
header and sets xmax to the current xid up front in order to lock the
tuple. This is called pessimistic locking, because it implies the
expectation of a conflict.

But if there is no conflict, that is an unnecessary step, one which you
can usually skip under SERIALIZABLE. Note that this step may even cause
disk accesses which, in that case, are just as unnecessary.

> Purely from the database engine's point of view, SERIALIZABLE should
> even be faster, since it has to create fewer snapshots!

Compared with READ COMMITTED, SERIALIZABLE is faster when few conflicts
are to be expected - less because of the snapshots than because of
optimistic locking. The more often conflicts occur, the sooner READ
COMMITTED will catch up or even overtake it.
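
To make the difference concrete, a minimal sketch (table, column and values
are invented for illustration):

-- pessimistic locking under READ COMMITTED: take the row lock up front
BEGIN;
SELECT balance FROM accounts WHERE id = 42 FOR UPDATE;
UPDATE accounts SET balance = balance - 100 WHERE id = 42;
COMMIT;

-- optimistic approach under SERIALIZABLE: no explicit row lock; if a
-- concurrent transaction modified the row first, the UPDATE fails with a
-- serialization error and the application retries the whole transaction
BEGIN ISOLATION LEVEL SERIALIZABLE;
UPDATE accounts SET balance = balance - 100 WHERE id = 42;
COMMIT;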

Regards

Markus Wanner


--
Sent via pgsql-de-allgemein mailing list (pgsql-de-allgemein@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-de-allgemein

Re: [HACKERS] unable to build libpq on Win 2003 (32 bit)

Hi Hiroshi,

Thanks for your reply. Sorry I may have missed something but I'm using
Visual Studio 2005.
Anyway I'm sending you the dir Release output:

C:\src\PostgreSQL\postgresql-8.3.0\src\interfaces\libpq\Release>dir
Volume in drive C has no label.
Volume Serial Number is 40C0-3D67

Directory of
C:\src\PostgreSQL\postgresql-8.3.0\src\interfaces\libpq\Release

08/06/2008 01:47 PM <DIR> .
08/06/2008 01:47 PM <DIR> ..
08/05/2008 07:14 PM 776 libpq.res
1 File(s) 776 bytes
2 Dir(s) 16,525,180,928 bytes free

As you can see only one file is here - libpq.res.
Any suggestions to get this fixed?

I'm looking forward to your reply.

Many thanks,
Nikolay.

-----Original Message-----
From: pgsql-hackers-owner@postgresql.org
[mailto:pgsql-hackers-owner@postgresql.org] On Behalf Of Hiroshi Saito
Sent: Wednesday, August 06, 2008 3:07 AM
To: Nikolae Shevchenco (md); pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] unable to build libpq on Win 2003 (32 bit)

Hi.

Sorry for the late reaction.
VC++2008 is not officially supported. However, it did build for me, and
I could not reproduce the problem:
http://winpg.jp/~saito/pg_work/pg8.3.3_nmake_VC++2008.txt
It seems that there is some version difference.

Please show "dir Release."

Regards,
Hiroshi Saito

----- Original Message -----
From: Nikolae Shevchenco (md)
To: pgsql-hackers@postgresql.org
Sent: Wednesday, August 06, 2008 2:33 AM
Subject: [HACKERS] unable to build libpq on Win 2003 (32 bit)


Hello everybody,

I am trying to build libpq.dll from source on a Win 2003 system;
the make file is
attached. I am using Microsoft Visual Studio 8, and below are the command
I am running and
its outcome:

C:\src\PostgreSQL\postgresql-8.3.0\src\interfaces\libpq>nmake /f
win32.mak /I

Microsoft (R) Program Maintenance Utility Version 8.00.50727.42
Copyright (C) Microsoft Corporation. All rights reserved.

Building the Win32 static library...

Using default OpenSSL Include directory: C:\OpenSSL\include
Using default OpenSSL Library directory: C:\OpenSSL\lib\VC
Using default Kerberos Include directory: C:\kfw-2.6.5\inc
Using default Kerberos Library directory: C:\kfw-2.6.5\lib\i386
link.exe -lib @C:\DOCUME~1\MD_NSH~1\LOCALS~1\Temp\nm10D.tmp
link.exe @C:\DOCUME~1\MD_NSH~1\LOCALS~1\Temp\nm10E.tmp
mt -manifest .\Release\libpq.dll.manifest
-outputresource:.\Release\libpq.dll;2
Microsoft (R) Manifest Tool version 5.2.3790.2014
Copyright (c) Microsoft Corporation 2005.
All rights reserved.

mt.exe : general error c10100b1: Failed to load file
".\Release\libpq.dll". The system
cannot find the file specified.

What do I need to change to make the build process successful?

I'm looking forward to your reply.
Thanks in advance.

Nikolay Shevchenko.

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[COMMITTERS] pgbouncer - pgbouncer: asynctest: remove note about splice() experiments

Log Message:
-----------
asynctest: remove note about splice() experiments

Modified Files:
--------------
pgbouncer/test:
asynctest.c (r1.12 -> r1.13)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgbouncer/pgbouncer/test/asynctest.c.diff?r1=1.12&r2=1.13)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

Re: [PERFORM] pg_dump error - out of memory, Failed on request of size 536870912

Hello,

I forgot to add - all those 'out of memory' errors happen when the backup db
is trying to create an index. Every 'CREATE INDEX' operation is followed by
an 'out of memory' error.
Thanks!

Marcin


Marcin Citowicki wrote:
> Hello,
>
> We have a bit of a problem with the daily db backup. At the moment the
> backup db size is half of the main db, and we have the following
> errors in the log file:
>
> ERROR: out of memory
> DETAIL: Failed on request of size 536870912
>
> (There's a bunch of those.)
>
> There were some 'permission denied for language c' errors earlier in
> the log, but then the backup continued for a while.
>
> This is the command we're running:
>
> pg_dump -c -h $DB_HOST -U $DB_USER $DB_NAME | PGOPTIONS='-c
> maintenance_work_mem=1500MB -c sort_mem=64MB'
> psql --quiet -U $DB_DUMP_USER -h $DB_DUMP_HOST $DB_DUMP_NAME;
>
> Both databases (main and backup) are Postgres 8.3.
>
> I'd appreciate any suggestions on how to fix this problem. Also, if
> you have a better idea on running a daily backup please let me know.
> Thanks!
>
> Marcin

Re: [HACKERS] Status of DISTINCT-by-hashing work

2008/8/6 Tom Lane <tgl@sss.pgh.pa.us>:
> Gregory Stark <stark@enterprisedb.com> writes:
>> "Tom Lane" <tgl@sss.pgh.pa.us> writes:
>>> There are still two places in the system that hard-wire the use of
>>> sorting for duplicate elimination:
>>>
>>> * Set operations (UNION/INTERSECT/EXCEPT)
>
>> Egads. Are you thinking to reimplement them more in line with the way other
>> nodes work? Or just have them choose between hashing and sorting themselves?
>
> Well, actually, after looking closer I'm realizing that it's harder than
> I thought. I had been thinking that we could just have the planner
> choose whether to generate grouping instead of sorting nodes, but that
> only works for plain UNION. For INTERSECT/EXCEPT (with or without ALL),
> you really need to maintain counters in each hashtable entry so you know
> how many matching rows you got from each side of the set operation.
> So it'd be necessary to either duplicate a large chunk of nodeAgg.c, or
> make that code handle hashed INTERSECT/EXCEPT along with all its
> existing duties. Neither of which seems particularly appealing :-(.
> I'm going to look at whether nodeAgg can be refactored to avoid this,
> but I'm feeling a bit discouraged about it at the moment.

While working on window functions, I also found that nodeWindow.c
duplicates much of nodeAgg.c, which handles not only aggregates but
also reading ahead until the next group.

Additionally, the frame concept (not implemented yet, but planned) that
slides aggregates within a partition will require multiple saved
positions in the tuplestore. So far tuplestore can
mark/restore a position, but only one at a time, which means that when you
mark a position the previous one cannot be restored anymore. The window
frame will need to mark multiple times and to restore older positions.

>> Any idea what would the needed executor infrastructure look like? Would it
>> have anything in common with the OLAP window functions infrastructure?
>
> Possibly; I haven't paid much attention to the OLAP work yet.
>
> regards, tom lane

In my nodeWindow.c patch, some functions reach into their parent state
node to get sort key info by using fcinfo->context. This works but
is completely ugly. At a minimum, window functions need the ability to
reach the key, or the whole tuple, of the current row or of an offset
(preceding/following) row. If another feature like DISTINCT needs
something similar, I am encouraged to give more opinions.

Regards,

--
Hitoshi Harada

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] plan invalidation vs stored procedures

2008/8/6 Hannu Krosing <hannu@krosing.net>:
> On Tue, 2008-08-05 at 16:17 +0200, Pavel Stehule wrote:
>> 2008/8/5 Asko Oja <ascoja@gmail.com>:
>> > postgres=# create or replace function pavel ( i_param text, status OUT int,
>> > status_text OUT text ) returns record as $$ select 200::int, 'ok'::text; $$
>> > language sql;
>> > CREATE FUNCTION
>> > postgres=# create or replace function pavel ( i_param text, status OUT int,
>> > status_text OUT text, more_text OUT text ) returns record as $$ select
>> > 200::int, 'ok'::text, 'tom'::text; $$ language sql;
>> > ERROR: cannot change return type of existing function
>> > DETAIL: Row type defined by OUT parameters is different.
>> > HINT: Use DROP FUNCTION first.
>> >
>> > On Tue, Aug 5, 2008 at 5:04 PM, Asko Oja <ascoja@gmail.com> wrote:
>> >>
>> >> > This is simply a bad, wrong, stupid way to do it. Why do you not use
>> >> > CREATE OR REPLACE FUNCTION?
>> >> I totally agree we should get this fixed first :)
>> >>
>> >> postgres=# create or replace function pavel ( i_param text, status OUT
>> >> int, status_text OUT text ) returns record as $$ select 200::int,
>> >> 'ok'::text; $$ language sql;
>> >> ERROR: cannot change return type of existing function
>> >> HINT: Use DROP FUNCTION first.
>> >>
>>
>> you cannot change header of function. It's same as change C header of
>> function without complete recompilation.
>
> SQL is not C.
>
> You don't have to recompile the whole SQL database when you add columns
> to tables, so why should you need to do it, when you add a column to
> table-returning function ?
>

I think it's possible - but it needs a completely new dictionary
of dependencies (some dependencies are dynamic - polymorphic
functions), so it's a difficult task.

Pavel

> --------------
> Hannu
>
>
>

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [pgsql-de-allgemein] Re: [pgsql-de-allgemein] Re: [pgsql-de-allgemein] RE: [pgsql-de-allgemein] Checking in a function whether a row exists

--On Wednesday, August 06, 2008 11:10:15 +0200 Markus Wanner
<markus@bluegap.ch> wrote:

> Hi,
>
> Bernd Helmle wrote:
>> In PostgreSQL there should really be no difference between the two at
>> the database level, speed-wise.
>
> I see that differently: while with SELECT FOR UPDATE you lock a tuple
> even when, in the great majority of cases, that would not be necessary
> (pessimistic locking: to be sure nothing goes wrong, you would rather
> lock too much than too little), the SERIALIZABLE ISOLATION LEVEL detects
> conflicts even without these locks (optimistic locking: only the
> absolutely necessary locks are taken).
>
> So as long as only few concurrency conflicts are to be expected - which
> should be true for the vast majority of applications - SERIALIZABLE mode
> performs better.

Row locks in PostgreSQL are implemented via the XID and stored directly in
the tuple header. All of this is very lightweight, and it works the same way
in SERIALIZABLE mode, except that at that moment it doesn't block but throws
a transaction error if XMAX has already been modified elsewhere. The
differences here mainly concern the way snapshots are created; PostgreSQL is
implemented very intelligently here, but the actual performance differences
between READ COMMITTED and SERIALIZABLE arise from the way the application
has to deal with each of them. Purely from the database engine's point of
view, SERIALIZABLE should even be faster, since it has to create fewer
snapshots!

Incidentally, I absolutely agree with you, and exceptions are nothing bad.


--
Thanks

Bernd

--
Sent via pgsql-de-allgemein mailing list (pgsql-de-allgemein@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-de-allgemein

[PERFORM] pg_dump error - out of memory, Failed on request of size 536870912

begin:vcard
fn:Marcin Citowicki
n:Citowicki;Marcin
email;internet:marcin.citowicki@m4n.nl
x-mozilla-html:FALSE
version:2.1
end:vcard

Hello,

We have a bit of a problem with the daily db backup. At the moment the
backup db size is half of the main db, and we have the following errors
in the log file:

ERROR: out of memory
DETAIL: Failed on request of size 536870912

(There's a bunch of those.)

There were some 'permission denied for language c' errors earlier in the
log, but then the backup continued for a while.

This is the command we're running:

pg_dump -c -h $DB_HOST -U $DB_USER $DB_NAME | PGOPTIONS='-c
maintenance_work_mem=1500MB -c sort_mem=64MB'
psql --quiet -U $DB_DUMP_USER -h $DB_DUMP_HOST $DB_DUMP_NAME;

Both databases (main and backup) are Postgres 8.3.

I'd appreciate any suggestions on how to fix this problem. Also, if you
have a better idea on running a daily backup please let me know.
Thanks!

Marcin

Re: [pgsql-de-allgemein] RE: [pgsql-de-allgemein] Re: [pgsql-de-allgemein] RE: [pgsql-de-allgemein] Checking in a function whether a row exists

--On Wednesday, August 06, 2008 11:07:52 +0200 Albe Laurenz
<laurenz.albe@wien.gv.at> wrote:


>
> Now now, that is freely made up.
>
> The docs say:
> A block containing an EXCEPTION clause is significantly more expensive
> to enter and exit than a block without one.
> Therefore, don't use EXCEPTION without need.
>
> I can't conclude from that that there is a memory leak in the
> database ...

Nobody claimed that ;)


--
Thanks

Bernd

--
Sent via pgsql-de-allgemein mailing list (pgsql-de-allgemein@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-de-allgemein

Re: [HACKERS] plan invalidation vs stored procedures

On Tue, 2008-08-05 at 16:17 +0200, Pavel Stehule wrote:
> 2008/8/5 Asko Oja <ascoja@gmail.com>:
> > postgres=# create or replace function pavel ( i_param text, status OUT int,
> > status_text OUT text ) returns record as $$ select 200::int, 'ok'::text; $$
> > language sql;
> > CREATE FUNCTION
> > postgres=# create or replace function pavel ( i_param text, status OUT int,
> > status_text OUT text, more_text OUT text ) returns record as $$ select
> > 200::int, 'ok'::text, 'tom'::text; $$ language sql;
> > ERROR: cannot change return type of existing function
> > DETAIL: Row type defined by OUT parameters is different.
> > HINT: Use DROP FUNCTION first.
> >
> > On Tue, Aug 5, 2008 at 5:04 PM, Asko Oja <ascoja@gmail.com> wrote:
> >>
> >> > This is simply a bad, wrong, stupid way to do it. Why do you not use
> >> > CREATE OR REPLACE FUNCTION?
> >> I totally agree we should get this fixed first :)
> >>
> >> postgres=# create or replace function pavel ( i_param text, status OUT
> >> int, status_text OUT text ) returns record as $$ select 200::int,
> >> 'ok'::text; $$ language sql;
> >> ERROR: cannot change return type of existing function
> >> HINT: Use DROP FUNCTION first.
> >>
>
> you cannot change header of function. It's same as change C header of
> function without complete recompilation.

SQL is not C.

You don't have to recompile the whole SQL database when you add columns
to tables, so why should you need to do it, when you add a column to
table-returning function ?

--------------
Hannu

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] plan invalidation vs stored procedures

On Tue, 2008-08-05 at 16:16 +0200, Pavel Stehule wrote:
> 2008/8/5 Martin Pihlak <martin.pihlak@gmail.com>:
> >>> DROP FUNCTION
> >>> create function foo() returns integer as $$ begin return 2; end; $$ language plpgsql;
> >>> CREATE FUNCTION
> >>> execute c1;
> >>> psql:test.sql:11: ERROR: cache lookup failed for function 36555
> >>
> >> This is simply a bad, wrong, stupid way to do it. Why do you not use
> >> CREATE OR REPLACE FUNCTION?
> >>
> >
> > Well, the test case was an illustration. The actual reason for DROP and CREATE is
> > the inability to change function return type. In our case there are plpgsql OUT
> > parameters involved, and there is no other way to add additional OUT parameters
> > without dropping the function first. I'd be glad if this was fixed, but I still
> > think that proper plan invalidation for function changes is needed (inlined
> > functions, ALTER FUNCTION stuff etc.)
>
> It isn't possible. Probably something is wrong in your database design.

Yup. It is called evolving a design.

Them stupid people did not design all their possible future uses of
functions properly on the first try.

I'm sure that it is possible to work around PostgreSQL's inability to
properly invalidate plans by treating SQL as C++, where every change
needs a complete recompile & restart, but that imposes an unnecessary
cost in downtime.
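
For anyone reproducing this, the failing sequence from the quoted test
case boils down to something like the following sketch (the exact PREPARE
statement is an assumption, since only "execute c1" is quoted above):

CREATE FUNCTION foo() RETURNS integer AS
    $$ BEGIN RETURN 1; END; $$ LANGUAGE plpgsql;
PREPARE c1 AS SELECT foo();   -- assumed shape of the prepared statement
EXECUTE c1;                   -- works; the cached plan references foo() by OID

DROP FUNCTION foo();
CREATE FUNCTION foo() RETURNS integer AS
    $$ BEGIN RETURN 2; END; $$ LANGUAGE plpgsql;
EXECUTE c1;                   -- ERROR: cache lookup failed for function ...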

-------------
Hannu

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[pgsql-de-allgemein] Re: [pgsql-de-allgemein] Re: [pgsql-de-allgemein] RE: [pgsql-de-allgemein] Check in a function whether a row exists

Hi,

Bernd Helmle wrote:
> In PostgreSQL there should really be no difference between the two at
> the database level, speed-wise.

I see that differently: while with SELECT FOR UPDATE you lock a tuple
even though in the vast majority of cases that would not be necessary
(pessimistic locking: to make sure nothing goes wrong, you would rather
lock too much than too little), the SERIALIZABLE isolation level detects
conflicts even without those locks (optimistic locking: only the locks
that are absolutely necessary are taken).

So as long as only few concurrency conflicts are to be expected - which
should be true for the vast majority of applications - SERIALIZABLE mode
is the better performer.

> I know that SAVEPOINTs "eat" shared memory, but "eat" is relative.

Uh.. I would do this without any SAVEPOINT at all...

> Anyone who does something like that thousands of times in a loop has
> to expect to hit resource limits at some point.

Huh? If you do a complete ROLLBACK of the transaction, the resources for
the SAVEPOINTs are released. Besides: why thousands of times? Do you
really expect that many conflicts?

> Encapsulating this inside the database is much nicer than playing
> around with error-prone retry loops in the application.

Nicer? No idea; in any case it is less solid. The retry loop is, at any
rate, the recommended approach, see [1]. It is also needed at the READ
COMMITTED isolation level, because deadlocks can occur there too.
(Unless you make sure your transactions never get in each other's way
and always take their locks in the same order. Admittedly, for simple
applications that is usually the case.) Not to mention other possible
sources of error.

Unfortunately that is not common practice. Web applications in
particular like to push the retry loop onto the user and trust that, on
the rare occasion an incomprehensible database error blows up in his
face, he will simply try again. Which works, too.

Why retry loops are supposed to be error-prone is beyond me; the loop is
there precisely to catch errors and react to them accordingly. Of course
you should avoid an endless loop, but that goes without saying.
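
Seen from the database side, the pattern is roughly the following sketch
(the test(id, val) table is the one from Laurenz's example elsewhere in
this thread; the two statements in the middle are just placeholders for
whatever the transaction really does):

START TRANSACTION ISOLATION LEVEL SERIALIZABLE;
UPDATE test SET val = 'something' WHERE id = 1;
INSERT INTO test (id, val)
    SELECT 1, 'something'
    WHERE NOT EXISTS (SELECT 1 FROM test WHERE id = 1);
COMMIT;

-- If the transaction fails with a serialization failure (SQLSTATE 40001),
-- the application issues ROLLBACK and runs the whole block again; that
-- re-run is the retry loop.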

Regards

Markus Wanner

[1]: Tom Lane: PostgreSQL Concurrency Issues:
http://www.postgresql.org/files/developer/concurrency.pdf

--
Sent via pgsql-de-allgemein mailing list (pgsql-de-allgemein@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-de-allgemein

[pgsql-de-allgemein] RE: [pgsql-de-allgemein] Re: [pgsql-de-allgemein] RE: [pgsql-de-allgemein] Check in a function whether a row exists

Bernd Helmle wrote:
> > In READ COMMITTED mode the alternative would be to test with
> > SELECT .. FOR UPDATE whether such a tuple already exists. If yes,
> > UPDATE it; if no, do the INSERT. Less performant than SERIALIZABLE
> > (pessimistic locking), but it may save you the retry loop in the app
> > (at least if you also take care not to provoke deadlocks).
>
> In PostgreSQL there should really be no difference between the two at
> the database level, speed-wise. I know that SAVEPOINTs "eat" shared
> memory, but "eat" is relative. Anyone who does something like that
> thousands of times in a loop has to expect to hit resource limits at
> some point. Encapsulating this inside the database is much nicer than
> playing around with error-prone retry loops in the application.

Now now, that is just made up out of thin air.

The documentation says:
A block containing an EXCEPTION clause is significantly more expensive
to enter and exit than a block without one.
Therefore, don't use EXCEPTION without need.

I can't see anything in that about a memory leak in the database ...

It is more a matter of performance.

Best regards,
Laurenz Albe

--
Sent via pgsql-de-allgemein mailing list (pgsql-de-allgemein@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-de-allgemein

[COMMITTERS] pgbouncer - pgbouncer: tag 1.2.2

Log Message:
-----------
tag 1.2.2

Modified Files:
--------------
pgbouncer:
NEWS (r1.27 -> r1.28)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgbouncer/pgbouncer/NEWS.diff?r1=1.27&r2=1.28)
configure.ac (r1.41 -> r1.42)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgbouncer/pgbouncer/configure.ac.diff?r1=1.41&r2=1.42)
pgbouncer/debian:
changelog (r1.15 -> r1.16)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgbouncer/pgbouncer/debian/changelog.diff?r1=1.15&r2=1.16)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

[pgsql-de-allgemein] RE: [pgsql-de-allgemein] Check in a function whether a row exists

Markus Wanner wrote:
> > If you make the transaction in Session A serializable
> > (START TRANSACTION ISOLATION LEVEL SERIALIZABLE in Session A),
> > the result will not make you happy either: Session A then gets an
> > error 40001.
>
> ..which is a SERIALIZATION FAILURE, and is therefore absolutely correct
> at that point. Put a retry loop around the whole affair on the
> application side and you are done (and it performs well).

Yes, that is completely correct.
I just wanted to mention that there are a few more things to consider.

And with a check after the fact one could also make the "UPDATE first"
method crisis-proof: repeat the action if the INSERT yields a primary
key violation. Even if that is perhaps less pretty.
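
Spelled out, that could look roughly like the sketch below (the function
name is made up, the test(id, val) table is the one from the example
further down in the thread, and this variant detects the violation with
an EXCEPTION handler):

CREATE FUNCTION update_first(p_id integer, p_val text) RETURNS void AS $$
BEGIN
    LOOP
        -- try the UPDATE first
        UPDATE test SET val = p_val WHERE id = p_id;
        IF FOUND THEN
            RETURN;
        END IF;
        -- no row yet: try to INSERT it
        BEGIN
            INSERT INTO test (id, val) VALUES (p_id, p_val);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- the row appeared concurrently; loop and repeat the UPDATE
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;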

> In READ COMMITTED mode the alternative would be to test with
> SELECT .. FOR UPDATE whether such a tuple already exists. If yes,
> UPDATE it; if no, do the INSERT. Less performant than SERIALIZABLE
> (pessimistic locking), but it may save you the retry loop in the app
> (at least if you also take care not to provoke deadlocks).

Session A:
    START TRANSACTION ISOLATION LEVEL READ COMMITTED;
    SELECT 1 FROM test WHERE id = 1 FOR UPDATE;
     ?column?
    ----------
    (0 rows)

Session B:
    START TRANSACTION ISOLATION LEVEL READ COMMITTED;
    SELECT 1 FROM test WHERE id = 1 FOR UPDATE;
     ?column?
    ----------
    (0 rows)

Session A:
    INSERT INTO test (id, val) VALUES (1, 'Session a');
    COMMIT;

Session B:
    INSERT INTO test (id, val) VALUES (1, 'Session b');
    ERROR: duplicate key value violates unique constraint "test_pkey"

Best regards,
Laurenz Albe

--
Sent via pgsql-de-allgemein mailing list (pgsql-de-allgemein@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-de-allgemein

Re: [pgsql-de-allgemein] RE: [pgsql-de-allgemein] Check in a function whether a row exists

--On Wednesday, August 06, 2008 09:22:53 +0200 Albe Laurenz
<laurenz.albe@wien.gv.at> wrote:

> If you make the transaction in Session A serializable
> (START TRANSACTION ISOLATION LEVEL SERIALIZABLE in Session A),
> the result will not make you happy either: Session A then gets an
> error 40001.

With SERIALIZABLE, the application must be prepared to repeat the action.

--
Thanks

Bernd

--
Sent via pgsql-de-allgemein mailing list (pgsql-de-allgemein@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-de-allgemein

Re: [pgsql-de-allgemein] Re: [pgsql-de-allgemein] RE: [pgsql-de-allgemein] Check in a function whether a row exists

--On Wednesday, August 06, 2008 10:09:24 +0200 Markus Wanner
<markus@bluegap.ch> wrote:

> In READ COMMITTED mode the alternative would be to test with
> SELECT .. FOR UPDATE whether such a tuple already exists. If yes,
> UPDATE it; if no, do the INSERT. Less performant than SERIALIZABLE
> (pessimistic locking), but it may save you the retry loop in the app
> (at least if you also take care not to provoke deadlocks).

In PostgreSQL there should really be no difference between the two at
the database level, speed-wise. I know that SAVEPOINTs "eat" shared
memory, but "eat" is relative. Anyone who does something like that
thousands of times in a loop has to expect to hit resource limits at
some point. Encapsulating this inside the database is much nicer than
playing around with error-prone retry loops in the application.

--
Thanks

Bernd

--
Sent via pgsql-de-allgemein mailing list (pgsql-de-allgemein@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-de-allgemein

[pgsql-de-allgemein] Re: [pgsql-de-allgemein] RE: [pgsql-de-allgemein] Check in a function whether a row exists

Hello,

Albe Laurenz wrote:
> If you make the transaction in Session A serializable
> (START TRANSACTION ISOLATION LEVEL SERIALIZABLE in Session A),
> the result will not make you happy either: Session A then gets an
> error 40001.

..which is a SERIALIZATION FAILURE, and is therefore absolutely correct
at that point. Put a retry loop around the whole affair on the
application side and you are done (and it performs well).

In READ COMMITTED mode the alternative would be to test with
SELECT .. FOR UPDATE whether such a tuple already exists. If yes,
UPDATE it; if no, do the INSERT. Less performant than SERIALIZABLE
(pessimistic locking), but it may save you the retry loop in the app
(at least if you also take care not to provoke deadlocks).

Regards

Markus Wanner

--
Sent via pgsql-de-allgemein mailing list (pgsql-de-allgemein@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-de-allgemein

Re: [GENERAL] FTS on patterns

Vyacheslav Kalinin wrote:
> On Tue, Aug 5, 2008 at 6:53 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>
>> There's support for prefix text search in CVS HEAD, but not in any
>> released version ...
>>
>> regards, tom lane
>
> Oh, does that mean it won't be released until 8.4?

New features are never introduced in minor releases (8.3.x), only bugfixes.

--
Richard Huxton
Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [GENERAL] Update tsvector trigger

x asasaxax wrote:
> Hi,
>
> I'm trying to write a trigger, called on insert or update, that updates
> the tsvectors for text search. Here's my code:

> CREATE OR REPLACE FUNCTION atualiza_vectors() RETURNS trigger AS $$
> BEGIN
> IF NEW.texto<>NULL THEN
> UPDATE x SET vectors = to_tsvector(lower(to_ascii(NEW.texto)))
> where cod= NEW.cod;
> END IF;
> RETURN NEW;
> END;
> $$ LANGUAGE plpgsql;
> CREATE TRIGGER atualiza_vectors AFTER INSERT OR UPDATE ON x
> FOR EACH ROW EXECUTE PROCEDURE atualiza_vectors();
>
>
> When the trigger is called, Postgres shows the following error: "stack
> depth limit exceeded".

You're generating an UPDATE every time the trigger is called. That will
fire another trigger, which will generate another UPDATE, which will
fire another trigger, which will...

Things to change:
1. Use a BEFORE not an AFTER trigger
2. Just set NEW.vectors := ...
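
Putting both changes together (after dropping the old AFTER trigger), the
corrected version would look roughly like this, keeping the names from the
original post; note that the test also wants IS NOT NULL, since
NEW.texto <> NULL is never true:

CREATE OR REPLACE FUNCTION atualiza_vectors() RETURNS trigger AS $$
BEGIN
    IF NEW.texto IS NOT NULL THEN
        NEW.vectors := to_tsvector(lower(to_ascii(NEW.texto)));
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER atualiza_vectors BEFORE INSERT OR UPDATE ON x
    FOR EACH ROW EXECUTE PROCEDURE atualiza_vectors();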

--
Richard Huxton
Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [pgsql-www] Pug mailing list archive truncated

On Wed, Aug 6, 2008 at 8:40 AM, Magnus Hagander <magnus@hagander.net> wrote:
> Dave Page wrote:
>> On Wed, Aug 6, 2008 at 5:34 AM, Richard Broersma
>> <richard.broersma@gmail.com> wrote:
>>> Since the "Baltimore/Washington DC" hyperlink was added to the left
>>> pane on the web page, all of the archived mail is partially truncated
>>> on the left when viewed (at least in IE7).
>>>
>>> Here is an example.
>>>
>>>
>>> http://archives.postgresql.org/lapug/2008-08/msg00000.php
>>
>> Yeuch - that really does screw up IE. Any objection to Baltimore/DC?
>
> Go for it. Should be obvious enough to anybody from the region (e.g. the
> US), and those are the interested parties after all :-)

True. Changed.


--
Dave Page
EnterpriseDB UK: http://www.enterprisedb.com

--
Sent via pgsql-www mailing list (pgsql-www@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-www

Re: [pgsql-www] Pug mailing list archive truncated

Dave Page wrote:
> On Wed, Aug 6, 2008 at 5:34 AM, Richard Broersma
> <richard.broersma@gmail.com> wrote:
>> Since the "Baltimore/Washington DC" hyperlink was added to the left
>> pane on the web page, all of the archived mail is partially truncated
>> on the left when viewed (at least in IE7).
>>
>> Here is an example.
>>
>>
>> http://archives.postgresql.org/lapug/2008-08/msg00000.php
>
> Yeuch - that really does screw up IE. Any objection to Baltimore/DC?

Go for it. Should be obvious enough to anybody from the region (e.g. the
US), and those are the interested parties after all :-)

//Magnus

--
Sent via pgsql-www mailing list (pgsql-www@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-www

[pgsql-de-allgemein] RE: [pgsql-de-allgemein] Check in a function whether a row exists

Bernd Helmle wrote:
> > Out of an aversion to exceptions, I usually write such things in
> > this form:
> >
> >| UPDATE Table SET Something = 'something' WHERE ID = 'id';
> >| IF NOT FOUND THEN
> >| INSERT INTO Table (ID, Something) VALUES ('id', 'something');
> >| END IF;
>
> ...which, under concurrency and without explicit locking, leads to race
> conditions. What is the problem with exceptions?

They eat performance (according to the documentation), because internally
they are implemented with savepoints.
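
Roughly, one pass through such an EXCEPTION block costs the equivalent of
the explicit savepoints below (a sketch only, using the test table created
further down):

BEGIN;
SAVEPOINT guard;                      -- set on entering the EXCEPTION block
INSERT INTO test (id, val) VALUES (1, 'something');
RELEASE SAVEPOINT guard;              -- block left without an error
-- on a unique_violation the handler would instead run after
--   ROLLBACK TO SAVEPOINT guard;
-- e.g. UPDATE test SET val = 'something' WHERE id = 1;
COMMIT;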

It is true, however, that two statements like the ones above can get in
each other's way.

But even with the "INSERT first" method, race conditions with other
statements cannot easily be avoided:

CREATE TABLE test (id integer PRIMARY KEY, val text);
INSERT INTO test (id, val) VALUES (1, 'Wert');

Session A:
    START TRANSACTION;
    SAVEPOINT a;
    INSERT INTO test (id, val) VALUES (1, 'Session a');
    (ERROR: duplicate key value violates unique constraint "test_pkey")

Session B:
    DELETE FROM test WHERE id = 1;

Session A:
    ROLLBACK TO a;
    UPDATE test SET val = 'Session a' WHERE id = 1;
    (UPDATE 0)
    COMMIT;

If you make the transaction in Session A serializable
(START TRANSACTION ISOLATION LEVEL SERIALIZABLE in Session A),
the result will not make you happy either: Session A then gets an
error 40001.

Best regards,
Laurenz Albe

--
Sent via pgsql-de-allgemein mailing list (pgsql-de-allgemein@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-de-allgemein

Re: [pgsql-www] Pug mailing list archive truncated

On Wed, Aug 6, 2008 at 5:34 AM, Richard Broersma
<richard.broersma@gmail.com> wrote:
> Since the "Baltimore/Washington DC" hyperlink was added to the left
> pane on the web page, all of the archived mail is partially truncated
> on the left when viewed (at least in IE7).
>
> Here is an example.
>
>
> http://archives.postgresql.org/lapug/2008-08/msg00000.php

Yeuch - that really does screw up IE. Any objection to Baltimore/DC?

--
Dave Page
EnterpriseDB UK: http://www.enterprisedb.com

--
Sent via pgsql-www mailing list (pgsql-www@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-www

RE: [pgsql-es-ayuda] [TOTALLY OT] .Net

Hello Gabriel,

I'm not going to get into the controversy served up on this thread. But I
hope this little loop is enough for you (we're here to help):

foreach (Control ctr in this.Controls)
{
    if (ctr is System.Windows.Forms.TextBox) ctr.Visible = false;
}

Just a note: all form controls inherit from the "Control" object and share
many properties, "Visible" among them.

Regards.

>-----Original Message-----
>From: pgsql-es-ayuda-owner@postgresql.org
>[mailto:pgsql-es-ayuda-owner@postgresql.org] On behalf of
>Javier Chávez B.
>Sent: Wednesday, August 06, 2008 1:45
>To: hermeszambra@yahoo.com
>CC: Carolina Roman Salgado; pgsql-es-ayuda@postgresql.org
>Subject: Re: [pgsql-es-ayuda] [TOTALLY OT] .Net
>
>2008/8/5 Gabriel Hermes Colina Zambra <hermeszambra@yahoo.com>:
>>
>>>
>>> Ufff... Look, I did indeed read the mail, and maybe at first glance
>>> it isn't understandable, but excuse me, I've been a participant on
>>> this list for years and it's the first time a mail has been attacked
>>> this way. In fact the policy has always been one of respect and of
>>> helping the rest, not idiocies like this one; if not, take a walk
>>> through the archives for the time the top-posting issue was discussed.
>>> That's why characters like these, who really contribute nothing,
>>> irritate me so much.
>> First, it is not the first time; it took me a while to understand it,
>> but I, as a Microsoft Certified professional, made a lot of mistakes at
>> the beginning and received harsh replies. Out of ignorance I violated
>> the netiquette, something I think you should read before taking part in
>> a forum. The mistake I believe I never made was attacking someone
>> without arguments and insulting people with the arrogance you show.
>>
>>
>>
>>
>>> Well, but I think the rest of the list isn't interested in this kind
>>> of trouble.
>>
>> The truth is the list isn't interested in that, nor in top posting.
>>
>>>
>>> My question is on the .Net side, which is why I put "Totally OT" in my
>>> topic, so it's understood it isn't a PG question; but here we can find
>>> professionals of the most diverse characteristics and profiles.
>>
>> Better ask at www.elguille.info, that would be one of the appropriate
>> places, or in the Microsoft knowledge base; you can go here in Spanish:
>> http://support.microsoft.com/hublist
>>
>> I'm glad to answer you, but you can't complain that on a list devoted
>> to PostgreSQL, related to Free Software, you go looking for free
>> information about a product you should have support for, if you have
>> it legally, since we are against piracy; that's why we opted for Free
>> Software.
>>
>>
>>> Now, my question is the following: for days I've been trying to walk
>>> through a form automatically, that is, pass a form to a function and,
>>> depending on the type of object it contains, access the properties of
>>> a particular object; that's why I pasted my piece of code.
>>>
>>> In other words, simplifying:
>>>
>>> function <something> <form_parameter>
>>>   for cont = 0 to <total_form_objects>
>>>     if <form(index).type> = <text_field>
>>>       ' do something
>>>
>>> Understood??? If not, don't worry, I won't keep creating controversy.
>>>
>>> Thanks in advance to everyone except one.
>>
>> And why don't you create a frame and handle them all through the frame:
>> frame.visible=true or frame.enable=true, frame.visible=false or
>> frame.enable=false.
>>
>> Or whatever option you have.
>>
>> Don't worry about creating controversy; some of us made a lot of noise
>> on certain occasions, but ease up on playing the victim when you
>> yourself admit that you didn't even bother to write the mail so that it
>> would be understandable, and on top of that the mail is 100% off topic.
>> The fact that you claim seniority on the list doesn't make you
>> different or notable; it makes things worse, since you could have
>> learned that the people who come here want to read and answer something
>> at least minimally related to the main topic.
>>
>> And finally, if you made a mistake, don't come back with the arrogance
>> of replying to someone else on the list who, perhaps not very politely,
>> gives you an answer that makes sense.
>>
>> In short, you should read the netiquette before taking part in forums,
>> and not blame the people on the lists, because you don't have the
>> slightest notion of how to behave on them.
>>
>> Regards.
>> Gabriel Colina
>>
>>>
>>> Javier
>>> --
>>> TIP 1: to subscribe and unsubscribe, visit
>>> http://archives.postgresql.org/pgsql-es-ayuda
>>
>>
>>
>>
>>
>
>Please close this thread; everything is clear to me now. I actually
>apologize, I made a mistake, and let's not discuss the matter any
>further. It's just that this kind of problem makes me accept many of the
>criticisms levelled at free software (little patience and elitism, yes,
>believe it or not).
>
>In any case, I wasn't the one who started with insults and put-downs,
>and I'm not playing the victim, far from it, Mr. Colina. I did indeed
>acknowledge that at first glance its wording may not have been the best,
>but that was no reason to be treated like an idiot (that really is
>arrogance, when I have answered questions far more idiotic than the one
>I asked).
>
>As for the sites you recommend: idiot and newbie that I am, believe it
>or not I do know them, but I didn't find my simple question there. In
>any case, thank you for the time and effort you put into your mail.
>
>See you later.
>
>Javier Chavez
>
>
>
>--
>----------------------
>Regards.
>jchavez
>linux User #397972 on http://counter.li.org/
>--
>TIP 6: Have you searched our mailing list archives?
> http://archives.postgresql.org/pgsql-es-ayuda
>

--
TIP 1: to subscribe and unsubscribe, visit http://archives.postgresql.org/pgsql-es-ayuda