Thursday, September 4, 2008

Re: [HACKERS] Extending grant insert on tables to sequences

On Wed, Sep 3, 2008 at 7:03 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>
> In short, this patch isn't much more ready to commit than it was
> in the last fest.
>

Just for the record, I put this updated patch up only because there was
an entry for "Extending grant insert on tables to sequences" in this
Commit Fest without an updated patch attached.

--
regards,
Jaime Casanova
PostgreSQL support and training
Systems consulting and development
Guayaquil - Ecuador
Cell: (593) 87171157

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] StartupCLOG

On Thu, 2008-09-04 at 12:18 -0400, Tom Lane wrote:
> Simon Riggs <simon@2ndQuadrant.com> writes:
> > I was thinking about what happens when you are performing a PITR using
> > log records that contain a crash/recovery/shutdown checkpoint sequence.
>
> > I take it there's no problem there?
>
> I don't really see one.

OK, cool. I'm just trying to shake out all the possible problems, so
sorry if this one was a false positive.

--
Simon Riggs www.2ndQuadrant.com
PostgreSQL Training, Services and Support


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [PERFORM] limit clause breaks query planner?

On Thu, 4 Sep 2008, Guillaume Cottenceau wrote:
> It seems to me that if the correlation is 0.99, and you're
> looking for less than 1% of rows, the expected rows may be at the
> beginning or at the end of the heap?

Not necessarily. Imagine for example that you have a table with 1M rows,
and one of the fields has unique values from 1 to 1M, and the rows are
ordered in the table by that field. So the correlation would be 1. If you
were to SELECT from the table WHERE the field = 500000 LIMIT 1, then the
database should be able to work out that the matching row will be right in
the middle of the table, not at the beginning or end. It should set the
startup cost of a sequential scan to the amount of time required to
sequentially scan half of the table.
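For concreteness, a minimal sketch of that scenario (table and column
names are made up; the planner does not actually exploit the correlation
this way today):

    CREATE TABLE t AS
        SELECT g AS id, md5(g::text) AS payload
        FROM generate_series(1, 1000000) g;   -- rows stored in id order
    ANALYZE t;                                -- pg_stats.correlation for id is ~1
    EXPLAIN SELECT * FROM t WHERE id = 500000 LIMIT 1;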

Of course, this does bring up a point - if the matching rows are
concentrated at the end of the table, the database could perform a
sequential scan backwards, or even a scan from the middle of the table
onwards.

This improvement of course only actually helps if the query has a LIMIT
clause, and presumably would muck up simultaneous sequential scans.

Matthew

--
Picard: I was just paid a visit from Q.
Riker: Q! Any idea what he's up to?
Picard: No. He said he wanted to be "nice" to me.
Riker: I'll alert the crew.

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

Re: [HACKERS] StartupCLOG

Simon Riggs <simon@2ndQuadrant.com> writes:
> I was thinking about what happens when you are performing a PITR using
> log records that contain a crash/recovery/shutdown checkpoint sequence.

> I take it there's no problem there?

I don't really see one. I believe the reason for the StartupCLOG action
is just to make sure that clog doesn't claim that any transactions are
committed that weren't committed according to the WAL, or more precisely
by the portion of WAL we chose to read. Consider PITR stopping short of
the actual WAL end: it would clearly be possible that the current page
of clog says that some "future" transactions are committed, but in our
new database history we don't want them to be so. I think that the code
is also trying to guard against a similar situation in a crash where WAL
has been damaged and can't be read all the way to the end.

Since the PITR slave isn't going to make any changes to clog in the
first place that it isn't told to by WAL, it's hard to see how any
divergence would arise. It could diverge when the slave stops slaving
and goes live, but at that point it's going to do StartupCLOG itself.

regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [pgsql-es-ayuda] default current_time

2008/9/3 José Fermín Francisco Ferreras <josefermin54@hotmail.com>:
>
> Where is the file generated by log_destination=evenlog located, so that I can tell you?
>

log_destination=eventlog
(you are missing a t)

look for the log output in Control Panel -> Administrative Tools ->
Event Viewer

> By the way, would you use double or single quotes: timezone = 'gmt+4' or timezone = "gmt+4"??
>

I have it like this:
timezone = 'GMT+5'

--
Regards,
Jaime Casanova
PostgreSQL support and training
Systems consulting and development
Guayaquil - Ecuador
Cell: (593) 87171157
--
TIP 4: Don't 'kill -9' the postmaster

Re: [PERFORM] limit clause breaks query planner?

Matthew Wakeling <matthew 'at' flymine.org> writes:

> On Thu, 4 Sep 2008, Tom Lane wrote:
>> Ultimately the only way that we could get the right answer would be if
>> the planner realized that the required rows are concentrated at the end
>> of the table instead of being randomly scattered. This isn't something
>> that is considered at all right now in seqscan cost estimates. I'm not
>> sure offhand whether the existing correlation stats would be of use for
>> it, or whether we'd have to get ANALYZE to gather additional data.
>
> Using the correlation would help, I think, although it may not be the
> best solution possible. At least, if the correlation is zero, you
> could behave as currently, and if the correlation is 1, then you know
> (from the histogram) where in the table the values are.

It seems to me that if the correlation is 0.99[1], and you're
looking for less than 1% of rows, the expected rows may be at the
beginning or at the end of the heap?

Ref:
[1] or even 1, as ANALYZE doesn't sample all the rows?

--
Guillaume Cottenceau, MNC Mobile News Channel SA, an Alcatel-Lucent Company
Av. de la Gare 10, 1003 Lausanne, Switzerland - direct +41 21 317 50 36

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

[ADMIN] Help!

 Hi! I have bought a WS 444 PC Weather Station.
In order to use the device, a software called WeatherProfessional was
included in the package.
That software uses the PostgreSQL service. I also got PostgreSQL 8.0
with the package. My problem is that I cannot get it to work properly.
Firstly, if I'm not mistaken, the 8.0 version doesn't run on Windows XP.
Therefore I tried to download the 8.3.3 version from your site. I did not
have any problems installing that, but when I then tried to run
WeatherProfessional, it said I had the wrong username or password. There is
no such information to be found anywhere in the package I bought; it simply
said that the installation would pretty much take care of itself, and that no
such information was needed for the install.
What will I have to do in order to get this working?
 
Thank you!
Christian Larsen
larsen7557@hotmail.com




Re: [GENERAL] Changes for version 8.4

On Thu, 2008-09-04 at 10:45 -0400, Alvaro Herrera wrote:
> Joao Ferreira gmail escribió:
> > Is there a date for the release of 8.4 ?
>
> http://wiki.postgresql.org/wiki/PostgreSQL_8.4_Development_Plan

/me notes that no one responded with "It will be released when it is
ready".

--
Devrim GÜNDÜZ, RHCE
devrim~gunduz.org, devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr
http://www.gunduz.org

Re: [GENERAL] You need to rebuild PostgreSQL using --with-libxml.

Hi,

On Thu, 2008-09-04 at 10:18 -0430, Ricardo Antonio Yepez Jimenez wrote:
> Good morning, I need to know the steps to recompile with XML support,
> on Red Hat 4 Enterprise and Postgres 8.3.

You cannot compile PostgreSQL 8.3 with XML support on RHEL 4 unless you
install libxml2 from source. RHEL 4 ships libxml2 version 2.6.16, but
PostgreSQL requires at least 2.6.23.

Regards,
--
Devrim GÜNDÜZ, RHCE
devrim~gunduz.org, devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr
http://www.gunduz.org

Re: [HACKERS] Debugging methods

M2Y <mailtoyahoo@gmail.com> writes:
> I am a beginner to Postgres and I am going through the code. I would like
> to know the debugging methods used in development.

> Some of my requirements are: for a given query, how parse structures
> are created in pg_parse_query, how they are analyzed and rewritten in
> pg_analyze_and_rewrite and how the final plan is created in
> pg_plan_queries.

What I tend to do when trying to debug those areas is to set breakpoints
at interesting places with gdb, and then use commands like
"call pprint(node_pointer)" to dump the contents of specific parse or
plan trees to the postmaster log. The reason that outfuncs.c supports
so many node types (many that can't ever appear in stored rules) is
exactly to make it useful for examining internal data structures this
way.

Another possibility is to turn on debug_print_plan and so on, but those
settings only show you the finished results of parsing or planning,
which isn't real helpful for understanding how the code gets from point
A to point B.
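For reference, turning those on for a single session looks roughly like
this (the GUC names are real; the rest is just a sketch):

    SET client_min_messages = log;     -- optionally echo LOG output to this session
    SET debug_print_parse = on;
    SET debug_print_rewritten = on;
    SET debug_print_plan = on;
    SELECT 2 + 2;                      -- the trees are printed to the server log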

regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [PERFORM] Partitions number limitation ?

s.caillet@free.fr wrote:
> Is there some kind of limit in postgresql on the number of partitions? Do
> you know of any tuning in the conf files to improve postgresql's management
> of so many tables? I have already used different tablespaces, one for each
> main table and its 288 partitions.

Postgres is not really optimized for large numbers of partitions, so you
have to manage that yourself. I am working on a project with a similar
design and found that the super table has its limitations. At some point
the db just aborts a query if there are too many partitions. I seem to
remember I have worked with up to 100K partitions, but I managed them
individually instead of through the super table.

Just a tip: if the table gets data inserted once and is then mainly read
after that, it's faster to create the index for the partition after the
insert.
Another tip: use COPY to insert data instead of INSERT; it's about 3-5
times faster, and it is supported by the C driver and by a patched JDBC driver.
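A rough sketch of both tips together (the partition, path and column names
here are made up):

    -- load the partition in one go, then build its index
    COPY measurements_2008_09 FROM '/path/to/batch.csv' WITH CSV;
    CREATE INDEX measurements_2008_09_ts_idx
        ON measurements_2008_09 (recorded_at);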

regards

tom

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

Re: [HACKERS] StartupCLOG

On Thu, 2008-09-04 at 11:12 -0400, Tom Lane wrote:
> Simon Riggs <simon@2ndQuadrant.com> writes:
> > I notice that StartupCLOG zeroes out entries later than the nextxid when
> > we complete recovery in StartupXLOG, reason given is safety in case we
> > crash.
>
> > ISTM that we should also do that whenever we see a Shutdown Checkpoint
> > in WAL, since that can be caused by a shutdown immediate, shutdown abort
> > or crash.
>
> Er, what? The definition of a crash is the *lack* of a shutdown
> checkpoint.

Yes, but that's not what I'm saying.

I was thinking about what happens when you are performing a PITR using
log records that contain a crash/recovery/shutdown checkpoint sequence.

I take it there's no problem there?

--
Simon Riggs www.2ndQuadrant.com
PostgreSQL Training, Services and Support


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] [PATCH] Cleanup of GUC units code

On Thu, Sep 04, 2008 at 07:01:18AM -0700, Steve Atkins wrote:
> Settings in postgresql.conf are currently case-insensitive. Except
> for the units.

And, of course, filenames when you are using a case-sensitive
filesystem. Because these are things that are defined by some
convention other than the ones the PGDG made up. Since units fall
into that category, it seems to me that we're stuck with using
external conventions.

> one right now. If the answer to that is something along the lines
> of we don't support megabits for shared_buffers, and never will because
> nobody in their right mind would ever intend to use megabits
> to set their shared buffer size... that's a useful datapoint when
> it comes to designing for usability.

And you are going to establish this worldwide convention on what
someone in their right mind would do how, exactly? For instance, I think
nobody in their right mind would use "KB" to mean "kilobytes". I suppose
you could get a random sample of all current Postgres users to decide
what makes sense, but then you'd have the problem of knowing whether
you had a random sample, since the population isn't obviously
identifiable. Or, we could just stick with the convention that we
already have, and write a tool that captures this and other issues.
Maybe even one that could later form the basis for an automatic tuning
advisor, as well.

The problem with appeals to common sense always turns out to be that
different people's common sense leads them to different conclusions.
(We had a devastating government in Ontario some years ago that claimed
to be doing things that were just common sense; the Province is still
cleaning up the mess.)

A

--
Andrew Sullivan
ajs@commandprompt.com
+1 503 667 4564 x104
http://www.commandprompt.com/

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [BUGS] BUG #4398: Backslashes get escaped despite of backslash_quote=off

"Rainer" <rainer@hamburg.ccc.de> writes:
> Description: Backslashes get escaped despite of backslash_quote=off

Aren't you looking for standard_conforming_strings? backslash_quote is
something else entirely, and doesn't actually do anything at all when
backslash escaping is disabled.

> Two questions:
> 1. What I actually want: Shouldn't the second statement work by
> documentation without the escape flag?

No. standard_conforming_strings has nothing to do with the behavior of
LIKE (nor does backslash_quote). They just control the initial parsing
of SQL string literals.
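For illustration, here is what that initial parsing controls, as seen from
psql (a sketch; escape_string_warning may also complain on the first SELECT):

    SET standard_conforming_strings = off;
    SELECT 'a\\b';   -- backslash acts as an escape: the value is a\b
    SET standard_conforming_strings = on;
    SELECT 'a\\b';   -- backslash is an ordinary character: the value is a\\b

LIKE, on the other hand, treats backslash as its default escape character in
either case.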

> 2. What I do not understand: Why does the fourth statement return a result
> as backslash_quote is off?

It looks like a perfectly good match to me.

regards, tom lane

--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs

Re: [PERFORM] limit clause breaks query planner?

On Thu, 4 Sep 2008, Tom Lane wrote:
> Ultimately the only way that we could get the right answer would be if
> the planner realized that the required rows are concentrated at the end
> of the table instead of being randomly scattered. This isn't something
> that is considered at all right now in seqscan cost estimates. I'm not
> sure offhand whether the existing correlation stats would be of use for
> it, or whether we'd have to get ANALYZE to gather additional data.

Using the correlation would help, I think, although it may not be the best
solution possible. At least, if the correlation is zero, you could behave
as currently, and if the correlation is 1, then you know (from the
histogram) where in the table the values are.

Matthew

--
X's book explains this very well, but, poor bloke, he did the Cambridge Maths
Tripos... -- Computer Science Lecturer

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

Re: [BUGS] BUG #4389: FATAL: could not reattach to shared memory(key=1804, addr=018E0000): 487

Hi!

The reason was a corrupted system library, and "sfc /scannow" fixed it for me.
I hope you will remember this solution and suggest it to other people.

Thanks.

--- Original message ---
From: Zdenek Kotala <Zdenek.Kotala@Sun.COM>
To: diesel_den@ukr.net
Date: September 1, 19:55:38
Subject: Re: [BUGS] BUG #4389: FATAL: could not reattach to shared memory(key=1804, addr=018E0000): 487

could not reattach to shared memory wrote:
> The following bug has been logged online:
>
> Bug reference: 4389
> Logged by: could not reattach to shared memory
> Email address: diesel_den@ukr.net
> PostgreSQL version: 8.3.3-1
> Operating system: any 8.3.*
> Description: FATAL: could not reattach to shared memory (key=1804,
> addr=018E0000): 487
> Details:
>
> This error came a week ago.
> Since that 'black' day I have not been able to use Postgres.
> I have reinstalled several 8.3.* versions (including the latest version with
> vcredist_x86.exe) and nothing has helped.
>

try removing the postmaster.pid file in the data directory.

Zdenek





Re: [BUGS] BUG #4397: crash in tab-complete.c

Rudolf Leitgeb <r.leitgeb@x-pin.com> writes:
> Yes, libedit is used. On Mac OSX libreadline is a soft link
> to libedit, so that's what's used regardless of configure settings.

Actually, given that you got compile warnings, the thing to focus on is
probably what readline #include files were used. I'm still suspicious
of a local readline installation messing things up --- is there anything
in /usr/local/include?

What were those warnings, anyway?

regards, tom lane

--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs

Re: [PERFORM] limit clause breaks query planner?

"Matt Smiley" <mss@rentrak.com> writes:
> "Tom Lane" <tgl@sss.pgh.pa.us> writes:
>> default cost settings will cause it to prefer bitmap scan for retrieving
>> up to about a third of the table, in my experience). I too am confused
>> about why it doesn't prefer that choice in the OP's example.

> It looks like the bitmap scan has a higher cost estimate because the
> entire bitmap index must be built before beginning the heap scan and
> returning rows up the pipeline.

Oh, of course. The LIMIT is small enough to make it look like we can
get the required rows after scanning only a small part of the table,
so the bitmap scan will lose out in the cost comparison because of its
high startup cost.

Ultimately the only way that we could get the right answer would be if
the planner realized that the required rows are concentrated at the end
of the table instead of being randomly scattered. This isn't something
that is considered at all right now in seqscan cost estimates. I'm not
sure offhand whether the existing correlation stats would be of use for
it, or whether we'd have to get ANALYZE to gather additional data.
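To see that comparison on one's own data, something like the following
works (table, column and constants here are only placeholders):

    EXPLAIN SELECT * FROM t WHERE flag = 10 LIMIT 15;   -- planner's preferred plan
    SET enable_seqscan = off;                           -- push it towards the bitmap plan
    EXPLAIN SELECT * FROM t WHERE flag = 10 LIMIT 15;

Comparing the estimated startup costs of the two top plan nodes shows why
the seqscan wins under a small LIMIT.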

regards, tom lane

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

Re: [ADMIN] Database Conversion

>
> Hello, All,
>
> I have a new faculty member who has a large database that is
> in MySQL. We don't support MySQL so the database needs to be
> ported to PostgreSQL. Her GA, who knows MySQL, says that he
> has a query that he will run that will put the data into
> postgres. I thought that the data would have to be output to
> a text file and then copied into postgres. I don't know
> MySQL. I've done a conversion from Oracle and this is how I
> did it. Is he correct that he can put the data into a
> postgres database by running a MySQL query? It doesn't sound
> possible to me.
>
> Carol
>

You could possibly do it in a single operation using MS Access if you
have an ODBC connection to each database. If however the dataset is
large, I wouldn't recommend it. I have a number of MySQL and PostgreSQL
dbs and I either dump sql and then import or use PHP scripts when moving
between the two.

Nick

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin

Re: [pgeu-general] LinuxLive UK

On Thu, 2008-09-04 at 15:45 +0100, Dave Page wrote:

> I've had no volunteers to help out at this show, so unless I get at
> least three firm commitments by Friday I'll be forced to cancel our
> table :-(

Is this the same show you asked about in June and got lots of yesses?

Me? Still yes.

--
Simon Riggs www.2ndQuadrant.com
PostgreSQL Training, Services and Support


--
Sent via pgeu-general mailing list (pgeu-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgeu-general

Re: [HACKERS] StartupCLOG

Simon Riggs <simon@2ndQuadrant.com> writes:
> I notice that StartupCLOG zeroes out entries later than the nextxid when
> we complete recovery in StartupXLOG, reason given is safety in case we
> crash.

> ISTM that we should also do that whenever we see a Shutdown Checkpoint
> in WAL, since that can be caused by a shutdown immediate, shutdown abort
> or crash.

Er, what? The definition of a crash is the *lack* of a shutdown
checkpoint.

regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [pgsql-www] wiki.postgresql.org is awfully slow this evening

-----BEGIN PGP SIGNED MESSAGE-----
Hash: RIPEMD160


Dave Page wrote:

> No we haven't - not even remotely. At a rough count we'd need at least
> another 17 servers (based on the current number of VMs), and we'd lose
> the ability to move services between hardware quickly and easily. Oh,
> and we'd have the management headache of dealing with a bunch more
> hosting providers, as I doubt the current ones will give us that many
> boxes.

Well, maybe we don't need to replace all 17, just some of the more
active ones.

Joshua points out:

> That being said :) I think it's a mistake. It would be a complete waste
> of resources to go to dedicated machines. Some of the machines we have
> we hardly use at this point.

Fair enough, I withdraw the dedicated box request. Can we perhaps separate
the wiki then, so we don't have a repeat of yesterday? Maybe put wiki
or git onto one of the more lightly loaded physical boxes?

- --
Greg Sabino Mullane greg@turnstep.com
PGP Key: 0x14964AC8 200809041059
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8
-----BEGIN PGP SIGNATURE-----

iEYEAREDAAYFAki/+DoACgkQvJuQZxSWSsj22QCfZlZQ4XoxYxo+UOxXAjNeKdio
UecAoOPRoRN8EPKhgTGRINzjnAxXOJNd
=MNYh
-----END PGP SIGNATURE-----

--
Sent via pgsql-www mailing list (pgsql-www@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-www

Re: [ADMIN] Database Conversion

Well, my database server lives on a Solaris 10 box. I'm running
PostgreSQL 8.2.3. The database that is being converted from MySQL is
currently on a Windows machine. So far it seems that every solution
involves an interim step or two. I think he was alluding to just
running a query.

Carol
On Sep 4, 2008, at 10:33 AM, Ben Kim wrote:

>
>> I have a new faculty member who has a large database that is in
>> MySQL. We don't support MySQL so the database needs to be ported to
>> PostgreSQL. Her GA, who knows MySQL, says that he has a query that he
>> will run that will put the data into postgres. I thought that the
>> data would have to be output to a text file and then copied into
>> postgres. I don't know MySQL. I've done a conversion from Oracle
>> and this is how I did it. Is he correct that he can put the data
>> into a postgres database by running a MySQL query? It doesn't sound
>> possible to me.
>
> I don't think mysql has anything that exports data into postgresql.
> Unless he is talking about the likes of DTS/SSIS or perl DBI, or
> other tools. Or the tables are simple and he thinks he can
> ingeniously craft queries and run them through pipes eventually to
> psql. DDL will be more difficult.
>
>
> Regards,
> Ben
>
> --
> Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-admin


--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin

[GENERAL] You need to rebuild PostgreSQL using --with-libxml.

Good morning, I need to know the steps to recompile with XML support, on Red Hat 4 Enterprise and Postgres 8.3.

 

Thanks,



Re: [GENERAL] Changes for version 8.4

Joao Ferreira gmail escribió:
> Is there a date for the release of 8.4 ?

http://wiki.postgresql.org/wiki/PostgreSQL_8.4_Development_Plan

--
Alvaro Herrera http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [pgeu-general] LinuxLive UK

On Thu, Aug 28, 2008 at 10:13 AM, Dave Page <dpage@pgadmin.org> wrote:
> I've mentioned previously that we have a table in the .ORG village at
> LinuxLive, Olympia, London on the 23 - 25 October.
>
> http://www.linuxexpo.org.uk/
>
> It's about time that we got organised and figured out who will be
> available to attend, and when. I can volunteer Greg and myself, but we
> need at least a few additional people to man the booth effectively
> over the three days. So, can I get a show of hands from those able to
> attend, along with how many/which days please?
>
> Thanks!

I've had no volunteers to help out at this show, so unless I get at
least three firm commitments by Friday I'll be forced to cancel our
table :-(

--
Dave Page
EnterpriseDB UK: http://www.enterprisedb.com

--
Sent via pgeu-general mailing list (pgeu-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgeu-general

Re: [GENERAL] Changes for version 8.4

Is there a date for the release of 8.4 ?

joao

On Thu, 2008-09-04 at 10:09 -0400, Alvaro Herrera wrote:
> paul tilles wrote:
> > Where can I find a list of changes for Version 8.4 of postgres?
>
> It's not officially written anywhere. As a starting point you can look
> here:
> http://wiki.postgresql.org/wiki/Category:CommitFest
> Then look at each Commitfest:2008:xx page, and see the list of committed
> patches. Also, note that a certain number of patches have gone in
> without being listed there (most notably, a huge improvement in how
> EXISTS queries are handled).
>
> The definitive place, of course, is the CVS logs.
>
> --
> Alvaro Herrera http://www.CommandPrompt.com/
> PostgreSQL Replication, Consulting, Custom Development, 24x7 support
>


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [pgsql-www] wiki.postgresql.org is awfully slow this evening

On Thu, Sep 4, 2008 at 3:32 PM, Joshua D. Drake <jd@commandprompt.com> wrote:
>> No we haven't - not even remotely. At a rough count we'd need at least
>> another 17 servers (based on the current number of VMs), and we'd lose
>> the ability to move services between hardware quickly and easily. Oh,
>> and we'd have the management headache of dealing with a bunch more
>> hosting providers, as I doubt the current ones will give us that many
>> boxes.
>
> *cough*
>
> Yes I think they would.

Ya think? I'm struggling to see which of our 6 providers would pony up
more than a couple more machines.

> That being said :) I think it's a mistake. It would be a complete waste of
> resources to go to dedicated machines. Some of the machines we have we
> hardly use at this point.

I think most are certainly used, but some require far fewer resources
than others.

--
Dave Page
EnterpriseDB UK: http://www.enterprisedb.com

--
Sent via pgsql-www mailing list (pgsql-www@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-www

Re: [ADMIN] Database Conversion

> I have a new faculty member who has a large database that is in
> MySQL. We don't support MySQL so the database needs to be ported to
> PostgreSQL. Her GA, who knows MySQL, says that he has a query that he
> will run that will put the data into postgres. I thought that the
> data would have to be output to a text file and then copied into
> postgres. I don't know MySQL. I've done a conversion from Oracle
> and this is how I did it. Is he correct that he can put the data
> into a postgres database by running a MySQL query? It doesn't sound
> possible to me.

I don't think mysql has anything that exports data into postgresql. Unless
he is talking about the likes of DTS/SSIS or perl DBI, or other tools. Or
the tables are simple and he thinks he can ingeniously craft queries and
run them through pipes eventually to psql. DDL will be more difficult.


Regards,
Ben

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin

Re: [pgsql-www] wiki.postgresql.org is awfully slow this evening

Dave Page wrote:
> On Thu, Sep 4, 2008 at 3:08 PM, Greg Sabino Mullane <greg@turnstep.com> wrote:
>> -----BEGIN PGP SIGNED MESSAGE-----
>> Hash: RIPEMD160
>>
>>
>>> from remus which is in austria shows no problem for the last 24h) we had
>>> two runaway cvsps processes in the git jail(which is on the same
>>> physical host as the wiki) that more or less hogged all the CPU on the box.
>> Oh for Pete's sake, can we please get away from jails and just use
>> dedicated servers? We've had enough people volunteer hardware and
>> time to make this happen.
>
> No we haven't - not even remotely. At a rough count we'd need at least
> another 17 servers (based on the current number of VMs), and we'd lose
> the ability to move services between hardware quickly and easily. Oh,
> and we'd have the management headache of dealing with a bunch more
> hosting providers, as I doubt the current ones will give us that many
> boxes.

*cough*

Yes I think they would.

That being said :) I think it's a mistake. It would be a complete waste
of resources to go to dedicated machines. Some of the machines we have
we hardly use at this point.

Sincerely,

Joshua D. Drake

--
Sent via pgsql-www mailing list (pgsql-www@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-www

Re: [HACKERS] [PATCH] Cleanup of GUC units code

On Thu, 2008-09-04 at 09:29 -0400, Andrew Sullivan wrote:
> On Thu, Sep 04, 2008 at 01:26:44AM +0300, Hannu Krosing wrote:
>
> > So Andrews opinion was that Mb (meaning Mbit) is different from MB (for
> > megabyte) and that if someone thinks that we define shared buffers in
> > megabits can get confused and order wrong kind of network card ?
>
> I know it's fun to point and laugh instead of giving an argument, but
> the above is not what I said. What I said is that there is a
> technical difference between at least some of these units, and one
> that is relevant in some contexts where we have good reason to believe
> Postgres is used. So it seems to me that there is at least a _prima
> facie_ reason in favour of making case-based decisions. Your argument
> against that appears to be, "Well, people can be sloppy."
>
> Alvaro's suggestion seems to me to be a better one.

Agreed. Maybe this could even be implemented as a special switch to
postmaster (maybe -n or --dry-run, similar to make), rather than as a
separate command.

> > I can understand Alvaros stance more readily - if we have irrational
> > constraints on what can go into conf file, and people wont listen to
> > reason
>
> Extending your current reasoning, it's irrational that all the names
> of the parameters have to be spelled correctly.

It would be irrational to allow all letters in parameter names to be
case-insensitive, except 'k' which has to be lowercase ;)

The main point of confusion comes from not accepting KB, and this bites
you when you go down from MB, with reasoning like "OK, it seems that
units are in uppercase, so let's change 1MB to 768KB and see what
happens".

-------------
Hannu

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] [patch] GUC source file and line number]

>>> Greg Smith <gsmith@gregsmith.com> wrote:

>     name     | Recommended | Current |  Min  | Default |   Max
> -------------+-------------+---------+-------+---------+---------
>  wal_buffers | 1024kB      | 64kB    | 32 kB | 64 kB   | 2048 MB

Personally, I would take the "Min", "Default", and "Max" to mean what
Greg intends; it's the "Current" one that gives me pause. The current
value of this connection? The value that a new connection will
currently get? The value which new connections will get after a
reload with the current conf file? The value which new connections
will get after a restart with the current conf file? I can understand
how someone would take one of these four values to be what is meant by
"Default", though.

-Kevin

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [pgsql-www] wiki.postgresql.org is awfully slow this evening

On Thu, Sep 4, 2008 at 3:08 PM, Greg Sabino Mullane <greg@turnstep.com> wrote:
>
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: RIPEMD160
>
>
>> from remus which is in austria shows no problem for the last 24h) we had
>> two runaway cvsps processes in the git jail(which is on the same
>> physical host as the wiki) that more or less hogged all the CPU on the box.
>
> Oh for Pete's sake, can we please get away from jails and just use
> dedicated servers? We've had enough people volunteer hardware and
> time to make this happen.

No we haven't - not even remotely. At a rough count we'd need at least
another 17 servers (based on the current number of VMs), and we'd lose
the ability to move services between hardware quickly and easily. Oh,
and we'd have the management headache of dealing with a bunch more
hosting providers, as I doubt the current ones will give us that many
boxes.


--
Dave Page
EnterpriseDB UK: http://www.enterprisedb.com

--
Sent via pgsql-www mailing list (pgsql-www@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-www

Re: [pgsql-advocacy] famous multi-process architectures

-----BEGIN PGP SIGNED MESSAGE-----
Hash: RIPEMD160


> "Google got inspired by Postgres: they use the same
> multi-process architecture for their browser as Postgres
> already features for many years. Simply because it
> provides better crash-safety than threaded applications."

That's a heck of a stretch to say they were "inspired" by
Postgres. A multi-process model is hardly a unique development
of Postgres, and it's not like we don't still have crash problems:

"process exited abnormally and possibly corrupted shared memory"
"terminating connection because of crash of another server process"

I suspect Chrome doesn't have the same shared memory requirements
that a database does, of course.

- --
Greg Sabino Mullane greg@turnstep.com
PGP Key: 0x14964AC8 200809041014
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8

-----BEGIN PGP SIGNATURE-----

iEYEAREDAAYFAki/7dUACgkQvJuQZxSWSshiAACg6cYg8GkpoNmTIV1/edxEdB0p
AkUAn2HMQPntdqjQARWA4Z9pKef7aPwj
=mpK2
-----END PGP SIGNATURE-----

--
Sent via pgsql-advocacy mailing list (pgsql-advocacy@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-advocacy

Re: [GENERAL] Changes for version 8.4

paul tilles wrote:
> Where can I find a list of changes for Version 8.4 of postgres?

It's not officially written anywhere. As a starting point you can look
here:
http://wiki.postgresql.org/wiki/Category:CommitFest
Then look at each Commitfest:2008:xx page, and see the list of committed
patches. Also, note that a certain number of patches have gone in
without being listed there (most notably, a huge improvement in how
EXISTS queries are handled).

The definitive place, of course, is the CVS logs.

--
Alvaro Herrera http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [pgsql-www] wiki.postgresql.org is awfully slow this evening

-----BEGIN PGP SIGNED MESSAGE-----
Hash: RIPEMD160


> from remus which is in austria shows no problem for the last 24h) we had
> two runaway cvsps processes in the git jail(which is on the same
> physical host as the wiki) that more or less hogged all the CPU on the box.

Oh for Pete's sake, can we please get away from jails and just use
dedicated servers? We've had enough people volunteer hardware and
time to make this happen.

- --
Greg Sabino Mullane greg@turnstep.com
PGP Key: 0x14964AC8 200809041007
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8
-----BEGIN PGP SIGNATURE-----

iEYEAREDAAYFAki/66gACgkQvJuQZxSWSsgFbACg38bPRwFMKFoixPg80QVtiVpO
sgsAnRu2S3lKJ3udfQTKl0gFujVKA/sD
=UT0M
-----END PGP SIGNATURE-----

--
Sent via pgsql-www mailing list (pgsql-www@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-www

Re: [HACKERS] [PATCH] Cleanup of GUC units code

On Sep 4, 2008, at 6:29 AM, Andrew Sullivan wrote:

> On Thu, Sep 04, 2008 at 01:26:44AM +0300, Hannu Krosing wrote:
>
>> So Andrews opinion was that Mb (meaning Mbit) is different from MB
>> (for
>> megabyte) and that if someone thinks that we define shared buffers in
>> megabits can get confused and order wrong kind of network card ?
>
> I know it's fun to point and laugh instead of giving an argument, but
> the above is not what I said. What I said is that there is a
> technical difference between at least some of these units, and one
> that is relevant in some contexts where we have good reason to believe
> Postgres is used. So it seems to me that there is at least a _prima
> facie_ reason in favour of making case-based decisions. Your argument
> against that appears to be, "Well, people can be sloppy."

Settings in postgresql.conf are currently case-insensitive. Except
for the units.

> Alvaro's suggestion seems to me to be a better one. It is customary,
> in servers with large complicated configuration systems, for the
> server to come with a tool that validates the configuration file
> before you try to load it. Postfix does this; apache does it; so does
> BIND. Heck, even NSD (which is way less configurable than BIND) does
> this. Offering such a tool provides considerably more benefit than
> the questionable one of allowing people to type whatever they want
> into the configuration file and suppose that the server will by magic
> know what they meant.

How would such a tool cope with, for example, shared_buffers
being set to one eighth the size the DBA intended, due to their
use of Mb rather than MB? Both of which are perfectly valid
units to use to set shared buffers, even though we only support
one right now. If the answer to that is something along the lines
of we don't support megabits for shared_buffers, and never will because
nobody in their right mind would ever intend to use megabits
to set their shared buffer size... that's a useful datapoint when
it comes to designing for usability.

Cheers,
Steve


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[BUGS] BUG #4400: initdb doesn't work with partition D:

The following bug has been logged online:

Bug reference: 4400
Logged by: Jan-Peter Seifert
Email address: Jan-Peter.Seifert@gmx.de
PostgreSQL version: 8.3.3
Operating system: Windows xp Professional
Description: initdb doesn't work with partition D:
Details:

Hello,

whenever I try to run initdb on a directory on partition "D:" with the
parameter "-D" I get the error that a "file exists". I create a directory,
give full rights to the user postgres and then run initdb on it. On
partition E: it works. Both are NTFS. I have no programs open that might
access the directories ...

Strange.

--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs

Re: [ADMIN] server/db encoding (mix) issues

Jan-Peter Seifert wrote:
> we have a mix of older software still using LATIN1 as the db encoding with the
> psqlODBC drivers (ANSI), and newer software using UTF8 as the db encoding. As
> running two server instances would use up more resources(?) than just one, we'd
> like to have all dbs in one cluster. What are the cons of this solution? Which
> operating system locale should be used then? The C locale is recommended in the
> docs - also because of better performance. However, the language of the software
> is not English but German - so shouldn't there be problems with sorting German
> umlauts etc. correctly? Which encoding should the server have - UTF8/Unicode or
> LATIN1? BTW, which is the correct locale for LATIN1 and German (de_DE (my guess)
> or de_DE@euro (which seems to be for LATIN9))? Using SQL_ASCII doesn't seem to
> be a wise choice. Are there any problems when connecting with psqlODBC ANSI
> drivers if the server encoding is UTF8/Unicode? I'd be happy if you could
> enlighten me a bit.

Set your locale to de_DE.utf8 and use UTF8 as server encoding.

I would be interested to know where the documentation "recommends" using
the C locale. That would certainly not be reasonable for many uses.


--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin

Re: [HACKERS] Debugging methods

Hi,

M2Y wrote:
> I am a beginner to Postgres and I am going through the code. I would like
> to know the debugging methods used in development.

Try ./configure with '--enable-debug' and '--enable-cassert', as
outlined in the developer's FAQ [1], where you certainly find more
information as well. Then run the postmaster with '-A1 -d5'

Regards

Markus Wanner

[1]: http://wiki.postgresql.org/wiki/Developer_FAQ

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [ADMIN] pg_dump etc. versions

Jan-Peter Seifert wrote:
> I'm wondering if there's a compatibility list for the tools supplied with
> PostgreSQL - e.g. psql seems to be very server-version-specific (only major
> or also minor versions?).
> For pg_dump I'd say users should use the version of the target server if it's
> already installed, but is this also the case if the target server version is
> older? Am I completely wrong? Should I always use the pg_dump from the source
> server? When migrating from 8.1 to 8.2 I get several errors when restoring
> from a dump made with the source server's pg_dump, with commands for creating
> users and for a lib that had been integrated into the core. When using the
> target server's pg_dump for the dump I don't. But is then really everything
> okay? And pgAdmin comes with its own set of the PostgreSQL tools ...

I think the only thing that we really check is that pg_dump of a newer
version can dump databases from an older version server. All the other
tools probably only work (completely) with a server from the same major
release.


--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin

Re: [ADMIN] Database Conversion

On Thursday 04 September 2008 16:24:34, Carol Walter wrote:
> Hello, All,
>
> I have a new faculty member who has a large database that is in
> MySQL. We don't support MySQL so the database needs to be ported to
> PostgreSQL. Her GA, who knows MySQL, says that he has a query that he
> will run that will put the data into postgres. I thought that the
> data would have to be output to a text file and then copied into
> postgres. I don't know MySQL. I've done a conversion from Oracle
> and this is how I did it. Is he correct that he can put the data
> into a postgres database by running a MySQL query? It doesn't sound
> possible to me.

If his query is like:

SELECT 'INSERT INTO PostgreSqlTable(...) VALUES(''||somevalue...||'')' FROM mysqltable ....

then it is possible
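Something along these lines, for illustration (the table and column names
are made up):

    SELECT CONCAT('INSERT INTO people (id, name) VALUES (',
                  id, ', ', QUOTE(name), ');')
    FROM people;

Run that with the mysql command-line client, redirect the output to a file,
and feed the file to psql. One caveat: MySQL's QUOTE() escapes embedded
quotes with backslashes, which PostgreSQL only interprets the same way
while standard_conforming_strings is off, so the escaping still needs
checking.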

>
> Carol
>

--
Achilleas Mantzios

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin

Re: [ADMIN] Database Conversion

On Thursday 04 September 2008 16:24:34, Carol Walter wrote:
> Hello, All,
>
> I have a new faculty member who has a large database that is in
> MySQL. We don't support MySQL so the database needs to be ported to
> PostgreSQL. Her GA, who knows MySQL, says that he has a query that he
> will run that will put the data into postgres. I thought that the
> data would have to be output to a text file and then copied into
> postgres. I don't know MySQL. I've done a conversion from Oracle
> and this is how I did it. Is he correct that he can put the data
> into a postgres database by running a MySQL query? It doesn't sound
> possible to me.
>

We recently did a conversion from MS Access (I don't know the details) to pgsql 8.3.3.
The MS Access-aware guy just set up the correct postgresql ODBC settings,
I adjusted the pgsql backend to accept connections from the MS workstation,
then an EXPORT was performed from MS Access to the pgsql datasource,
and that's all.
Of course, all I got was the exact MS Access tables, which were then useful
for populating my newly designed pgsql tables.

One caveat here, most commonly, is the design of the DB.
The lower end you go (mysql -> sql server -> access -> COBOL, etc...),
the greater the chance that you need to re-engineer the schema.

> Carol
>

--
Achilleas Mantzios

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin

Re: [HACKERS] Conflict resolution in Multimaster replication(Postgres-R)

Hello Srinivas,

M2Y wrote:
> Markus: It looks like the hybrid approach used by Postgres-R(as
> described in that paper) is good.

Well, yeah. That's why I am working on it ;-)

You are very welcome to download the patch and dig into the sources. See
www.postgres-r.org for more information.

To answer your original question in more details:

> Suppose there are two sites in the group, lets say, A and B and are
> managing a database D. Two transactions TA and TB started in sites A
> and B respectively, at nearly same time, wanted to update same row of
> a table in the database. As, no locking structures and other
> concurrency handling structures are replicated each will go ahead and
> do the modifications in their corresponding databases and sends the
> writeset.

Correct so far. Note that both transactions might have applied changes,
but they have not committed, yet.

In eager mode we rely on the Group Communication System to deliver these
two changesets [1] in the same order on both nodes. Let's say both
receive TA's changeset first, then TB's.

The backend which processed TA on node A can commit, because its changes
don't conflict with anything else. The changeset of TB is forwarded to a
helper backend, which tries to apply its changes. But the helper backend
detects the conflict against TA and aborts (because it knows TA takes
precedence on all other nodes as well).

On node B, the backend which processed TB has to wait with its commit,
because another changeset, namely TA's came in first. For that changeset
a helper backend is started as well, which applies the changes of TA.
During application of changes, that helper backend detects a conflict
against the (yet uncommitted) changes of TB. As it knows its transaction
TA takes precedence over TB (on all other nodes as well), it tells TB
to abort and continues applying its own changes.

I hope that was an understandable explanation.

Regards

Markus Wanner


[1]: In the original Postgres-R paper, these are called writesets. But
in my implementation, I've altered its meaning somewhat. Because of that
(and because I admittedly like "changeset" better), I've decided to call
them changesets now...

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] [PATCH] Cleanup of GUC units code

On Thu, Sep 04, 2008 at 01:26:44AM +0300, Hannu Krosing wrote:

> So Andrews opinion was that Mb (meaning Mbit) is different from MB (for
> megabyte) and that if someone thinks that we define shared buffers in
> megabits can get confused and order wrong kind of network card ?

I know it's fun to point and laugh instead of giving an argument, but
the above is not what I said. What I said is that there is a
technical difference between at least some of these units, and one
that is relevant in some contexts where we have good reason to believe
Postgres is used. So it seems to me that there is at least a _prima
facie_ reason in favour of making case-based decisions. Your argument
against that appears to be, "Well, people can be sloppy."

Alvaro's suggestion seems to me to be a better one. It is customary,
in servers with large complicated configuration systems, for the
server to come with a tool that validates the configuration file
before you try to load it. Postfix does this; apache does it; so does
BIND. Heck, even NSD (which is way less configurable than BIND) does
this. Offering such a tool provides considerably more benefit than
the questionable one of allowing people to type whatever they want
into the configuration file and suppose that the server will by magic
know what they meant.

> I can understand Alvaros stance more readily - if we have irrational
> constraints on what can go into conf file, and people wont listen to
> reason

Extending your current reasoning, it's irrational that all the names
of the parameters have to be spelled correctly. Why can't we just
accept log_statement_duration_min? It's _obvious_ that it's the same
thing as log_min_duration_statement! It's silly to expect that
harried administrators have to spell these options correctly. Why
can't we just parse the whole file, separating each label by "_"? Then if
any arrangement of those labels matches a "real" configuration
parameter, select that one as the thing to match and proceed from
there?

A


--
Andrew Sullivan
ajs@commandprompt.com
+1 503 667 4564 x104
http://www.commandprompt.com/

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[ADMIN] Database Conversion

Hello, All,

I have a new faculty member who has a large database that is in
MySQL. We don't support MySQL so the database needs to be ported to
PostgreSQL. Her GA, who knows MySQL, says that he has a query that he
will run that will put the data into postgres. I thought that the
data would have to be output to a text file and then copied into
postgres. I don't know MySQL. I've done a conversion from Oracle
and this is how I did it. Is he correct that he can put the data
into a postgres database by running a MySQL query? It doesn't sound
possible to me.

Carol

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin

[HACKERS] Debugging methods

Hello,

I am a beginner to Postgres and I am going through the code. I would like
to know the debugging methods used in development.

Some of my requirements are: for a given query, how parse structures
are created in pg_parse_query, how they are analyzed and rewritten in
pg_analyze_and_rewrite, and how the final plan is created in
pg_plan_queries. I will go through the code, but I would like to know any
debugging methods available to understand what happens for a given
query.

I have searched the net and was unable to find anything. Sorry if this
is documented somewhere and I am asking again.

Thanks,
Srinivas

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[GENERAL] xpath_bool_ns() and xml2

Hi,

 

does anybody know how to use the xml2 function xpath_bool with namespaces?

I have used this function successfully as long as my xml documents haven't
contained namespaces. I searched with Google and found some readme file in
which the function xpath_bool_ns was available, which would probably resolve
my namespace-aware xpath issue, but this function isn't contained in the
pqxml.dll that comes with postgres 8.3.3.

 

Can anybody help me or point me to some other solution? I just want to run
some xpath queries against a table column and get a boolean as the result.

 

Cheers, Tobias

Re: [HACKERS] [PATCH] Cleanup of GUC units code

Hannu Krosing escribió:
> On Wed, 2008-09-03 at 20:01 -0400, Alvaro Herrera wrote:

> > Yes there is --- it's the SI.
> >
> > http://en.wikipedia.org/wiki/SI#SI_writing_style
> >
> > I don't know about it being "evil" and punishment, but it's wrong.
>
> SI defines decimal-based prefixes, where k = kilo = 1000, so our current
> conf use is also wrong.

Actually, this has been a moving target. For a certain length of time,
some standards did accept that k meant 1024 "in computing context"; see

http://en.wikipedia.org/wiki/Binary_prefix

So we're not _absolutely_ wrong here; at least not until KiB is more
widely accepted and kB more widely rejected as meaning 1024 bytes. The
relevant standard has been published just this year by ISO.

http://en.wikipedia.org/wiki/ISO/IEC_80000#Binary_prefixes

So this is new territory, whereas case-sensitivity of prefixes and unit
abbreviations has existed for decades.

--
Alvaro Herrera http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[GENERAL] Changes for version 8.4

Where can I find a list of changes for Version 8.4 of postgres?

Paul Tilles

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general