Friday, July 25, 2008

Re: [GENERAL] Strange Postgresql behavior solved

On Saturday 26. July 2008, Owen Hartnett wrote:
>Probably some funky stuff with the router (not one of their expensive
>ones) that caused all the consternation, but I originally thought
>corrupt database (because I could get 117 records to come out fine,
>but not the 118th). Also, I had narrowed it down to failing only
>when accessing the last three fields of that 118th record, the first
>40 fields were fine.

That sounds a lot like the "game mode" router bug:

http://www.azureuswiki.com/index.php/Torrents_stop_at_99_percent
--
Leif Biberg Kristensen | Registered Linux User #338009
Me And My Database: http://solumslekt.org/blog/
My Jazz Jukebox: http://www.last.fm/user/leifbk/

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [SQL] Test if a table is present

On Sat, 26 Jul 2008, "Ole Ekerhovd" <olehare@online.no> writes:
> How can I test if a table is present in database?

SELECT TRUE
  FROM information_schema.tables
 WHERE table_name = '<TABLE>'
   AND table_schema = '<SCHEMA>';

BTW, this query is portable across database systems that support the
information schema, which is almost a de facto standard.
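
For example, a minimal variation that always returns exactly one row with a
boolean (the schema and table names here are just placeholders):

SELECT EXISTS (
    SELECT 1
      FROM information_schema.tables
     WHERE table_schema = 'public'
       AND table_name   = 'my_table'
) AS table_exists;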


Regards.

--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql

Re: [HACKERS] Research/Implementation of Nested Loop Join optimization

Ron Mayer <rm_pg@cheapcomplexdevices.com> writes:
> Tom Lane wrote:
>> A backwards scan will get no such overlapping and thus be up to 2X
>> slower, unless the kernel is smart enough to do read-ahead for
>> descending-order read requests. Which seems not too probable.

> Linux's old adaptive readahead patches claimed to[1]:

I didn't say that there were *no* platforms that could do it.

regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [pgsql-es-ayuda] ayuda conectarse a diferentes Bd en postgres

2008/7/25 andrea lisseth fuentes meneses <andri04_7@hotmail.com>:
> Hello everyone:
>
> I need some urgent help:
>
> I am new to the postgres database engine and I need to run a query
> that pulls in tables from different databases. I am using something like
> select * from dbo.desercio.edad, but I get the following error:
>
> ERROR: cross-database references are not implemented:
> «dbo.desercion.edad»
>

You can't do that natively in postgres... in fact it doesn't make much
sense; that's what schemas are for...
If you really want to do that, you can go to the trouble of using dblink, or try plproxy.
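
If you go the dblink route, a minimal sketch, assuming the contrib/dblink
module is installed (the connection string, table and column names here are
only placeholders):

SELECT *
  FROM dblink('dbname=otherdb', 'SELECT edad FROM desercion')
       AS t(edad integer);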

--
Regards,
Jaime Casanova
PostgreSQL support and training
Guayaquil - Ecuador
Cel. (593) 87171157
--
TIP 5: Have you read our extensive FAQ?
http://www.postgresql.org/docs/faqs.FAQ.html

Re: [SQL] Test if a table is present



You can find it through a query like this:


select tablename from pg_tables where tablename = 'tbl_updated_status_master';

Regards,

Anoop

Re: [HACKERS] Research/Implementation of Nested Loop Join optimization

Tom Lane wrote:
> Gregory Stark <stark@enterprisedb.com> writes:
>> "Manoel Henrique" <mhenriquesgbd@gmail.com> writes:
>>> Yes, I'm relying on the assumption that backwards scan has the same cost as
>>> forward scan, why shouldn't it?
>
> ... we expect that forward scans will result
> in the kernel doing read-ahead, ...
> A backwards scan will get no such overlapping and thus be up to 2X
> slower, unless the kernel is smart enough to do read-ahead for
> descending-order read requests. Which seems not too probable.

Linux's old adaptive readahead patches claimed to[1]:
  "It also have methods to detect some less common cases:
   - reading backward"
Interestingly, the author of that patch used postgres as the example
application that benefits from the patch (30%).

I'm not sure if the backward reading feature got kept
in the simplified on-demand readahead that seems to have
superseded the adaptive readahead stuff in 2.6.23[2].

[1] http://lwn.net/Articles/185469/
[2] http://kernelnewbies.org/Linux_2_6_23#head-102af265937262a7a21766ae58fddc1a29a5d8d7


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[SQL] Test if a table is present

How can I test if a table is present in the database?
 
Regards,
Ole

Re: [GENERAL] Strange Postgresql behavior solved

Owen Hartnett <owen@clipboardinc.com> writes:
> I spent a day on this, and it's really not a PostgreSQL issue, but I
> thought I'd post it in case someone else comes down with it.

> Scenario:

> I moved the physical location and networking environment of the
> server. It's on Mac OS X - XServe, but that isn't germaine to the
> story. Originally, the server was the DHCP router for the network,
> now it sits in a demilitarized zone off a DLink router that's
> providing DHCP and NAT.

> Symptoms:

> Postgres was unable to resolve *some* simple queries, like "Select *
> from salestable where thekey = 118", although it would work for
> thekey values of 1 all the way to 117. The connection would just
> freeze, and timeout after a couple of minutes.

> My application worked this way, and so did pgAdmin, but Navicat LE didn't!

> Solution:

> I finally realized that my application and pgAdmin were both
> accessing the server using the domain name, and Navicat was using the
> IP number. Indeed, replacing the connection data with the IP number
> on the app and pgAdmin made the world safe again.

What this sounds like to me is that you've got two postmasters running
on different ports, or something close to that. The specific behavior
you describe is absolutely not sensible.

regards, tom lane

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

[GENERAL] Strange Postgresql behavior solved

I spent a day on this, and it's really not a PostgreSQL issue, but I
thought I'd post it in case someone else comes down with it.

Scenario:

I moved the physical location and networking environment of the
server. It's on Mac OS X - XServe, but that isn't germane to the
story. Originally, the server was the DHCP router for the network,
now it sits in a demilitarized zone off a DLink router that's
providing DHCP and NAT.

Symptoms:

Postgres was unable to resolve *some* simple queries, like "Select *
from salestable where thekey = 118", although it would work for
thekey values of 1 all the way to 117. The connection would just
freeze, and time out after a couple of minutes.

My application worked this way, and so did pgAdmin, but Navicat LE didn't!

Solution:

I finally realized that my application and pgAdmin were both
accessing the server using the domain name, and Navicat was using the
IP number. Indeed, replacing the connection data with the IP number
on the app and pgAdmin made the world safe again.

Probably some funky stuff with the router (not one of their expensive
ones) caused all the consternation, but I originally thought it was a
corrupt database (because I could get 117 records to come out fine,
but not the 118th). Also, I had narrowed it down to failing only
when accessing the last three fields of that 118th record; the first
40 fields were fine.

-Owen

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [HACKERS] [RFC] Unsigned integer support.

"Ryan Bradetich" <rbradetich@gmail.com> writes:
> ... I did have the following
> concern looking through src/backend/utils/adt/int8.c: There is code that is
> optionally compiled based on the INT64_IS_BUSTED pre-processor define.
> Is this pre-processor define something I should worry about for portability
> with this plan?

I wouldn't worry, really ;-). Five or more years ago, it seemed
important for PG to work on machines without functional int64 support,
but there is little if any evidence that anyone is using current PG
releases on such platforms. I might well be the last active PG hacker
who gives a damn about that case at all, and even I long ago stopped
expecting anything beyond core functionality to work on such a machine.
Since your proposed unsigned types certainly aren't core functionality,
I see no reason that they should need to work on INT64_IS_BUSTED
platforms.

> After I get uint types implemented, for fun I might try some benchmarks
> to see if I can detect the int8 overhead on a 32-bit system.

Right, you need to check that before drinking the kool-aid ...

regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] [RFC] Unsigned integer support.

Hello Dann,

On Fri, Jul 25, 2008 at 1:06 PM, Dann Corbit <DCorbit@connx.com> wrote:

> At the cost of one bit of storage, you have compatible types using

Thanks for your review and feedback! Unfortunately, I do need the full range
of the unsigned types for the project I am looking at. The reason I started
working on these types is because it seemed wasteful to use the next size
larger signed integer for the storage type of the unsigned integer.

Thanks for the suggestion!

- Ryan

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] [RFC] Unsigned integer support.

Tom,

On Fri, Jul 25, 2008 at 12:32 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

> Consider the idea of not having any uint4-specific arithmetic operators,
> but instead providing the following:
>
> * assignment casts from int4 and int8 to uint4
> (these throw error if out of range, of course)
> * implicit cast from uint4 to int8 (can never fail)
>
> The effect of providing the latter cast would be that any arithmetic
> involving a uint4 column would automatically be done in int8. Which
> would make it a shade slower than a native implementation, but probably
> not enough slower to be a problem --- and you'd avoid having to write
> dozens of operators and underlying support functions. Storing into the
> uint4 column would work fine with no extra notation because of the
> assignment casts.
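
(For illustration only, a rough sketch of what those casts could look like,
assuming a uint4 type and its conversion functions already exist -- the
function names below are made up:)

-- assignment casts into uint4; these would raise an error on out-of-range input
CREATE CAST (int4 AS uint4) WITH FUNCTION int4_to_uint4(int4) AS ASSIGNMENT;
CREATE CAST (int8 AS uint4) WITH FUNCTION int8_to_uint4(int8) AS ASSIGNMENT;

-- implicit cast out of uint4; it can never fail, so any arithmetic
-- involving a uint4 column is silently promoted to int8
CREATE CAST (uint4 AS int8) WITH FUNCTION uint4_to_int8(uint4) AS IMPLICIT;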

This is an interesting idea that I will test out tonight. I did have
the following
concern looking through src/backend/utils/adt/int8.c: There is code that is
optionally compiled based on the INT64_IS_BUSTED pre-processor define.
Is this pre-processor define something I should worry about for portability
with this plan?

After I get uint types implemented, for fun I might try some benchmarks
to see if I can detect the int8 overhead on a 32-bit system.

> Moreover, you'd avoid cluttering the system with a pile of cross-type
> operators, which we have recently realized are not a good thing, because
> they increase the likelihood of "ambiguous operator" problems --- see
> http://archives.postgresql.org/pgsql-hackers/2008-06/msg00750.php

Good to know. Thanks for the link.

> For uint8 you'd have to promote to numeric to guarantee no failure
> in the implicit cast; which is going to be a rather bigger performance
> hit, but I don't really see uint8 as being a type with huge demand.

Hopefully I will not need the uint8 type. Right now for a project I am
looking at I need the uint2 and uint4 types. uint8 support can come
later if it is needed or requested.

> Now you probably *will* want cross-type comparison operators, if you
> are going to support indexing of unsigned columns, so that something
> like
> uint4col > 42
> can be indexed without any casting. But limiting yourself to the six
> basic comparison operators certainly makes it a much less bulky project.

This sounds excellent! Hopefully by using these operators I will be able to
avoid most of the casting to int8 for my use, while still providing the
complete functionality for this type.

Thanks again for your review and feedback!

- Ryan

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [PATCHES] [HACKERS] WITH RECUSIVE patches 0723

> Thanks for the patch :)
>
> Now, I get a different problem, this time with the following code
> intended to materialize paths on the fly and summarize down to a
> certain depth in a tree:
>
> CREATE TABLE tree(
> id INTEGER PRIMARY KEY,
> parent_id INTEGER REFERENCES tree(id)
> );
>
> INSERT INTO tree
> VALUES (1, NULL), (2, 1), (3,1), (4,2), (5,2), (6,2), (7,3), (8,3),
> (9,4), (10,4), (11,7), (12,7), (13,7), (14, 9), (15,11), (16,11);
>
> WITH RECURSIVE t(id, path) AS (
> VALUES(1,ARRAY[NULL::integer])
> UNION ALL
> SELECT tree.id, t.path || tree.id
> FROM tree JOIN t ON (tree.parent_id = t.id)
> )
> SELECT
> t1.id, count(t2.*)
> FROM
> t t1
> JOIN
> t t2
> ON (
> t1.path[1:2] = t2.path[1:2]
> AND
> array_upper(t1.path,1) = 2
> AND
> array_upper(t2.path,1) > 2
> )
> GROUP BY t1.id;
> ERROR: unrecognized node type: 203

Thanks for the report. Here are the new patches from Yoshiyuki against
CVS HEAD. I have also added your test case to the regression test.

> Please apply the attached patch to help out with tab
> completion in psql.

Thanks. Your patches have been included.
--
Tatsuo Ishii
SRA OSS, Inc. Japan

Re: [GENERAL] Sequence

> select * from guests;
> user_id | guest_id | name
> ---------+----------+------------
> 1 | 1 | Mark Twain
> 1 | 2 | Anna Black
> 2 | 3 | John Black
> (3 rows)
>
> I want to have a different result of insert command:
>
> user_id | guest_id | name
> ---------+----------+------------
> 1 | 1 | Mark Twain
> 1 | 2 | Anna Black
> 2 | 1 | John Black
>
> Sequence guests_guest_id_seq is not connected with the field user_id. It
> increases in spite of values of user_id. How can I solve this problem?

If possible, design your application to be happy with the way it is
already. Those keys shouldn't really be user visible anyway.

If you really have to have per-user guest IDs (and, I'm guessing,
contiguous sequences of guest IDs) you'll have to do a fair bit of work.
The usual approach seems to be using a trigger function to trap inserts
and deletes and rewrite the guest_id field appropriately.

If you don't need contiguous guest IDs - ie you're happy with a sequence
like "1 3 4 5 8" after IDs 2, 6 and 7 have been DELETEd - then you can
emulate a sequence with a per-user counter. Eg:

CREATE TABLE users (
    user_id      SERIAL PRIMARY KEY,
    guest_id_ctr INTEGER NOT NULL DEFAULT 0
);

CREATE TABLE guest (
    user_id  INTEGER,
    guest_id INTEGER,
    PRIMARY KEY(user_id, guest_id)
);

then do inserts into guest with a sequence of operations like this
(assuming the user_id of interest is "111"):

UPDATE users
   SET guest_id_ctr = guest_id_ctr + 1
 WHERE user_id = 111
RETURNING guest_id_ctr;

-- Now, using the value obtained with the previous statement, say "4":

INSERT INTO guest (user_id, guest_id)
VALUES (111, 4);

That assumes you're using a version of PostgreSQL new enough to support
UPDATE ... RETURNING. If not, you need to use SELECT FOR UPDATE to
obtain the value, followed by a separate UPDATE statement to actually
increment it.
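
As a variation, a minimal sketch that hides the counter bump behind a
BEFORE INSERT trigger, so the application only issues a plain
INSERT INTO guest (user_id) VALUES (...). It assumes the two tables above,
requires plpgsql to be installed, is not production-hardened, and the
function and trigger names are made up:

CREATE FUNCTION assign_guest_id() RETURNS trigger AS $$
BEGIN
    -- bump the per-user counter and use the new value for this guest
    UPDATE users
       SET guest_id_ctr = guest_id_ctr + 1
     WHERE user_id = NEW.user_id
    RETURNING guest_id_ctr INTO NEW.guest_id;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER guest_id_assign
    BEFORE INSERT ON guest
    FOR EACH ROW EXECUTE PROCEDURE assign_guest_id();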

If you do need contiguous values of guest_ids within a given user_id
then you'll need to use a different approach, most likely PL/PgSQL
triggers or the use of function wrappers for DML operations on the
table. However, in most cases an application requirement of contiguous
IDs is a design fault that should be fixed, rather than hacked around in
the database.

--
Craig Ringer

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [PATCHES] pg_dump additional options for performance

Tom Lane wrote:
> * --schema-before-data, --data-only, and --schema-after-data can be

I thought you were arguing for some better names at one point? Those seem
very confusing to me, especially "--schema-after-data". I know it means
"the parts of the schema that come after the data" but it could
also be read as other ways, including "put the schema after the data" - which
makes no sense, but the name is not exactly intuitive either. "Pre" and "Post"
at least are slightly better, IMO. How about --pre-data-schema
and --post-data-schema? Or --pre-data-section and --post-data-section?
Or (my favorites) --pre-data-commands and --post-data-commands? As the
existing docs say, "commands" are what we are generating.

> them as data, because the objects they are attached to are data. I kind
> of like the latter approach because it would create an invariant that
> comments appear in the same dump section as the object commented on.
> Thoughts?)

+1 on putting them next to the object commented on.

> And there's yet another issue here, which is that it's not entirely clear
> that the type of an object uniquely determines whether it's before or
> after data.

Wouldn't that be a problem with current dumps as well then?

> We could solve that problem by inserting a "dummy data" TOC entry where
> the data would have appeared, but this will only work in new archive
> files. With an implementation like this, pg_restore with
> --schema-before-data or --schema-after-data won't behave very nicely on a
> pre-8.4 --schema-only archive file. (Presumably it would act as though
> all the objects were "before" data.) Is that a small enough corner case
> to live with in order to gain implementation simplicity and robustness?

I'm not comfortable with corner cases for pg_restore backwards compatibility.
What exactly would happen (worst case) in that scenario?


--
Greg Sabino Mullane greg@turnstep.com
PGP Key: 0x14964AC8 200807252235
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8

--
Sent via pgsql-patches mailing list (pgsql-patches@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-patches

Re: [HACKERS] Review: DTrace probes (merged version) ver_03

Zdenek Kotala wrote:
> I performed review and I prepared own patch which contains only probes
> without any issue. I suggest commit this patch because the rest of patch
> is independent and it can be committed next commit fest after rework.
>
> I found following issues:

I noticed that CLOG, Subtrans and Multixact probes are added during a
regular Checkpoint, but not during a shutdown flush. I think the probes
should count that too (probably with the same counter).

In the pgstat_report_activity probe, is it good to call the probe before
taking the fast path out?

In the BUFFER_READ_START probe, we do not include the smgrnblocks()
call, which could be significant since it includes a number of system
calls.

I think BUFFER_HIT and BUFFER_MISS should include the "isLocalBuf" flag.
I also wonder whether BUFFER_HIT should be called in the block above,
lines 220-238, where we check the "found" flag, i.e.

if (isLocalBuf)
{
    ReadLocalBufferCount++;
    bufHdr = LocalBufferAlloc(smgr, blockNum, &found);
    if (found)
    {
        LocalBufferHitCount++;
        TRACE_POSTGRESQL_BUFFER_HIT(true);      /* local buffer */
    }
    else
    {
        TRACE_POSTGRESQL_BUFFER_MISS(true);     /* ditto */
    }
}
else
{
    ReadBufferCount++;

    /*
     * lookup the buffer. IO_IN_PROGRESS is set if the requested block is
     * not currently in memory.
     */
    bufHdr = BufferAlloc(smgr, blockNum, strategy, &found);
    if (found)
    {
        BufferHitCount++;
        TRACE_POSTGRESQL_BUFFER_HIT(false);     /* not local */
    }
    else
    {
        TRACE_POSTGRESQL_BUFFER_MISS(false);    /* ditto */
    }
}

(note that this changes the semantics w.r.t. the isExtend flag).


I understand the desire to have DEADLOCK_FOUND, but is there really a
point in having a DEADLOCK_NOTFOUND probe? Since this code runs every
time someone waits for a lock longer than a second, there would be a lot
of useless counts and nothing useful.

I find it bogus that we include query rewriting in QUERY_PARSE_START/DONE.
I think query rewriting should be a separate probe.

QUERY_PLAN_START is badly placed -- it should be after the check for
utility commands (alternatively there could be a QUERY_PLAN_DONE in the
fast way out for utility commands, but in that case a "is utility" flag
would be needed. I don't see that there's any point in tracing planning
of utility commands though).

Why are there no probes for the v3 protocol stuff? There should
be probes for Parse, Bind, Execute message processing too, for
completeness. Also, I wonder if these probes should be in the for(;;)
loop in PostgresMain() instead of sprinkled in the other routines.
I note that the probes in PortalRun and PortalRunMulti are schizophrenic
about whether they include utility functions or not.

--
Alvaro Herrera http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [GENERAL] php + postgresql

On Sat, 2008-07-26 at 11:13 +0930, admin wrote:

> Anyway, while I'm quite happy to continue banging out things that "just
> work" in PHP for the time being, you suggest (in a subsequent post) that
> there is one scripting language in particular that you'd use ... might I
> enquire which language that is, and why? Just curious, I'm definitely
> not looking for an ideological debate.

You do realize that you just opened one of the longest, loudest and most
inherently beer inducing arguments known to man since Emacs vs Vi?
(answer: Joe) So why not! I use Python. I love Python. Although I
guarantee you that others will say ruby, perl, java (well maybe not
java).

The answer to your question is:

Use what works for you.

I used PHP for years; I actually used Perl before PHP but got tired of
the Perl oddness. I moved on to Python and love it. There are things in
it I don't like (just see subprocess) but for the most part, it's
gorgeous.

Sincerely,

Joshua D. Drake

--
The PostgreSQL Company since 1997: http://www.commandprompt.com/
PostgreSQL Community Conference: http://www.postgresqlconference.org/
United States PostgreSQL Association: http://www.postgresql.us/
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [GENERAL] php + postgresql

> Well, no. PHP is conceptually undisciplined and confusing. I would
> not compare this with PostgreSQL itself, which is very professionally
> developed with a great vision. PHP is just, and always was, a hack.

I didn't mean to compare PG and PHP at the level of engineering quality,
but to suggest that perhaps both suffer from people continuing to hold
rigid preconceptions about them based on how things were 5 or 10 years ago.

Anyway, while I'm quite happy to continue banging out things that "just
work" in PHP for the time being, you suggest (in a subsequent post) that
there is one scripting language in particular that you'd use ... might I
enquire which language that is, and why? Just curious, I'm definitely
not looking for an ideological debate.

Re the possible heightened level of animosity to PHP in PG circles, if
it exists, could it have anything to do with PHP's close association
with MySQL? The animosity, by the way, seems to go both ways; I think I
saw something about Rasmus Lerdorf bagging PostgreSQL on Slashdot(?)
recently. Personally, I'm not overly concerned either way. I'm happy to
leave the academic debates to those with the time to pursue them.

I'm the first to admit I know little about the art and science of
relational database design and admin. But up to this point, I haven't
needed to. It doesn't take rocket science to store and retrieve some
text for a few web pages in a database.

Anyway, this is proving an interesting, lively and helpful community,
hope to learn lots more about doing things the PostgreSQL way ... with
PHP :-).

Mick

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [GENERAL] A couple of newbie questions ...

On Jul 23, 2008, at 12:00 PM, Shane Ambler wrote:

> To be honest I hadn't seen the use of INSERT INTO table (fld_x,
> fld_y,fld_z) VALUES (DEFAULT, 'y','z') before, I have always gone
> with INSERT INTO table (fld_x, fld_y,fld_z) VALUES (NULL, 'y','z')
>
> is DEFAULT a better option than using NULL? or is it just a
> preference to spell out the implied default entry?
>
> I've only used DEFAULT in CREATE TABLE(...)

The semantics of using DEFAULT or NULL are completely different. If the
column has a default value (for example, 0), then including DEFAULT in
your insert list will give you the default value of zero. But if you
include NULL in your insert list -- you'll get NULL, not zero. If
nothing is included for the column in your insert list, you'll get the
column default if it has one, otherwise NULL.
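
A minimal sketch of the difference (the table and column names are just
placeholders):

CREATE TABLE t (name text, flag integer DEFAULT 0);

INSERT INTO t (name, flag) VALUES ('a', DEFAULT);  -- flag = 0, the column default
INSERT INTO t (name, flag) VALUES ('b', NULL);     -- flag = NULL, overriding the default
INSERT INTO t (name)       VALUES ('c');           -- flag omitted entirely: flag = 0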


John DeSoi, Ph.D.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

[GENERAL] Sequence

Hello.

Each user has a lot of guests. Each guest only has one user.

1. I create a table users:

CREATE TABLE users (
    user_id SERIAL PRIMARY KEY,
    name    varchar(256)
);

2. I create a table guests:

CREATE TABLE guests (
    user_id  integer,
    guest_id SERIAL,
    name     varchar(256),
    PRIMARY KEY (user_id, guest_id),
    FOREIGN KEY (user_id) REFERENCES users(user_id) ON DELETE CASCADE
);

3. I add two new users:

insert into users (name) values ('alex2008');
insert into users (name) values ('jack2008');

select * from users;
 user_id |   name
---------+----------
       1 | alex2008
       2 | jack2008
(2 rows)

4. I add two new guests to the user alex2008 and one new guest to the
user jack2008:

insert into guests (user_id, name) values (1, 'Mark Twain');
insert into guests (user_id, name) values (1, 'Anna Black');
insert into guests (user_id, name) values (2, 'John Black');

select * from guests;
 user_id | guest_id |    name
---------+----------+------------
       1 |        1 | Mark Twain
       1 |        2 | Anna Black
       2 |        3 | John Black
(3 rows)

I want the insert commands to produce a different result:

 user_id | guest_id |    name
---------+----------+------------
       1 |        1 | Mark Twain
       1 |        2 | Anna Black
       2 |        1 | John Black

The sequence guests_guest_id_seq is not tied to the field user_id. It
increases regardless of the value of user_id. How can I solve this problem?


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [HACKERS] Research/Implementation of Nested Loop Join optimization

Alvaro Herrera <alvherre@commandprompt.com> writes:
> Tom Lane wrote:
>> Good joke, but to be serious: we expect that forward scans will result
>> in the kernel doing read-ahead, which will allow overlapping of
>> CPU work to process one page with the I/O to bring in the next page.

> I wonder if this is spoiled (or rather, the backwards case fixed) by the
> attempts to call posix_fadvise() on certain types of scan.

Yeah, I started wondering about that too after sending off the above.
The fadvise patch might eliminate the distinction ... on platforms where
fadvise exists and actually works well.

regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] Research/Implementation of Nested Loop Join optimization

Tom Lane wrote:
> Gregory Stark <stark@enterprisedb.com> writes:
> > "Manoel Henrique" <mhenriquesgbd@gmail.com> writes:
> >> Yes, I'm relying on the assumption that backwards scan has the same cost as
> >> forward scan, why shouldn't it?
>
> > Because hard drives only spin one direction
>
> Good joke, but to be serious: we expect that forward scans will result
> in the kernel doing read-ahead, which will allow overlapping of
> CPU work to process one page with the I/O to bring in the next page.

I wonder if this is spoiled (or rather, the backwards case fixed) by the
attempts to call posix_fadvise() on certain types of scan.

--
Alvaro Herrera http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] Adding WHERE clause to pg_dump

On Fri, Jul 25, 2008 at 11:17:20PM +0100, Gregory Stark wrote:
> "Joshua D. Drake" <jd@commandprompt.com> writes:
>
> > How do we deal with this?
> >
> > pg_dump -w "last_update_timestamp < ..." -t 'table*'
> >
> > What I see is a recipe for inconsistent, un-restorable backups without a
> > user realizing what they have done. The only way to deal with the above
> > is:
> >
> > 1. Wildcards aren't allowed if you have -w
> > 2. You dump everything, if the WHERE clause isn't relevant you just dump
> > the whole table
>
> There's always
>
> 3. Apply the WHERE clause to all tables and if there's a table missing
> columns referenced in the where clause then fail with the appropriate
> error.
>
> Which seems like the right option to me. The tricky bit would be how to deal
> with cases where you want a different where clause for different tables. But
> even if it doesn't handle all cases that doesn't mean a partial solution is
> unreasonable.

Actually, Davy's patch does deal with the case "where you want a different
where clause for different tables".

-dg


--
David Gould daveg@sonic.net 510 536 1443 510 282 0869
If simplicity worked, the world would be overrun with insects.

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [SQL] PERSISTANT PREPARE (another point of view)

1. I like the idea, because right now I'm having to track which prepared statement (sets) are in which session. I'm using xcache to track this, but it's a little bit dangerous. I could query the database first, but the whole idea is to take load off the db.

2. Perhaps "global" could simply mean that the definition is global: if a statement is called in a session and does not yet exist there, the session prepares it first from the global definition. There would need to be a back reference in case the global definition was changed or dropped.

3. I don't understand your statement about how prepared statements can be used as subqueries - do you mean something like

select .. from tab1 t1 where t1.col1 in( exec prep1(..) )

or exactly what?

--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql

Re: [HACKERS] Research/Implementation of Nested Loop Join optimization

Gregory Stark <stark@enterprisedb.com> writes:
> "Manoel Henrique" <mhenriquesgbd@gmail.com> writes:
>> Yes, I'm relying on the assumption that backwards scan has the same cost as
>> forward scan, why shouldn't it?

> Because hard drives only spin one direction

Good joke, but to be serious: we expect that forward scans will result
in the kernel doing read-ahead, which will allow overlapping of
CPU work to process one page with the I/O to bring in the next page.
A backwards scan will get no such overlapping and thus be up to 2X
slower, unless the kernel is smart enough to do read-ahead for
descending-order read requests. Which seems not too probable. A fairly
typical kernel behavior is that read-ahead is triggered by successive
read() requests without any intervening seek(), and this is impossible
for a backward scan.

(Yes, we do optimize out the seek calls in a forward scan. IIRC it's
done in fd.c.)

regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] Research/Implementation of Nested Loop Join optimization

Joshua D. Drake wrote:
> On Fri, 2008-07-25 at 19:31 -0400, Jonah H. Harris wrote:
> > On Fri, Jul 25, 2008 at 6:10 PM, Gregory Stark <stark@enterprisedb.com> wrote:
> > > "Manoel Henrique" <mhenriquesgbd@gmail.com> writes:
> > >
> > >> Yes, I'm relying on the assumption that backwards scan has the same cost as
> > >> forward scan, why shouldn't it?
> > >
> > > Because hard drives only spin one direction
> >
> > :)
>
> What if you are below the equator?

They spin the same direction here too, thanks :-) (Coriolis does not
affect much in this case)

--
Alvaro Herrera http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[BUGS] Error while loading shared libraries

Hello,
I have installed PostgreSQL version 8.3.3, and when I run su -l postgres -c "/usr/local/pgsql/bin/createuser -d -a nobody", I get this error:

sh-2.05b# su -l postgres  -c "/usr/local/pgsql/bin/createuser  -d -a nobody"
/usr/local/pgsql/bin/createuser: error while loading shared libraries: libz.so.1: cannot open shared object file: No such file or directory

However, this library is there..:
sh-2.05b# su -l postgres  -c "echo $LD_LIBRARY_PATH"                       
/lib:/usr/lib:/opt/IBM-ME-2.2.2/jre/bin:/usr/local/pgsql/lib

sh-2.05b# su -l postgres  -c "ls -ls /usr/lib/libz.*"
 0 lrwxrwxrwx  1 root root    24 Jul 25 13:46 /usr/lib/libz.so -> /usr/lib/libz.so.1.2.2.2
 0 lrwxrwxrwx  1 root root    24 Jul 25 13:46 /usr/lib/libz.so.1 -> /usr/lib/libz.so.1.2.2.2
76 -rwxr-xr-x  1 root root 75568 Jul 25 13:45 /usr/lib/libz.so.1.2.2.2
sh-2.05b#

So I was wondering if you guys could help me! I do not know what is wrong :-(
Is createuser not looking at LD_LIBRARY_PATH for its libraries?
Thanks,
--
- Yuly

[BUGS] BUG #4324: Default value for a column is not returned in select when column has not been explicitly set

The following bug has been logged online:

Bug reference: 4324
Logged by: Jeff Galyan
Email address: jeff@richrelevance.com
PostgreSQL version: 8.3.3
Operating system: Linux
Description: Default value for a column is not returned in select
when column has not been explicitly set
Details:

When a column does not have a value explicitly set, v8.3.3 is not returning
the default value for the column, as 8.2.3 used to (per section 11.5 of the
SQL specification). The purpose of setting a default value for a column is
so a value will be returned if the column has not been explicitly set. If a
nullable column has no value but does have a default, the specification
requires that the default value be returned. If the column's value has been
explicitly set, then the value in the column must be returned. Further,
when a default is specified in the column descriptor, INSERTs which omit
setting a value for the column should automatically insert the default value
into the column. Again, the behavior in 8.2 conformed with the SQL
specification, section 11.5. 8.3 is not behaving per the spec.

Example:
Take an existing table with some data in it and add a nullable column of
type boolean with default value true. In 8.2, 'select bool_column from
my_table' would have returned 'true' for all rows where the column had not
been explicitly set (which should be all of them at this point). Subsequent
inserts would have the value automatically set to 'true' if no value was
specified, or whatever value is explicitly specified. In 8.3, this case
will return NULL for all rows where the value has not been explicitly
specified. Per sec. 11.5 of the SQL spec, the behavior of v8.2 is correct.
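
For illustration, a minimal script of the scenario described above (the
names are only placeholders):

CREATE TABLE my_table (id integer);
INSERT INTO my_table VALUES (1), (2);

-- add a nullable boolean column with a default
ALTER TABLE my_table ADD COLUMN bool_column boolean DEFAULT true;

-- per the report, 8.2 returned true here for the pre-existing rows,
-- while 8.3 returns NULL
SELECT id, bool_column FROM my_table;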

--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs

Re: [GENERAL] PostgreSQL vs FreeBSD 7.0 as regular user

On Friday 25 July 2008, Zoltan Boszormenyi wrote:

> is there anyone using PostgreSQL on FreeBSD 7.0 starting from scratch?

Every day.

> I compiled 8.3.3 and wanted to run initdb in my home directory but
> it fails with the error below.

How did you install PostgreSQL?
--
Kirk Strauser
Daycos

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

[pgsql-jobs] Job Opportunity (Contract 2 Hire) San Francisco Bay Area

Location: San Francisco Bay Area
Pay: NEG
Benefits: Yes (Health, Dental, Vision, 401k)
 
Looking for a Developer/DBA who can support a Perl/Postgres based financial billing system.  Advanced Perl scripting, database design, and strong SQL skills are required.  The position requires the ability to work and communicate effectively with other professionals in Information Systems, Finance and Legal disciplines.   Previous financial system experience is a plus.
 
Requirements:
Perl: 10 years
SQL:  5 years
Postgres (or similar database experience): 5 years
Linux/Unix Scripting: 3 years
Blake Haggerty
Permanent Placement Specialist
Work: 415-788-8488 x6062
Fax: 415-788-2592
Email: blake.haggerty@sapphire.com
http://www.linkedin.com/in/blakehaggerty
Sapphire Technologies


[GENERAL] Problem running script

My requirement is:
1) To write a query in a file, read the input from that file, run it in PostgreSQL, and write the output (the query results) to a file. I'm not able to use a combination of \i and \o.
Let's say I have a file test.sql, and in test.sql I write
select * from abc;
Now I need to read the query from the file and write the output to a file. I tried
=>\i test.sql \o test
but that is not working. I want to write a script which reads the query from a file and writes the output to a file, and I want to do everything in one script and run it in the background at a particular time of the day.
2) How do I run scripts in PostgreSQL? How do I set the shell and execute the script?
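
One way this is commonly done (the file names here are only placeholders):
issue \o before \i inside psql, or drive the whole thing from the shell so
it can be scheduled with cron:

\o test.out
\i test.sql
\o

or, from a shell script / cron job:

psql -d mydb -f test.sql -o test.out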
 
Regards,
Sushma

Re: [SQL] postgres time zone settings(time difference in Server and client)


Hi all,

   I have a database running on a server. I am using Python to run my application. The postgres client is running on the clients. All clients are connected to a single database running on the server. Each time the application starts, the login time is taken from the server using the query
"""select CAST(timeofday() AS timestamp)"""

But the problem is that the time I get is different from that of the server machine. I think the postgres client is adding some value to the server time. How can I solve this? Is there any configuration setting for the client to solve this? In which location is the client's configuration file stored (I am using Linux (Debian))?

here the result I got:

The correct time on the server:   2008-07-25 14:16:54

But what I got from the client (with a database cursor on the server database):
   2008-07-25 19:46:33.46  (a difference of more than 5 hours). Where does this difference come from?

If anyone knows the answer, please help me.

Thanks in advance

Anoop



Re: [HACKERS] Uncopied parameters on CREATE TABLE LIKE

Simon Riggs <simon@2ndquadrant.com> wrote:

> On Thu, 2008-07-24 at 11:41 -0400, Tom Lane wrote:
>
> > Now, if you're suggesting we need a plugin hook somewhere in or around
> > default_reloptions, that's possibly reasonable; but a GUC like you're
> > suggesting seems quite pointless.
>
> OK, I'll have a look, or perhaps Itagaki?

Yes, I agree too. But my proposal is independent of such hooks :-)
I just suggested copying reloptions as-is on CREATE TABLE LIKE.

I guess the first applications using the extended reloptions will be
user-defined access methods for GiST and GIN. If those access methods
can receive reloptions, they can support their own parameters.

Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] Additional psql requirements

On Fri, 2008-07-25 at 09:40 +0100, Dave Page wrote:
> On Fri, Jul 25, 2008 at 9:36 AM, Pavel Stehule <pavel.stehule@gmail.com> wrote:
> > 2008/7/25 Dave Page <dpage@pgadmin.org>:
> >> On Fri, Jul 25, 2008 at 8:52 AM, daveg <daveg@sonic.net> wrote:
> >>
> >>> It seems to me that a sql-like client side scripting language should be as
> >>> similar as possible to plpgsql. Pgscript looks a bit like plpgsql, but is
> >>> pretty much incompatible with it for no particularly obvious reason.
> >>
> >> pgScript originally used a c-like syntax when it was pgUnitTest iirc.
> >> The new version is designed to be familiar to users of T-SQL.
> >
> > it is little bit unhappy - it's like T-SQL, but it isn't T-SQL - and
> > it's far to plpgsql
>
> I see no point in replicating pl/pgsql. Better to implement anonymous
> blocks in the server for that.

Agreed. My suggestion was for something much simpler than either.
Complex logic can be done in functions.

I just wanted an easy way to write install scripts that work on various
releases/schemas/environments, works on core and on any platform.

--
Simon Riggs www.2ndQuadrant.com
PostgreSQL Training, Services and Support


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[GENERAL] I often have to do "update if exist, else insert", is my database design wrong?

Hi. This is just some thoughts about database design.
I often find my self having to do this

update table_XY set x=..., y=... where x=... AND y=...;
if not found then
    insert into table_XY (x,y) values (...,...);
end if;

Is this normal, or is there something else I could do so I don't have
to check if it exists?

Or is there some more general problem with the table design?
table_XY is in this case (and most cases) a table like this

create table table_XY (
x int references table_X,
y int references table_Y
);

I could of course add a constraint unique(x,y) to avoid duplicates,
but that would not change the need to check if it exists before
inserting.
I could also do
delete from table_XY where x=... and y=...
insert into table_XY (x,y) values (...,...);
But that would seem to be very slow.
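
(For reference, a minimal sketch of one common way to get insert-if-not-present
behaviour without a separate existence check, assuming the unique(x,y)
constraint mentioned above exists and plpgsql is installed; all names are
placeholders:)

CREATE FUNCTION insert_xy_if_missing(px int, py int) RETURNS void AS $$
BEGIN
    -- rely on the unique(x,y) constraint: just attempt the insert
    -- and swallow the duplicate-key error if the row already exists
    BEGIN
        INSERT INTO table_XY (x, y) VALUES (px, py);
    EXCEPTION WHEN unique_violation THEN
        NULL;  -- row is already there, nothing to do
    END;
END;
$$ LANGUAGE plpgsql;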

One idea is to put in dummy records for each x,y combination, as a
default value. But if table_XY has very few records, it seems like a
complete waste of resources.
If I really wanted table_XY to contain a complete set of records of
all possible pairs of x,y values, how would I do that? The only way I
can think of is that when inserting into table_X, I'd do

insert into table_XY (x, y) SELECT xvalue, table_Y.id FROM table_Y;

where table_Y contains a primary key called id.

How would you (you who knows this stuff far better than me) do this? :-)

For the developers: a combined insert/update command would be nice :-)

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

[HACKERS] Whence cometh the data in src/test/regress/data/streets.data ?

OK, I may be in a nitpicking mood today. :-)

IANAL, but it's my responsibility to check that Sun won't be violating
any copyright or licensing terms when delivering PostgreSQL with
(Open)Solaris.

I am now working on adding the regression tests ("gmake check") to the
8.3 packages integrated into OpenSolaris. While going through the list
of files I come across

src/test/regress/data/streets.data

This file includes ~5000 test data entries which appear to be
geographical locations for end points of streets etc. in the San
Francisco Bay Area.

I don't think whoever made this typed it all in, nor does it look
like random data; it almost certainly comes from a real data
source. Which means someone probably owns the copyright.

This file was checked in way back in July 1996, by Marc G. Fournier
but that doesn't mean he was the one who got the data from
somewhere. Does anyone know where it comes from? Or has this
information been lost in the mist of time?

If it's a US Government source, then it's in the public domain and
we[1] can freely use it. Otherwise, at least in theory, we may have a
problem.

[1] "we" can here be read as either "PostgreSQL" or "Sun".

--
Bjorn Munch Sun Microsystems
Trondheim, Norway http://sun.com/postgresql/

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] Additional psql requirements

On Fri, Jul 25, 2008 at 9:36 AM, Pavel Stehule <pavel.stehule@gmail.com> wrote:
> 2008/7/25 Dave Page <dpage@pgadmin.org>:
>> On Fri, Jul 25, 2008 at 8:52 AM, daveg <daveg@sonic.net> wrote:
>>
>>> It seems to me that a sql-like client side scripting language should be as
>>> similar as possible to plpgsql. Pgscript looks a bit like plpgsql, but is
>>> pretty much incompatible with it for no particularly obvious reason.
>>
>> pgScript originally used a c-like syntax when it was pgUnitTest iirc.
>> The new version is designed to be familiar to users of T-SQL.
>
> it is little bit unhappy - it's like T-SQL, but it isn't T-SQL - and
> it's far to plpgsql

I see no point in replicating pl/pgsql. Better to implement anonymous
blocks in the server for that.

--
Dave Page
EnterpriseDB UK: http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [SQL] Arrays, types and procedures

hello


2008/7/25 Chris Hoy <chris.hoy@hoyc.fsnet.co.uk>:
>
>
> Hi
>
> I am having trouble passing arrays of types to a store procedure
> I have the following type
>
> CREATE TYPE IndexElement AS (
> keyname text,
> keytype integer
> );
>
> and the header for the store prodedure is as follows:
>
> CREATE OR REPLACE FUNCTION doIndexGroupCount(
> indexs IndexElement[],
> customerid INTEGER ,
> logic INTEGER )
>
> but when I try and call it using the following code
>
> SELECT * from doIndexGroupCount(ARRAY[('dog', 1),('cat', 1)],10,0);
>
> I get and error
>
> ERROR: could not find array type for data type record
>
> What am I missing?
>

casting

postgres=# select ARRAY[('dog', 1),('cat', 1)]::indexelement[];
array
-----------------------
{"(dog,1)","(cat,1)"}
(1 row)
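
So the original call would become something like this (using the function
signature from the quoted message):

SELECT * FROM doIndexGroupCount(ARRAY[('dog', 1),('cat', 1)]::IndexElement[], 10, 0);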

regards
Pavel Stehule

> Chris
>
>
>
>
>
> --
> Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-sql
>

--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql

Re: [HACKERS] Additional psql requirements

2008/7/25 Dave Page <dpage@pgadmin.org>:
> On Fri, Jul 25, 2008 at 8:52 AM, daveg <daveg@sonic.net> wrote:
>
>> It seems to me that a sql-like client side scripting language should be as
>> similar as possible to plpgsql. Pgscript looks a bit like plpgsql, but is
>> pretty much incompatible with it for no particularly obvious reason.
>
> pgScript originally used a c-like syntax when it was pgUnitTest iirc.
> The new version is designed to be familiar to users of T-SQL.

it is a little bit unhappy - it's like T-SQL, but it isn't T-SQL - and
it's far from plpgsql

regards
Pavel Stehule

>
> --
> Dave Page
> EnterpriseDB UK: http://www.enterprisedb.com
>
> --
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers
>

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[SQL] Arrays, types and procedures

Hi

I am having trouble passing arrays of types to a stored procedure.
I have the following type:

CREATE TYPE IndexElement AS (
    keyname text,
    keytype integer
);

and the header for the stored procedure is as follows:

CREATE OR REPLACE FUNCTION doIndexGroupCount(
    indexs IndexElement[],
    customerid INTEGER,
    logic INTEGER )

but when I try to call it using the following code

SELECT * from doIndexGroupCount(ARRAY[('dog', 1),('cat', 1)],10,0);

I get an error:

ERROR: could not find array type for data type record

What am I missing?

Chris

--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql

Re: [HACKERS] Additional psql requirements

On Fri, Jul 25, 2008 at 8:52 AM, daveg <daveg@sonic.net> wrote:

> It seems to me that a sql-like client side scripting language should be as
> similar as possible to plpgsql. Pgscript looks a bit like plpgsql, but is
> pretty much incompatible with it for no particularly obvious reason.

pgScript originally used a c-like syntax when it was pgUnitTest iirc.
The new version is designed to be familiar to users of T-SQL.

--
Dave Page
EnterpriseDB UK: http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [BUGS] BUG #4319: lower()/upper() does not know about UNICODE case mapping

Valentine Gogichashvili wrote:

> Hi,

Hi

> I understand, that it is more a feature, but it does not help me anyways...
>
> On the UNICODE databases lower and upper functions are using system locale
> settings (that cannot be changed after initializing DB?) and does not know
> anything about UNICODE case mapping.
>
> The problem really becomes 'a problem' on multilingual systems. I have to
> store data for German, Russian and Romanian languages together.

I understand you, and I hope things will get better. See the collation-per-database
project: http://wiki.postgresql.org/wiki/Gsoc08-collation

Zdenek


--
Zdenek Kotala Sun Microsystems
Prague, Czech Republic http://sun.com/postgresql


--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs

Re: [GENERAL] High activity short table and locks

Guillaume Bog wrote:
> Hello,
>
> My first impression is that vacuuming the offending table very often helps a
> lot. I'm doing it by hand for now but I will have a cronjob for this. By the
> way, it seems I don't need those indexes anymore. Thanks a lot for your
> helpful advice.

Excellent! Oh, when you set up your cron-job, it's probably better to
vacuum this sort of table too often rather than not enough.

--
Richard Huxton
Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [GENERAL] High activity short table and locks

Hello,

My first impression is that vacuuming the offending table very often helps a lot. I'm doing it by hand for now but I will have a cronjob for this. By the way, it seems I don't need those indexes anymore. Thanks a lot for your helpful advice.

On Thu, Jul 24, 2008 at 6:19 PM, Richard Huxton <dev@archonet.com> wrote:
> Guillaume Bog wrote:
>> On Wed, Jul 23, 2008 at 11:17 PM, Richard Huxton <dev@archonet.com> wrote:
>>
>> I tried a vacuum full and had to stop it as it was blocking the server for
>> too long. Below is the partial results I got. It seems you are right:
>> enormous amount of dead space and rows. I did the same vacuum later and it
>> seems to have improved a lot the performance. I need to check again
>> tomorrow.
>>
>> We don't have autovacuum, but as it seems autovacuum cannot target a
>> specific table, I may prefer do it by cron every minute, as you suggest.
>
> There's a pg_autovacuum system table that lets you tune things
> table-by-table. See the manual for details. In your case, a manual vacuum
> once a minute will be a huge step forward anyway.
>
>> vf_cn2fr=# VACUUM FULL VERBOSE lockers ;
>> INFO:  vacuuming "public.lockers"
>> INFO:  "lockers": found 4228421 removable, 107 nonremovable row versions in
>> 64803 pages
>
> Well, that table is at least 1000 times larger than it needs to be.
>
> If you've run a VACUUM FULL, you'll want to run REINDEX on all the indexes
> on that table too.
>
>> 64803 pages containing 512643700 free bytes are potential move destinations.
>
> Ouch! that's a 500MB table holding 100 live rows.
>
>>> You could fiddle around setting up ramdisks and pointing tablespaces there,
>>> but I'm not sure it's worth it.
>>
>> If it is possible to have no WAL at all on this table, I'd prefer to try it.
>> It seems completely useless and is probably taking a fair amount of i/o.
>>
>> It's a bit early to be sure if the solution is there, but I feel you already
>> did throw some good light on my dark path, I have to thank you for that.
>
> Afraid not. The synchronous_commit setting can reduce the disk I/O though.
>
> --
>  Richard Huxton
>  Archonet Ltd

Re: [HACKERS] pltcl_*mod commands are broken on Solaris 10

Tom Lane wrote:
> Zdenek Kotala <Zdenek.Kotala@Sun.COM> writes:
>> I understand. However I have another dumb idea/question - It seems to me that it
>> is client code. I think that it should be integrated into psql
>> command.
>
> That doesn't seem like a particularly appropriate thing to do ... nor
> do I see the argument for calling it client-side code.
>

I think the best thing at this moment is to add an item to the TODO list about
the cleanup.

Zdenek


--
Zdenek Kotala Sun Microsystems
Prague, Czech Republic http://sun.com/postgresql


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [GENERAL] php + postgresql

> You need something like this:
>
> $query = "select id, name from tablename";
> $result = pg_query($query);
> while ($row = pg_fetch_array($result)) {
> $content = $row[0];
> }

That's actually what I was using.
The scoping wasn't the issue either.

Today I switched back to pg_connect() from pg_pconnect(), made some
changes to my overall architecture and re-wrote my database stuff. Then
re-booted.

Not sure what fixed it but all working now. I'm only working on a draft
"skeleton" right now so am free to fiddle.

Keep finding cool features in PostgreSQL, I think I'm sold!

Thanks
Mick

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [HACKERS] Additional psql requirements

2008/7/25 Simon Riggs <simon@2ndquadrant.com>:
>
> On Fri, 2008-07-25 at 10:00 +0900, ITAGAKI Takahiro wrote:
>> Simon Riggs <simon@2ndquadrant.com> wrote:
>>
>> > * access to version number
>> > * simple mechanism for conditional execution
>> > * ability to set substitution variables from command execution
>> > * conditional execution whether superuser or not
>>
>> Can we use pgScript for such flow controls?
>> http://pgscript.projects.postgresql.org/INDEX.html
>>
>> I'm not sure pgScript can be used in pgAdmin already, but if we support
>> it both psql and pgAdmin, the scripting syntax will be a defact standard
>> because they are the most major user interfaces to postgres. I think it is
>> not good to add another "dialect" that can be used only in psql.
>
> I just want good way, not two imperfect ones.
>
> And I'm not going to suggest having pgscript in core.
+ 1

pgScript is too heavy

for most purposes something like this is enough:

\for select * from information_schema.tables
grant read on $1 to public;
\endfor

regards
Pavel Stehule

>
> --
> Simon Riggs www.2ndQuadrant.com
> PostgreSQL Training, Services and Support
>
>
> --
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers
>

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] Additional psql requirements

On Fri, Jul 25, 2008 at 08:16:59AM +0100, Simon Riggs wrote:
>
> On Fri, 2008-07-25 at 10:00 +0900, ITAGAKI Takahiro wrote:
> > Simon Riggs <simon@2ndquadrant.com> wrote:
> >
> > > * access to version number
> > > * simple mechanism for conditional execution
> > > * ability to set substitution variables from command execution
> > > * conditional execution whether superuser or not
> >
> > Can we use pgScript for such flow controls?
> > http://pgscript.projects.postgresql.org/INDEX.html
> >
> > I'm not sure pgScript can be used in pgAdmin already, but if we support
> > it in both psql and pgAdmin, the scripting syntax will become a de facto
> > standard, because they are the major user interfaces to postgres. I think
> > it is not good to add another "dialect" that can be used only in psql.
>
> I just want one good way, not two imperfect ones.
>
> And I'm not going to suggest having pgscript in core.

It seems to me that an SQL-like client-side scripting language should be as
similar as possible to PL/pgSQL. PgScript looks a bit like PL/pgSQL, but is
pretty much incompatible with it for no particularly obvious reason.
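
As a reference point, here is a hedged sketch of how the same kind of loop
reads in server-side PL/pgSQL today (the function name is illustrative, and it
assumes the plpgsql language is installed and sufficient privileges) - roughly
the style a client-side dialect could mirror:

CREATE OR REPLACE FUNCTION grant_select_to_public() RETURNS void AS $$
DECLARE
    t record;
BEGIN
    -- loop over every table in the public schema and grant SELECT on it
    FOR t IN SELECT table_schema, table_name
             FROM information_schema.tables
             WHERE table_schema = 'public'
    LOOP
        EXECUTE 'GRANT SELECT ON ' || quote_ident(t.table_schema) || '.'
                || quote_ident(t.table_name) || ' TO PUBLIC';
    END LOOP;
END;
$$ LANGUAGE plpgsql;

SELECT grant_select_to_public();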

-dg

--
David Gould daveg@sonic.net 510 536 1443 510 282 0869
If simplicity worked, the world would be overrun with insects.

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] [PATCHES] GIN improvements

> (a) that's not back-patchable and (b) it'll create a merge conflict with
> your patch, if you're still going to add a new AM function column.
> I think that aminsertcleanup per se isn't needed, but if we want an
> "amanalyze" there'd still be a conflict. Where are we on that?

I'll revert the aminsertcleanup framework but leave the gininsertcleanup function
as is, because I won't have enough time until the end of summer - I'd like to
finalize the patch and fixes first.

--
Teodor Sigaev E-mail: teodor@sigaev.ru
WWW: http://www.sigaev.ru/

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] Review: DTrace probes (merged version) ver_03

Theo Schlossnagle wrote:
>
> On Jul 24, 2008, at 11:11 AM, Zdenek Kotala wrote:
>
>> I performed a review and prepared my own patch which contains only the probes
>> without any issues. I suggest committing this patch, because the rest of the
>> patch is independent and can be committed in the next commit fest after
>> rework.
>>
>> I found the following issues:
>>
>> 1) SLRU probes.
>>
>> I think it is good to have probes there, but they need polish. See my
>> comments:
>> http://reviewdemo.postgresql.org/r/25/
>
> The slru's are quite useful and general enough to use easily. I used
> them to verify the metered checkpointing stuff:
>
> http://lethargy.org/~jesus/archives/112-Probing-for-Success.html

I agree that SLRU probes are useful, but I'm worried about the implementation. I
think these probes need more work before commit. Currently there are several bugs
in placement and arguments (from my point of view).

>> 3) Executor probes
>>
>> I would like to see a use case for them.
>
> I added them with two thoughts (and knowing that they cost nothing).
> (1) You can trace them to assist in debugging an explain plan and to
> better understand the flow of the execution engine. This is not a
> compelling reason, but a reason nonetheless.
> (2) You can trace an existing long-running query for which you do not
> have the original plan (it may have changed) and make an educated guess at
> the plan chosen at the time of execution.

I'm not an executor expert, and (1) is useful for me :-). What I'm thinking about
is whether we can mine more information from the executor, like the number of
tuples processed per node, and so on. I think that needs discussion.
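
For context, the closest existing way to see per-node tuple counts is
EXPLAIN ANALYZE (a sketch; the table name is illustrative), though unlike a
probe it requires re-running the query rather than attaching to one that is
already in flight:

EXPLAIN ANALYZE
SELECT *
FROM orders              -- hypothetical table
WHERE customer_id = 42;
-- each plan node then reports "actual time=... rows=... loops=...",
-- which is roughly the per-node tuple information discussed above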

>> 8) mark dirty and BM_HINT... flag
>>
>> I removed these because I don't see any use case for them. It would be
>> nice to provide some dtrace script or describe the basic ideas.
>
>
> Perhaps I misunderstood what mark dirty does, but here was my thinking:
>
> Because of the background writer, it is difficult to understand which
> postgres process (and thus query) induced disk writes. Marking a page
> as dirty is a good indication that a query will be causing I/O, and you
> can measure calls to mark dirty per query as a telling metric.
>
> Perhaps I misunderstood, but I have a very serious problem in that I can't
> reliably track write I/O back to a postgresql process ID, as the bgwriter and
> the kernel are flushing those dirty blocks to disk while the process
> isn't running. In my (albeit naive) tests, mark dirty gave me quite
> expected results for correlating query execution with the disk I/O it induces.
>

If I understand correctly, you need to analyze the number of writes per
query/session. It seems to me that using mark dirty is a good way to do that, but
it probably needs more probes. (Robert L., any ideas?)
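
For comparison, the coarse write attribution that already exists server side is
8.3's pg_stat_bgwriter view (a hedged sketch): it splits buffer writes between
checkpoints, the background writer and ordinary backends, but it cannot tie a
write back to a particular query, which is exactly what the mark-dirty probe
would add:

SELECT buffers_checkpoint,   -- buffers written during checkpoints
       buffers_clean,        -- buffers written by the background writer
       buffers_backend       -- buffers written directly by backends
FROM pg_stat_bgwriter;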

However, what I suggested is to commit the probes without issues now; the rest
will be processed in the next commit fest after rework/discussion.

Zdenek

--
Zdenek Kotala Sun Microsystems
Prague, Czech Republic http://sun.com/postgresql


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] Uncopied parameters on CREATE TABLE LIKE

On Thu, 2008-07-24 at 11:41 -0400, Tom Lane wrote:

> Now, if you're suggesting we need a plugin hook somewhere in or around
> default_reloptions, that's possibly reasonable; but a GUC like you're
> suggesting seems quite pointless.

OK, I'll have a look, or perhaps Itagaki?

--
Simon Riggs www.2ndQuadrant.com
PostgreSQL Training, Services and Support


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [GENERAL] invalid byte sequence for encoding "UNICODE"

On Jul 24, 8:06 pm, m...@alanny.ru (AlannY) wrote:
> Hi there.
>
> Many times I'm confronted with this strange problem: invalid byte
> sequence for encoding "UNICODE". So I guess PostgreSQL won't allow me
> to use some symbols which are not part of UNICODE. But what are those
> symbols?
>
> I'm attaching a screenshot with THAT dead symbol. As you can see, it's
> an unknown symbol at the end of the Cyrillic text. First of all, I checked
> my data with iconv (iconv -f UTF-8 -t UTF-8 data.txt) and there are no
> errors, so I guess there are no dead symbols.
>
> So the question is: is it possible to find a *table* of forbidden
> characters for encoding "UNICODE"? If I can get it, I can kill those
> dead characters in my program ;-)
>
> Thank you.
>
> --
> Sent via pgsql-general mailing list (pgsql-gene...@postgresql.org)
> To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-general

To tell the truth, there are no characters forbidden in UNICODE, as there are no
characters you can have that are not in UNICODE. The other thing is UTF8, which
encodes real UNICODE into an 8-bit byte sequence; that is where the errors occur.

What does the command:

show lc_ctype;

show?
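
While you are at it, a few related checks may help (a hedged sketch; adjust to
your setup):

SHOW server_encoding;    -- encoding the database stores data in
SHOW client_encoding;    -- encoding the server expects from this client
-- if the data really is valid UTF-8, telling the server so often resolves
-- "invalid byte sequence" errors:
SET client_encoding TO 'UTF8';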

As Tom has said, more information about your system would be really
handy...

With best regards,

-- Valentine Gogichashvili

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

[pgsql-es-ayuda] Alternativas a infraestructuras de escalabilidad

Greetings to everyone,

Lately I have been experimenting with the various programs MySQL offers to help large sites scale, specifically replication and load balancing.

I would like to know what alternatives we have for PostgreSQL (and that those alternatives be stable and running in production on some site).

Specifically, I would like to know the equivalents for:

  • Replication (MySQL ships it natively)
  • Load balancing (MySQL has MySQL Proxy and the MySQL Load Balancer)


Thanks

--
Xavier Vidal Piera
Enginyer Tècnic Informàtic de Gestió
Tècnic Especialista Informàtic d'equips
xavividal@gmail.com
xvidal@lavanguardia.es
http://web.xaviervidal.net
610.68.41.78

Re: [HACKERS] pg_dump vs data-only dumps vs --disable-triggers

On Thu, 2008-07-24 at 19:11 -0400, Tom Lane wrote:
> There's some fairly squirrely logic in pg_dump/pg_restore that tries to
> detect whether it's doing a data-only operation, ie, no schema
> information is to be dumped or restored. The reason it wants to
> know this is to decide whether to enable the --disable-triggers
> code. However, since --disable-triggers is off by default and has
> to be manually requested, I'm not sure why we've got all this extra
> complexity in there. (Actually, I'm sure the reason is that that
> code predates the existence of the --disable-triggers switch, but
> anyway...)
>
> Simon's patch to split up --schema-only into two switches has broken
> this logic, but I'm inclined to just rip it out rather than trying
> to fix it. If the user says --disable-triggers, he should get
> trigger disable commands around the data part of the dump, no matter
> what he said or didn't say about schema dumping.

Agreed. Thanks for the clear thinking.
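
For readers following along, a rough sketch of what --disable-triggers wraps
around each table's data section in the dump output (the table name is
illustrative; the ALTER TABLE form requires superuser privileges on restore):

-- before the table's data:
ALTER TABLE public.orders DISABLE TRIGGER ALL;
-- (COPY public.orders ... FROM stdin; followed by the data rows)
-- after the table's data:
ALTER TABLE public.orders ENABLE TRIGGER ALL;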

--
Simon Riggs www.2ndQuadrant.com
PostgreSQL Training, Services and Support


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] Additional psql requirements

On Fri, 2008-07-25 at 10:00 +0900, ITAGAKI Takahiro wrote:
> Simon Riggs <simon@2ndquadrant.com> wrote:
>
> > * access to version number
> > * simple mechanism for conditional execution
> > * ability to set substitution variables from command execution
> > * conditional execution whether superuser or not
>
> Can we use pgScript for such flow controls?
> http://pgscript.projects.postgresql.org/INDEX.html
>
> I'm not sure pgScript can be used in pgAdmin already, but if we support
> it in both psql and pgAdmin, the scripting syntax will become a de facto
> standard, because they are the major user interfaces to postgres. I think
> it is not good to add another "dialect" that can be used only in psql.

I just want one good way, not two imperfect ones.

And I'm not going to suggest having pgscript in core.

--
Simon Riggs www.2ndQuadrant.com
PostgreSQL Training, Services and Support


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers