---Michael Brewer
mbrewer@gmail.com
--
Sent via pgus-board mailing list (pgus-board@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgus-board
Thanks for the feedback! Test-results like that are always welcome
on-list, too. :)
Regards
Erwin
--
Sent via pgadmin-support mailing list (pgadmin-support@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgadmin-support
> So everything you mentioned is either recently added/documented or being
> actively worked on somewhere, and the first two were things I worked on
> myself after noticing they were missing. Believe me, I feel the items
> that still aren't there, but they're moving along at their own pace.
> There's already more tuning knowledge available than tools to help apply
> that knowledge to other people's systems, which is why I think a diversion
> to focus just on that part is so necessary.
But as an administrator/developer, I don't understand the focus on a new
API for "writing my config" for me... I'd love a tool that helped me
"analyze" my current running PG database (and yes, that includes getting
*current* settings) and "suggest" config changes, ideally in the order
the tool thinks will make a difference... I can make that change and
distribute it. That's the easy part. If I really trust the tool then
I'll just blindly run it (depending on its output format):
$TOOL > $PGDATA/postgresql.conf
in which case, I don't care whether it grokked any of my previous comments
and cruft. Otherwise, I'll look at it and integrate some (or all) of the
changes into postgresql.conf using my preferred method of
commenting/SCM/quirks.
a.
--
Aidan Van Dyk                                        Create like a god,
aidan@highrise.ca                                    command like a king,
http://www.highrise.ca/                              work like a slave.
Simon Riggs <simon@2ndquadrant.com> writes:
> I've got a test case which shows something related and weird, though not
> the exact case.
> The queries shown here have significantly different costs, depending
> upon whether we use tables a or b in the query. Since a and b are
> equivalent this result isn't expected at all.
Hmm. I had been guessing that there was something about your original
query that prevented the system from applying best_appendrel_indexscan,
but after fooling with this a bit, I don't believe that's the issue
at all. The problem is that these two cases "should" be equivalent:
select ... from a join b on (a.id = b.id) left join c on (a.id = c.id);
select ... from a join b on (a.id = b.id) left join c on (b.id = c.id);
but they are not seen that way by the current planner. It correctly
forms an EquivalenceClass consisting of a.id and b.id, but it cannot put
c.id into that same class, and so the clause a.id = c.id is just left
alone; there is no place that can generate "b.id = c.id" as an
alternative join condition. This means that (for the first query)
we can consider the join orders "(a join b) leftjoin c" and
"(a leftjoin c) join b", but there is no way to consider the join
order "(b leftjoin c) join a"; to implement that we'd need to have the
alternative join clause available. So if that join order is
significantly better than the other two, we lose.
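Spelled out, the join order the planner cannot consider for the first query would correspond to something like this (a sketch only; the select list is elided as in the examples above):

```sql
-- Join order "(b leftjoin c) join a": only legal if the planner can
-- derive b.id = c.id from a.id = c.id via the EquivalenceClass,
-- which it currently cannot do.
select ... from (b left join c on (b.id = c.id)) join a on (a.id = b.id);
```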
This is going to take a bit of work to fix :-(. I am toying with the
idea that we could go ahead and put c.id into the EquivalenceClass
as a sort of second-class citizen that's labeled as associated with this
particular outer join --- the implication being that we can execute the
outer join using a generated clause that equates c.id to any one of the
first-class members of the EquivalenceClass, but above the outer join
we can't assume that c.id hasn't gone to null, so it's not really equal
to anything else in the class. I think it might also be possible
to get rid of the reconsider_outer_join_clauses() kluge in favor of
driving transitive-equality-to-a-constant off of this representation.
However there's a larger issue here, which is the very identity of an
outer join :-(. Currently, for the first query above, the left join
is defined as being between a and c, with a being the minimum
left-hand-side needed to form the join. To be able to handle a case
like this, it seems that the notion of a "minimum left hand side"
falls apart altogether. We can execute the OJ using either a or b
as left hand side. So the current representation of OuterJoinInfo,
and the code that uses it to enforce valid join orders, needs a serious
rethink.
Looks like 8.4 development material to me, rather than something we
can hope to back-patch a fix for...
regards, tom lane
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Yeah, I've got maybe 200 left, which isn't going to be enough for the
next set of folders.
--Josh
--
Sent via pgsql-advocacy mailing list (pgsql-advocacy@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-advocacy
PgDesigner is paid software, just like CaseStudio. Why not use CaseStudio instead? I think it's cheaper,
and you don't have to work magic to connect it to Postgres, because support for it is already built in.
We should reject software that doesn't bring native Postgres support.
*-------------------------------------------------------*
*-Edwin Quijada
*-Developer DataBase
*-JQ Microsistemas
*-809-849-8087
* " If you want to achieve exceptional things, you must do uncommon things"
*-------------------------------------------------------*
________________________________
Date: Wed, 4 Jun 2008 19:10:47 -0600
From: viniciojr@gmail.com
To: pgsql-es-ayuda@postgresql.org
Subject: [pgsql-es-ayuda] DBDesigner 4
_________________________________________________________________
Has anyone here been able to connect or use DBDesigner 4 with Postgres? I can't find any information about it, and the program doesn't support Postgres natively. I don't know if there's a plugin that would help with this function?
--
-----------------------
MVJR
TIP 2: you can unsubscribe from all the lists at the same time
(send "unregister YourEmailAddress" to majordomo@postgresql.org)
> * Are backends always writing out dirty buffers because there are no free
> ones? This might mean tweaking settings affecting bgwriter.
What you mean on the first one is "are backends always writing out dirty
buffers because there are no *clean* ones"; operating with no *free*
buffers is standard for the server. Figuring that out is now easy in
8.3 with the pg_stat_bgwriter view.
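A minimal query against that view, assuming 8.3, might look like this:

```sql
-- Compare who is writing out dirty buffers (PostgreSQL 8.3+).
-- A high buffers_backend relative to buffers_clean + buffers_checkpoint
-- suggests backends are doing the cleaning work themselves.
SELECT buffers_checkpoint,
       buffers_clean,
       buffers_backend,
       maxwritten_clean
FROM pg_stat_bgwriter;
```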
> * Are the evicted buffers ones with really high usage counts? This
> might mean an increase in shared_buffers would help?
Evicted buffers must have a 0 usage count. The correct question to ask is
"are buffers never getting high usage counts because they keep getting
evicted too fast?". You can look at that in 8.3 using pg_buffercache,
I've got suggested queries as part of my buffer cache presentation at
http://www.westnet.com/~gsmith/content/postgresql/
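The sort of query involved boils down to a usage-count histogram along these lines (a sketch, assuming the pg_buffercache contrib module is installed; not necessarily the exact query from that presentation):

```sql
-- Distribution of buffer usage counts. If nearly everything sits at
-- low usage counts, buffers may be getting evicted before they can
-- accumulate higher ones.
SELECT usagecount, count(*) AS buffers
FROM pg_buffercache
GROUP BY usagecount
ORDER BY usagecount;
```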
> * Are we always spilling small amounts of data to disk for sorting? A
> small work_mem increase might help...
I was just talking to someone today about building a monitoring tool for
this. Not having a clear way to recommend people monitor use of work_mem
and its sibling, spilled-to-disk sorts, is an issue right now; I'll whack
that one myself if someone doesn't beat me to it before I get time.
> * Are all our reads from disk really quick? This probably means OS
> pagecache has our whole DB, and means random_page_cost could be
> tweaked?
This is hard to do with low overhead in an OS-independent way. The best
solution available now would use DTrace to try and nail it down. There's
movement in this area (SystemTap for Linux, recent discussion at the PGCon
Developer Meeting of possibly needing more platform-specific code) but
it's not quite there yet.
So everything you mentioned is either recently added/documented or being
actively worked on somewhere, and the first two were things I worked on
myself after noticing they were missing. Believe me, I feel the items
that still aren't there, but they're moving along at their own pace.
There's already more tuning knowledge available than tools to help apply
that knowledge to other people's systems, which is why I think a diversion
to focus just on that part is so necessary.
--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD
For example:
select to_tsvector('spanish', 'nada es igual') @@ to_tsquery('spanish', 'nada');
select 'nada es igual'::tsvector @@ 'nada'::tsquery as resultado;
The first one raises a notice saying "the text search query contains
only stopwords or doesn't contain lexemes; ignored" and returns
false,
but the second one raises no error or notice and returns true.
Which one should I believe?
Regards.
--
TIP 6: Have you searched our mailing list archives?
*-------------------------------------------------------*
*-Edwin Quijada
*-Developer DataBase
*-JQ Microsistemas
*-809-849-8087
* " If you want to achieve exceptional things, you must do uncommon things"
*-------------------------------------------------------*
> From: gergonvel@hotmail.com
> To: pgsql-es-ayuda@postgresql.org
> Subject: [pgsql-es-ayuda] llamar a funcion
> Date: Wed, 4 Jun 2008 18:21:44 -0400
>
>
> Hi, I'm a complete newbie and wanted to ask for your help. How can I call a function
> from inside another function?
>
> CREATE OR REPLACE FUNCTION p_funcion2 (par1 in varchar, par2 in char, par3 out money)
> RETURNS record AS
> BEGIN
>   par3 := 45000;
> end;
>
> CREATE OR REPLACE FUNCTION p_funcion1 (par1 in varchar, par2 in char, par4 out money)
> RETURNS record AS
> declare
> BEGIN
>   select par3 into par4 from p_funcion2(par1, par2);
> end;
>
> Is that right, or is the function call done with "execute", or some other way?
> Thanks in advance, Gerardo.
> TIP 5: Have you read our extensive FAQ?
> http://www.postgresql.org/docs/faqs.FAQ.html
TIP 7: don't forget to increase the "free space map" configuration
> (that's in the OpenOffice Calc format if you don't know the extension)
On Jun 4, 2008, at 6:20 PM, Steve Atkins wrote:
> I'd be interested in putting together a framework+GUI client to do
> this
> cleanly in a cross-platform (Windows, Linux, Solaris, OS X as a bare
> minimum) sort of way, if no-one else already has such a thing.
/me makes go together motions, if nobody has any objection
Cheers,
Steve
On Jun 4, 11:11 am, dp...@pgadmin.org ("Dave Page") wrote:
> Due to an ugly bug found by Erwin, I've updated the new release to
> v1.8.4, and uploaded new builds in src, win32, osx and slackware
> formats to http://developer.pgadmin.org/ftp/release/v1.8.4.
>
> Please give them a once-over, and hopefully I can announce them tomorrow!
Testing pgAdmin v1.8.4 (Jun 4 2008, rev 7358) on WinXP Pro; hosts: PG
8.2.7 and 8.3.1 on Debian Etch.
I have tested on Win XP for like an hour. The crash seems fixed,
nothing more to stop the show AFAIKS. :)
Just a minor bug:
I say:
CREATE TYPE foo AS
(x timestamp(0) without time zone);
pgAdmin says:
CREATE TYPE foo AS
(x timestamp without time zone(0));
The latter should read like the former to work.
CREATE TABLE does not suffer from this problem.
I take it, these timestamp types are a pest to reverse engineer ...
Regards
Erwin
--
Sent via pgadmin-hackers mailing list (pgadmin-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgadmin-hackers
> Tom Lane wrote:
>> * Can we build a "configuration wizard" to tell newbies what settings
>> they need to tweak?
>
> That would trump all the other suggestions conclusively. Anyone good at
> expert systems?
Sigh. I guess we need to start over again.
Last year around this time, there was one of the recurring retreads of
this topic named "PostgreSQL Configuration Tool for Dummies":
http://archives.postgresql.org/pgsql-performance/2007-06/msg00386.php
Josh Berkus pointed out that he already had the "expert system" part of
this problem solved pretty well with a spreadsheet:
http://pgfoundry.org/docman/view.php/1000106/84/calcfactors.sxc (that's in
the OpenOffice Calc format if you don't know the extension)
That particular spreadsheet has more useful tuning suggestions in this
area than 99.9% of PostgreSQL users have or will ever know. You can
nitpick the exact recommendations, but the actual logic and thinking
involved is pretty well solved. It could use a touch of tweaking and
modernization but it's not too far off from being as good as you're likely
to get at making guesses without asking the user too many questions.
There is one ugly technical issue, that you can't increase shared_buffers
usefully in many situations because of SHMMAX restrictions, and that issue
will haunt any attempt to be completely automatic.
Where Josh got hung up, where I got hung up, where Lance Campbell stopped
at with his Dummies tool, and what some unknown number of other people
have been thwarted by, is that taking that knowledge and turning it into a
tool useful to users is surprisingly difficult. The reason for that is
the current postgresql.conf file and how it maps internally to GUC
information isn't particularly well suited to automated generation,
analysis, or updates. I think Josh got lost somewhere in the parsing the
file stage. The parts I personally got stuck on were distinguishing
user-added comments from ones the system put in, plus being completely
dissatisfied with how lossy the internal GUC process was (I would like a
lot more information out of pg_settings than is currently there).
Lance's helper tool was hobbled by the limitations of being a simple web
application.
That's the background to Josh's proposal. It has about an 80% overlap
with what I was working on suggesting, which is why I jumped on his
bandwagon so fast. The outline at
http://wiki.postgresql.org/wiki/GUCS_Overhaul includes the superset of our
respective thinking on the first step here toward straightening out this
mess, further expanded with observations made in this thread.
I would respectfully point out that comments about the actual tuning
itself have no bearing whatsoever on this proposal. This is trying to
nail down all the features needed to support both doing an initial
generation and subsequent incremental improvements to the postgresql.conf
file, while also reducing some redundancy in the code itself. Reducing
the scope to only handling initial generation would make this a smaller
task. But it happens to fall out that the work required to cut down on
the redundancy and the work required to better support incremental
updates are almost the same. Josh's stated agenda is to get this
right in one swoop, with only one version worth of disruption to the
format, and that goal is served better IMHO as well by addressing all
these changes as one batch.
I will attempt to resist further outbursts about non-productive comments
here, and each time I am tempted I will instead work on prototyping the
code I think this really needs.
--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD
> Andrew Dunstan <andrew@dunslane.net> writes:
>> Tom Lane wrote:
>>> * Can we build a "configuration wizard" to tell newbies what
>>> settings
>>> they need to tweak?
>
>> That would trump all the other suggestions conclusively. Anyone
>> good at
>> expert systems?
>
> How far could we get with the answers to just three questions:
>
> * How many concurrent queries do you expect to have?
>
> * How much RAM space are you willing to let Postgres use?
>
> * How much "overhead" disk space are you willing to let Postgres use?
>
> concurrent queries drives max_connections, obviously, and RAM space
> would drive shared_buffers and effective_cache_size, and both of them
> would be needed to size work_mem. The third one is a bit weird but
> I don't see any other good way to set the checkpoint parameters.
>
> If those aren't enough questions, what else must we ask? Or maybe
> they
> aren't the right questions at all --- maybe we should ask "is this a
> dedicated machine or not" and try to extrapolate everything else from
> what we (hopefully) can find out about the hardware.
I'd be interested in putting together a framework+GUI client to do this
cleanly in a cross-platform (Windows, Linux, Solaris, OS X as a bare
minimum) sort of way, if no-one else already has such a thing.
A framework doesn't get you all the way there, but it makes it a
whole lot easier to work on what base data and information you
need, and how easy it is to map pgsql-performance and #postgresql
gut feel onto something more algorithmic.
Cheers,
Steve
For these three things:
1) max connections/workmem
2) shared buffers/effective cache
3) bgwriter/checkpoint parameters
What are the stats or measures that PostgreSQL produces (or that you
wish it produced but it currently doesn't) that the "masters" (i.e.
people who tune PostgreSQL effectively, like you, Greg, Simon, Robert,
Peter, Josh, Jim, etc. - sorry if I missed others) actually use to
decide whether to increase or decrease a value?
I tune my postgresql.conf mainly on folklore, and "assimilated
understanding" from reading the lists and blogs... But I haven't come
across (or rather, haven't come across and remembered/bookmarked)
anything that helps someone sample/read any stats or counts to find
bottleneck points which to start tuning.
Stuff like (remembering that I've never had to really work at tuning
because PG has always been "fast enough" for my needs, so take this with
a grain of salt)
* Are backends always writing out dirty buffers because there are no free
ones? This might mean tweaking settings affecting bgwriter.
* Are the evicted buffers ones with really high usage counts? This
might mean an increase in shared_buffers would help?
* Are we always spilling small amounts of data to disk for sorting? A
small work_mem increase might help...
* Are all our reads from disk really quick? This probably means OS
pagecache has our whole DB, and means random_page_cost could be
tweaked?
If we could get a definitive list of things like this, or maybe just
comprehensive, or even at least agreed-to-not-be-wrong things to look
at, that would go a long way toward documenting *how* to tune PG
effectively, and could lead to any projects that want to tackle
examining a running cluster and suggesting some config changes...
a.
--
Aidan Van Dyk                                        Create like a god,
aidan@highrise.ca                                    command like a king,
http://www.highrise.ca/                              work like a slave.
I think the best way to explain this is with a timeline of two
concurrent sessions, s1 and s2.
s1: BEGIN;
s2: BEGIN;
s1: UPDATE db SET b = data WHERE a = key; -- matches no rows
s2: UPDATE db SET b = data WHERE a = key; -- matches no rows
s1: INSERT INTO db(a,b) VALUES (key, data); -- inserts with a = key
s1: COMMIT;
s2: INSERT INTO db(a,b) VALUES (key, data); -- unique violation!
Notice that neither of the updates block, because neither match any
rows, so there is no conflict.
The exception handling in the loop in the example then retries s2
entirely, which then (correctly) updates the tuple rather than
inserting. There's some degenerate case, I suppose, when sessions are
perfectly synchronized with DELETEs such that it causes an infinite
loop, but that's a pretty unrealistic scenario.
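For reference, the retry loop under discussion looks roughly like this (sketched from the merge_db example in the PostgreSQL documentation; table and column names match the timeline above):

```sql
CREATE FUNCTION merge_db(key INT, data TEXT) RETURNS VOID AS
$$
BEGIN
    LOOP
        -- First try to update an existing row for this key.
        UPDATE db SET b = data WHERE a = key;
        IF found THEN
            RETURN;
        END IF;
        -- Not there, so try to insert it. If a concurrent session
        -- inserted the same key first, we get a unique violation
        -- and loop back to retry the UPDATE.
        BEGIN
            INSERT INTO db(a,b) VALUES (key, data);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- do nothing; loop around and try the UPDATE again
        END;
    END LOOP;
END;
$$
LANGUAGE plpgsql;
```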
SERIALIZABLE transactions don't really affect this, because the updates
still don't match any rows. Serializable transactions really only affect
the snapshot that you see and whether an UPDATE/DELETE causes a
serialization error (which can only happen if they match some rows).
The thing about a relation constraint (like UNIQUE) is that two
completely separate tuples can conflict with each other. That requires a
relation-level synchronization mechanism, because it can't assure that
the constraint is satisfied by examining tuples (or any proper subsets
of the relation) independently.
The general way to implement a relation constraint is by using LOCK
TABLE to prevent other concurrent sessions from interfering (as you
suggest above). This obviously has very bad performance, which is why
UNIQUE indexes provide another synchronization mechanism at the sub-
transaction level.
Regards,
Jeff Davis
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
> To be more specific: we have a heavily loaded server with SCSI disks
> (RAID 0 on a SAS controller), making a total of 500GB. Actually, there
> are 16GB RAM, representing about 2.5% of db size.
That's a reasonable ratio. Being able to hold somewhere around 1 to 5% of
the database in RAM seems to be common nowadays, and that works OK. But
it's impossible to have a hard "rule" here, because the working set needed
to operate the queries and other activities on your server is completely
dependent on the code you're running, your performance expectations, and
your expected user load.
--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
My advice is to put the statements you wrote into a function and call
the function using the CallableStatement class... have a look at the
JDBC documentation at:
http://jdbc.postgresql.org/documentation/83/callproc.html
--
Greetings and hugs...
Marco Antonio Frias Butrón
Slackware Linux User
Linux Registered User #356229 - http://counter.li.org/
That seems a bit overdesigned. What are the prospects that two builds
of the same Postgres version are going to have different sets of
resource managers in them?
regards, tom lane
> It currently opens on the CD's localhost which contains the introduction information.
And, as David has been rightly yelping, that's the only place an autoplay
setup should ever go--to the local CD or USB key that has been inserted.
If someone wants to add a prominent link to that intro information that is
clearly stated to lead to the main PostgreSQL site, I don't think anybody
would complain if following that link took you to a special page or used a
parameter to access the page that showed someone got there via the
distributed key. That's as intrusive as even a give-away should ever get
if you don't want people to get a bad taste of your project.
--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD
> The subtle issue here is that what makes sense for the database
> configuration changes over time; there's not just one initial generation
> and you're done. postgresql.conf files can end up moving from one machine
> to another for example. I think something that doesn't recognize that
> reality and move toward a "tune-up" capability as well as initial
> generation wouldn't be as useful,
As I just mentioned to someone else, I don't see any point in framing it
as an "initial generation" problem at all. initdb will already give you
settings that work, for some value of "work". The config wizard really
only needs to deal with the case of tuning an existing installation.
> and that's where putting the settings
> inside the database helps so much.
How does it help, pray tell? If you mean being able to see what the
existing settings are, pg_settings already does that.
regards, tom lane
As I said: upcoming release. (1.8.3 was skipped because of a bug.)
Dave is going to announce it soon if no problems are found. I am testing
right now.
Yes, it should be a good release. Download it to be one of the first to
have it - and test it. :)
Regards
Erwin
> That would trump all the other suggestions conclusively. Anyone good at
> expert systems?
How far could we get with the answers to just three questions:
* How many concurrent queries do you expect to have?
* How much RAM space are you willing to let Postgres use?
* How much "overhead" disk space are you willing to let Postgres use?
concurrent queries drives max_connections, obviously, and RAM space
would drive shared_buffers and effective_cache_size, and both of them
would be needed to size work_mem. The third one is a bit weird but
I don't see any other good way to set the checkpoint parameters.
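As a sketch, answers of roughly "100 concurrent queries, 1GB of RAM for Postgres, a few GB of overhead disk" might map onto a postgresql.conf fragment like this (the numbers are purely illustrative, not recommendations from this thread):

```
max_connections = 100
shared_buffers = 256MB         # a slice of the 1GB RAM grant
effective_cache_size = 768MB   # the rest, assumed cached by the OS
work_mem = 4MB                 # sized against max_connections
checkpoint_segments = 16       # bounded by the overhead disk budget
```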
If those aren't enough questions, what else must we ask? Or maybe they
aren't the right questions at all --- maybe we should ask "is this a
dedicated machine or not" and try to extrapolate everything else from
what we (hopefully) can find out about the hardware.
regards, tom lane
> Well, the stuff included into the dump by pg_dump -v is informative,
> too. But we stopped doing that by default because of complaints.
> I remain unconvinced that this proposal won't suffer the same fate.
I think it would be reasonable to only include the list of options used in
the dump if you use one that changes what appears in the dump. That way,
you wouldn't see anything by default. But if you make a modification that
will likely break a diff with an existing dump done with the default
parameters, the option change that introduced that should show at the very
beginning.
--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD
> The real problem we need to solve is how to allow newbies to have the
> system auto-configured to something that more or less solves their
> problems. Putting the config settings in XML does not accomplish that,
> and neither does putting them inside the database.
The subtle issue here is that what makes sense for the database
configuration changes over time; there's not just one initial generation
and you're done. postgresql.conf files can end up moving from one machine
to another for example. I think something that doesn't recognize that
reality and move toward a "tune-up" capability as well as initial
generation wouldn't be as useful, and that's where putting the settings
inside the database helps so much.
Also, there's a certain elegance to having an optimization tool that works
against either a new installation or an existing one. I personally have
zero interest in a one-shot config generator. It just doesn't solve the
problems I see in the field. Performance starts out just fine even with
the default settings when people first start, and then goes to hell after
the system has been running for a while (and possibly moved to another
machine). By that point nobody wants to mess with their configuration
file unless it's one simple change at a time.
--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
> It's certainly one thing to create an initial postgresql.conf from
> scratch after some inquiry, but a different level of problems to deal
> with when offering to change the settings. IMHO initial creation isn't
> enough, users will feel even more left alone if there are no tools
> helping them further. I guess most users will start tweaking after the
> server is already running for a while, with some config already in place.
Indeed, the wizard should be designed to assist with tweaking an
existing installation. I see no value at all to trying to run it
before/during initdb.
> That's when file format and/or APIs come into play. Preserving comments
> and/or using them in a wizard isn't too easy with the current format.
[ shrug... ] We have already wasted more effort arguing about this
issue than it would take to do something good enough for a wizard
tool. The great thing about simple text formats is that they're pretty
robust. I think all we need to do is comment out any existing
setting(s) of the target variable and add the new setting before the
first such, perhaps with a comment indicating what added it and when.
If the user finds this too ugly, he can tweak it with a text editor.
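Concretely, the edit described would leave the file looking something like this (tool name and values hypothetical):

```
#shared_buffers = 32MB        # commented out by config-wizard, 2008-06-05
shared_buffers = 512MB        # added by config-wizard, 2008-06-05
```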
This is even assuming that the tool needs to edit the file itself,
rather than just give advice. The advice is the hard part, folks;
could we stop obsessing about trivia?
regards, tom lane
I would greatly appreciate any advice on how to read the following
output from EXPLAIN ANALYZE.
When running the query the first time from a fresh restore it takes
about 55 sec.
On a second run it takes about 2 sec.
After the vacuum it takes about 36 sec no matter how many times I run it.
My application is growing rapidly in both size and complexity. I really
need to get a handle on this soon.
*Query run from a fresh restore*
"Sort (cost=242145.26..242145.27 rows=1 width=1200) (actual
time=55100.785..55100.820 rows=158 loops=1)"
" Sort Key: (sum(pipe0_1_.numpieces)), (sum(pipe0_.numfeet))"
" Sort Method: quicksort Memory: 52kB"
" -> HashAggregate (cost=242145.24..242145.25 rows=1 width=1200)
(actual time=55099.970..55100.194 rows=158 loops=1)"
" -> Nested Loop (cost=0.00..242145.20 rows=1 width=1200)
(actual time=391.496..55092.841 rows=427 loops=1)"
" Join Filter: (pipe0_.popipe_id = popipe1_.id)"
" -> Nested Loop (cost=0.00..240701.08 rows=1 width=684)
(actual time=2.543..-2171.666 rows=3174 loops=1)"
" -> Nested Loop (cost=0.00..240700.46 rows=1
width=676) (actual time=2.486..-1946.110 rows=3174 loops=1)"
" -> Nested Loop (cost=0.00..240699.93 rows=1
width=644) (actual time=2.431..-1462.137 rows=3174 loops=1)"
" -> Nested Loop (cost=0.00..240699.56
rows=1 width=136) (actual time=2.357..-2010.958 rows=3174 loops=1)"
" -> Nested Loop
(cost=0.00..240698.36 rows=1 width=128) (actual time=2.279..-1534.495
rows=3174 loops=1)"
" -> Nested Loop
(cost=0.00..240698.08 rows=1 width=136) (actual time=0.909..-1323.123
rows=6066 loops=1)"
" -> Nested Loop
(cost=0.00..240692.30 rows=1 width=136) (actual time=0.882..-1625.883
rows=6066 loops=1)"
" -> Nested
Loop (cost=0.00..240690.71 rows=3 width=104) (actual
time=0.826..-1155.887 rows=6066 loops=1)"
" ->
Nested Loop (cost=0.00..240665.27 rows=48 width=72) (actual
time=0.766..-436.684 rows=7236 loops=1)"
" ->
Index Scan using t_pipe_pkey on t_pipe pipe0_ (cost=0.00..240391.77
rows=48 width=56) (actual time=0.759..796.516 rows=7236 loops=1)"
"
Filter: ((subplan) IS NULL)"
"
SubPlan"
"
-> Nested Loop Left Join (cost=0.00..24.85 rows=1 width=8) (actual
time=0.048..0.077 rows=1 loops=9644)"
"
-> Nested Loop (cost=0.00..16.55 rows=1 width=48) (actual
time=-0.013..0.042 rows=1 loops=9644)"
"
-> Index Scan using t_pipe_pkey on t_pipe p (cost=0.00..8.27 rows=1
width=48) (actual time=-0.021..0.006 rows=1 loops=9644)"
"
Index Cond: (id = $0)"
"
-> Index Scan using t_generic_item_pkey on t_generic_item gi
(cost=0.00..8.27 rows=1 width=16) (actual time=0.057..0.031 rows=1
loops=9644)"
"
Index Cond: (gi.id = $0)"
"
-> Index Scan using uk_projection_weight_grade_endfinish_actor_condtion
on t_projection pr (cost=0.00..8.28 rows=1 width=56) (actual
time=0.005..0.005 rows=0 loops=9644)"
"
Index Cond: ((pr.popipe_id = p.popipe_id) AND (pr.weight_id =
p.weight_id) AND (pr.grade_id = p.grade_id) AND (pr.endfinish_id =
p.endfinish_id) AND (pr.actor_id = gi.actor_id) AND (pr.condition_id =
p.condition_id))"
" ->
Index Scan using t_generic_item_pkey on t_generic_item pipe0_1_
(cost=0.00..5.69 rows=1 width=32) (actual time=-0.068..-0.138 rows=1
loops=7236)"
"
Index Cond: (pipe0_1_.id = pipe0_.id)"
" -> Index
Scan using pk_t_condition on t_condition condition7_ (cost=0.00..0.52
rows=1 width=40) (actual time=0.039..0.004 rows=1 loops=7236)"
"
Index Cond: (condition7_.id = pipe0_.condition_id)"
"
Filter: condition7_.needsprojection"
" -> Index Scan
using pk_t_endfinish on t_endfinish endfinish4_ (cost=0.00..0.52 rows=1
width=40) (actual time=0.045..0.004 rows=1 loops=6066)"
" Index
Cond: (endfinish4_.id = pipe0_.endfinish_id)"
" -> Index Scan using
t_action_pkey on t_state state9_ (cost=0.00..5.77 rows=1 width=16)
(actual time=0.004..0.089 rows=1 loops=6066)"
" Index Cond:
(state9_.id = pipe0_1_.state_id)"
" -> Index Scan using
pk_t_status on t_status status10_ (cost=0.00..0.27 rows=1 width=8)
(actual time=-0.080..-0.037 rows=1 loops=6066)"
" Index Cond:
(status10_.id = state9_.status_id)"
" Filter:
status10_.needsprojection"
" -> Index Scan using pk_t_actor
on t_actor actor5_ (cost=0.00..1.18 rows=1 width=16) (actual
time=0.006..-0.154 rows=1 loops=3174)"
" Index Cond: (actor5_.id =
pipe0_1_.actor_id)"
" -> Index Scan using pk_t_contact on
t_contact contact6_ (cost=0.00..0.36 rows=1 width=524) (actual
time=0.086..0.169 rows=1 loops=3174)"
" Index Cond: (contact6_.id =
actor5_.contact_id)"
" -> Index Scan using pk_t_grade on t_grade
grade3_ (cost=0.00..0.52 rows=1 width=40) (actual time=0.004..-0.156
rows=1 loops=3174)"
" Index Cond: (grade3_.id = pipe0_.grade_id)"
" -> Index Scan using pk_t_weight on t_weight
weight2_ (cost=0.00..0.60 rows=1 width=16) (actual time=0.004..0.006
rows=1 loops=3174)"
" Index Cond: (weight2_.id = pipe0_.weight_id)"
" -> Merge Join (cost=0.00..1428.20 rows=1274 width=532)
(actual time=0.531..17.466 rows=998 loops=3174)"
" Merge Cond: (popipe1_.id = popipe1_1_.id)"
" -> Index Scan using t_po_pipe_pkey on t_po_pipe
popipe1_ (cost=0.00..170.49 rows=2549 width=8) (actual
time=0.335..2.914 rows=2549 loops=3174)"
" -> Index Scan using t_action_pkey on t_state
popipe1_1_ (cost=0.00..1229.50 rows=3641 width=524) (actual
time=0.092..10.843 rows=5727 loops=3174)"
" Filter: (NOT popipe1_1_.spec)"
"Total runtime: 55101.547 ms"
*Before vacuum but after a second run of the query*
"Sort (cost=241636.97..241637.00 rows=10 width=183) (actual
time=2182.240..2182.273 rows=158 loops=1)"
" Sort Key: (sum(pipe0_1_.numpieces)), (sum(pipe0_.numfeet))"
" Sort Method: quicksort Memory: 52kB"
" -> HashAggregate (cost=241636.63..241636.81 rows=10 width=183)
(actual time=1924.839..1925.114 rows=158 loops=1)"
" -> Nested Loop (cost=467.32..241636.26 rows=10 width=183)
(actual time=29.712..1922.303 rows=427 loops=1)"
" -> Hash Join (cost=467.32..241630.12 rows=10 width=175)
(actual time=29.701..1662.213 rows=427 loops=1)"
" Hash Cond: (pipe0_.grade_id = grade3_.id)"
" -> Hash Join (cost=465.37..241628.04 rows=10
width=143) (actual time=29.608..1661.590 rows=427 loops=1)"
" Hash Cond: (pipe0_.endfinish_id =
endfinish4_.id)"
" -> Nested Loop (cost=463.31..241625.85
rows=10 width=111) (actual time=29.531..1660.932 rows=427 loops=1)"
" -> Nested Loop
(cost=463.31..241622.11 rows=10 width=111) (actual time=29.520..1657.811
rows=427 loops=1)"
" -> Nested Loop
(cost=463.31..241610.14 rows=10 width=103) (actual time=29.508..1655.306
rows=427 loops=1)"
" Join Filter:
(pipe0_.popipe_id = popipe1_.id)"
" -> Nested Loop
(cost=0.00..240600.89 rows=12 width=96) (actual time=1.127..307.107
rows=3174 loops=1)"
" -> Nested Loop
(cost=0.00..240594.13 rows=24 width=104) (actual time=0.124..-232.727
rows=6066 loops=1)"
" -> Nested
Loop (cost=0.00..240455.39 rows=24 width=104) (actual
time=0.106..-11.779 rows=6066 loops=1)"
" ->
Nested Loop (cost=0.00..240429.94 rows=48 width=72) (actual
time=0.094..469.592 rows=7236 loops=1)"
" ->
Index Scan using t_pipe_pkey on t_pipe pipe0_ (cost=0.00..240156.45
rows=48 width=56) (actual time=0.089..175.278 rows=7236 loops=1)"
"
Filter: ((subplan) IS NULL)"
"
SubPlan"
"
-> Nested Loop Left Join (cost=0.00..24.85 rows=1 width=8) (actual
time=-0.093..-0.091 rows=1 loops=9644)"
"
-> Nested Loop (cost=0.00..16.55 rows=1 width=48) (actual
time=-0.018..-0.017 rows=1 loops=9644)"
"
-> Index Scan using t_pipe_pkey on t_pipe p (cost=0.00..8.27 rows=1
width=48) (actual time=0.003..-0.024 rows=1 loops=9644)"
"
Index Cond: (id = $0)"
"
-> Index Scan using t_generic_item_pkey on t_generic_item gi
(cost=0.00..8.27 rows=1 width=16) (actual time=0.003..-0.023 rows=1
loops=9644)"
"
Index Cond: (gi.id = $0)"
"
-> Index Scan using uk_projection_weight_grade_endfinish_actor_condtion
on t_projection pr (cost=0.00..8.28 rows=1 width=56) (actual
time=0.003..0.003 rows=0 loops=9644)"
"
Index Cond: ((pr.popipe_id = p.popipe_id) AND (pr.weight_id =
p.weight_id) AND (pr.grade_id = p.grade_id) AND (pr.endfinish_id =
p.endfinish_id) AND (pr.actor_id = gi.actor_id) AND (pr.condition_id =
p.condition_id))"
" ->
Index Scan using t_generic_item_pkey on t_generic_item pipe0_1_
(cost=0.00..5.69 rows=1 width=32) (actual time=0.002..0.003 rows=1
loops=7236)"
"
Index Cond: (pipe0_1_.id = pipe0_.id)"
" -> Index
Scan using pk_t_condition on t_condition condition7_ (cost=0.00..0.52
rows=1 width=40) (actual time=0.037..0.002 rows=1 loops=7236)"
"
Index Cond: (condition7_.id = pipe0_.condition_id)"
"
Filter: condition7_.needsprojection"
" -> Index Scan
using t_action_pkey on t_state state9_ (cost=0.00..5.77 rows=1
width=16) (actual time=-0.039..-0.039 rows=1 loops=6066)"
" Index
Cond: (state9_.id = pipe0_1_.state_id)"
" -> Index Scan using
pk_t_status on t_status status10_ (cost=0.00..0.27 rows=1 width=8)
(actual time=0.045..0.130 rows=1 loops=6066)"
" Index Cond:
(status10_.id = state9_.status_id)"
" Filter:
status10_.needsprojection"
" -> Materialize
(cost=463.31..483.53 rows=2022 width=23) (actual time=0.001..-0.357
rows=998 loops=3174)"
" -> Hash Join
(cost=81.35..461.29 rows=2022 width=23) (actual time=2.740..13.504
rows=998 loops=1)"
" Hash Cond:
(popipe1_1_.id = popipe1_.id)"
" -> Seq Scan on
t_state popipe1_1_ (cost=0.00..330.83 rows=5778 width=15) (actual
time=0.008..8.412 rows=5732 loops=1)"
" Filter:
(NOT spec)"
" -> Hash
(cost=49.49..49.49 rows=2549 width=8) (actual time=2.701..2.701
rows=2549 loops=1)"
" -> Seq
Scan on t_po_pipe popipe1_ (cost=0.00..49.49 rows=2549 width=8) (actual
time=0.010..257.753 rows=2549 loops=1)"
" -> Index Scan using pk_t_actor
on t_actor actor5_ (cost=0.00..1.18 rows=1 width=16) (actual
time=0.003..0.004 rows=1 loops=427)"
" Index Cond: (actor5_.id =
pipe0_1_.actor_id)"
" -> Index Scan using pk_t_contact on
t_contact contact6_ (cost=0.00..0.36 rows=1 width=16) (actual
time=0.004..0.005 rows=1 loops=427)"
" Index Cond: (contact6_.id =
actor5_.contact_id)"
" -> Hash (cost=1.47..1.47 rows=47 width=40)
(actual time=0.059..0.059 rows=47 loops=1)"
" -> Seq Scan on t_endfinish
endfinish4_ (cost=0.00..1.47 rows=47 width=40) (actual
time=0.016..0.033 rows=47 loops=1)"
" -> Hash (cost=1.42..1.42 rows=42 width=40)
(actual time=0.071..0.071 rows=42 loops=1)"
" -> Seq Scan on t_grade grade3_
(cost=0.00..1.42 rows=42 width=40) (actual time=0.029..0.045 rows=42
loops=1)"
" -> Index Scan using pk_t_weight on t_weight weight2_
(cost=0.00..0.60 rows=1 width=16) (actual time=0.605..0.606 rows=1
loops=427)"
" Index Cond: (weight2_.id = pipe0_.weight_id)"
"Total runtime: 2183.176 ms"
*After vacuum -- I see no improvement even after several runs*
"Sort (cost=243084.23..243084.24 rows=5 width=107) (actual
time=36649.839..36649.876 rows=158 loops=1)"
" Sort Key: (sum(pipe0_1_.numpieces)), (sum(pipe0_.numfeet))"
" Sort Method: quicksort Memory: 52kB"
" -> HashAggregate (cost=243084.08..243084.17 rows=5 width=107)
(actual time=36649.104..36649.337 rows=158 loops=1)"
" -> Nested Loop (cost=81.35..243083.89 rows=5 width=107)
(actual time=3.495..36330.092 rows=427 loops=1)"
" -> Nested Loop (cost=81.35..243080.83 rows=5 width=99)
(actual time=3.486..36325.964 rows=427 loops=1)"
" -> Nested Loop (cost=81.35..243078.18 rows=5
width=93) (actual time=3.477..36322.061 rows=427 loops=1)"
" -> Nested Loop (cost=81.35..243075.53
rows=5 width=86) (actual time=3.468..36318.062 rows=427 loops=1)"
" -> Nested Loop (cost=81.35..243073.66
rows=5 width=86) (actual time=3.457..36312.957 rows=427 loops=1)"
" -> Nested Loop
(cost=81.35..243067.67 rows=5 width=78) (actual time=3.441..36305.745
rows=427 loops=1)"
" Join Filter:
(pipe0_.popipe_id = popipe1_.id)"
" -> Nested Loop
(cost=0.00..240558.99 rows=6 width=71) (actual time=1.185..1522.885
rows=3174 loops=1)"
" -> Nested Loop
(cost=0.00..240554.20 rows=17 width=79) (actual time=0.101..440.045
rows=6066 loops=1)"
" -> Nested
Loop (cost=0.00..240455.93 rows=17 width=79) (actual
time=0.090..-131.182 rows=6066 loops=1)"
" ->
Nested Loop (cost=0.00..240430.48 rows=48 width=72) (actual
time=0.083..591.038 rows=7236 loops=1)"
" ->
Index Scan using t_pipe_pkey on t_pipe pipe0_ (cost=0.00..240156.99
rows=48 width=56) (actual time=0.078..541.958 rows=7236 loops=1)"
"
Filter: ((subplan) IS NULL)"
"
SubPlan"
"
-> Nested Loop Left Join (cost=0.00..24.85 rows=1 width=8) (actual
time=0.048..0.077 rows=1 loops=9644)"
"
-> Nested Loop (cost=0.00..16.55 rows=1 width=48) (actual
time=0.067..0.095 rows=1 loops=9644)"
"
-> Index Scan using t_pipe_pkey on t_pipe p (cost=0.00..8.27 rows=1
width=48) (actual time=0.059..0.059 rows=1 loops=9644)"
"
Index Cond: (id = $0)"
"
-> Index Scan using t_generic_item_pkey on t_generic_item gi
(cost=0.00..8.27 rows=1 width=16) (actual time=0.031..0.031 rows=1
loops=9644)"
"
Index Cond: (gi.id = $0)"
"
-> Index Scan using uk_projection_weight_grade_endfinish_actor_condtion
on t_projection pr (cost=0.00..8.28 rows=1 width=56) (actual
time=-0.075..-0.075 rows=0 loops=9644)"
"
Index Cond: ((pr.popipe_id = p.popipe_id) AND (pr.weight_id =
p.weight_id) AND (pr.grade_id = p.grade_id) AND (pr.endfinish_id =
p.endfinish_id) AND (pr.actor_id = gi.actor_id) AND (pr.condition_id =
p.condition_id))"
" ->
Index Scan using t_generic_item_pkey on t_generic_item pipe0_1_
(cost=0.00..5.69 rows=1 width=32) (actual time=-0.069..-0.067 rows=1
loops=7236)"
"
Index Cond: (pipe0_1_.id = pipe0_.id)"
" -> Index
Scan using pk_t_condition on t_condition condition7_ (cost=0.00..0.52
rows=1 width=15) (actual time=-0.139..-0.209 rows=1 loops=7236)"
"
Index Cond: (condition7_.id = pipe0_.condition_id)"
"
Filter: condition7_.needsprojection"
" -> Index Scan
using t_action_pkey on t_state state9_ (cost=0.00..5.77 rows=1
width=16) (actual time=0.132..0.091 rows=1 loops=6066)"
" Index
Cond: (state9_.id = pipe0_1_.state_id)"
" -> Index Scan using
pk_t_status on t_status status10_ (cost=0.00..0.27 rows=1 width=8)
(actual time=0.047..0.091 rows=1 loops=6066)"
" Index Cond:
(status10_.id = state9_.status_id)"
" Filter:
status10_.needsprojection"
" -> Hash Join
(cost=81.35..460.86 rows=2004 width=23) (actual time=0.092..11.790
rows=998 loops=3174)"
" Hash Cond:
(popipe1_1_.id = popipe1_.id)"
" -> Seq Scan on
t_state popipe1_1_ (cost=0.00..330.83 rows=5727 width=15) (actual
time=0.087..8.880 rows=5732 loops=3174)"
" Filter: (NOT spec)"
" -> Hash
(cost=49.49..49.49 rows=2549 width=8) (actual time=2.507..2.507
rows=2549 loops=1)"
" -> Seq Scan on
t_po_pipe popipe1_ (cost=0.00..49.49 rows=2549 width=8) (actual
time=0.015..1.208 rows=2549 loops=1)"
" -> Index Scan using pk_t_actor
on t_actor actor5_ (cost=0.00..1.18 rows=1 width=16) (actual
time=0.011..0.011 rows=1 loops=427)"
" Index Cond: (actor5_.id =
pipe0_1_.actor_id)"
" -> Index Scan using pk_t_contact on
t_contact contact6_ (cost=0.00..0.36 rows=1 width=16) (actual
time=-0.597..-0.597 rows=1 loops=427)"
" Index Cond: (contact6_.id =
actor5_.contact_id)"
" -> Index Scan using pk_t_endfinish on
t_endfinish endfinish4_ (cost=0.00..0.52 rows=1 width=15) (actual
time=0.005..0.005 rows=1 loops=427)"
" Index Cond: (endfinish4_.id =
pipe0_.endfinish_id)"
" -> Index Scan using pk_t_grade on t_grade grade3_
(cost=0.00..0.52 rows=1 width=14) (actual time=0.004..0.005 rows=1
loops=427)"
" Index Cond: (grade3_.id = pipe0_.grade_id)"
" -> Index Scan using pk_t_weight on t_weight weight2_
(cost=0.00..0.60 rows=1 width=16) (actual time=0.005..0.006 rows=1
loops=427)"
" Index Cond: (weight2_.id = pipe0_.weight_id)"
"Total runtime: 36650.609 ms"
Tom Lane wrote:
> Jason Long <mailing.list@supernovasoftware.com> writes:
>
>> I have a query that takes 2 sec if I run it from a freshly restored
>> dump. If I run a full vacuum on the database it then takes 30 seconds.
>>
>
>
>> Would someone please comment as to why I would see a 15x slow down by
>> only vacuuming the DB?
>>
>
> EXPLAIN ANALYZE of both cases might yield some insight.
>
> regards, tom lane
>
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
2008/6/4 Heikki Linnakangas <heikki@enterprisedb.com>:
> Stephen Denne wrote:
>>
>> Hannu Krosing wrote:
>>>
>>> The simplest form of synchronous wal shipping would not even need
>>> postgresql running on slave, just a small daemon which reports when wal
>>> blocks are a) received and b) synced to disk.
>>
>> While that does sound simple, I'd presume that most people would want the
>> guarantee of the same version of postgresql installed wherever the logs are
>> ending up, with the log receiver speaking the same protocol version as the
>> log sender. I imagine that would be most easily achieved through using
>> something like the continuously restoring startup mode of current
>> postgresql.
>
> Hmm, WAL version compatibility is an interesting question. Most minor
> releases haven't changed the WAL format, and it would be nice to allow
> running different minor versions in the master and slave in those cases. But
> it's certainly not unheard of to change the WAL format. Perhaps we should
> introduce a WAL version number, similar to catalog version?
>
> --
> Heikki Linnakangas
> EnterpriseDB
> http://www.enterprisedb.com
--
------
Koichi Suzuki
That would trump all the other suggestions conclusively. Anyone good at
expert systems?
cheers
andrew
As you knew and I doubted, it NOW WORKS!
Whew!
You can bet that I'm keeping this snippet of code handy.
Thank you very much,
Ralph
==================================
On Jun 4, 2008, at 4:34 PM, Stephan Szabo wrote:
> On Wed, 4 Jun 2008, Ralph Smith wrote:
>
>> -- ==========================================
>> good_date := to_date(year||'-'||month||'-'||day , 'YYYY-MM-DD') ;
>>
>> RAISE INFO 'good_date = %', good_date ;
>>
>> UsecsD := EXTRACT(EPOCH FROM DATE 'good_date') ;
>
> You want something like:
> UsecsD := EXTRACT(EPOCH FROM good_date);
>
> Note the lack of single quotes. You want to use the variable's
> value, not
> a literal string with the value 'good_date'.
>
> * Can we present the config options in a more helpful way (this is 99%
> a documentation problem, not a code problem)?
>
> * Can we build a "configuration wizard" to tell newbies what settings
> they need to tweak?
It's certainly one thing to create an initial postgresql.conf from
scratch after some inquiry, but a different level of problems to deal
with when offering to change the settings. IMHO initial creation isn't
enough; users will feel even more left alone if there are no tools
helping them further. I guess most users will start tweaking after the
server is already running for a while, with some config already in place.
That's when file format and/or APIs come into play. Preserving comments
and/or using them in a wizard isn't too easy with the current format.
Regards,
Andreas
PostgreSQL IS reporting the correct number of rows inserted into the
master table. 0.
There's some setting in hibernate that will tell it to ignore that
returned number of rows, but I can't remember it right now.
Valérie SCHNEIDER wrote:
> [...]
> Question: I am very surprised by the selection time in
> case (3), which is very long compared to the other
> selects, even though it retrieves no rows at all (because a
> previous update moved them).
>
To be frank, so am I. I have just run seven tests (and I am preparing an
eighth that will run while I sleep); all of them take roughly the same
time to execute even though the configuration differs considerably. I
have not analyzed everything yet, because I avoid using my machine while
a test is running, but it seems I was right about the writes during the
first SELECT and the absence of writes during the second. Still, my
conclusion now looks suspect to me. I will keep digging into it, because
I admit I do not understand at all what is going on (and that bothers me
a lot :) ).
> Note: in my detailed description I ran, between the
> queries, the commands:
> select * from pg_statio_all_tables
> where relname = 'test_update';
> described in the PostgreSQL article of June 2008
> in Gnu Linux Magazine France, in order to see the use
> of the PostgreSQL caches and the blocks read from disk.
>
For my part, I kept the bgwriter stats; I hope to get something out of
them tomorrow.
Good night.
--
Guillaume.
--
Sent via pgsql-fr-generale mailing list (pgsql-fr-generale@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-fr-generale
On Wed, Jun 4, 2008 at 5:49 PM, Jason Long
<mailing.list@supernovasoftware.com> wrote:
> Yes.
> This is what I do.
> 1. Vacuum full
> 2. Reindex force
> This is done every night
>
> My users complained about a report being slow.
>
> I grabbed a dump and restored it to my development machine.
>
> The query worked just fine, but not on the production server.
>
> I did a vacuum full and then reran the query. Now it performs just as slow
> as on the production machine. That is the only change I made.
>
> I am in the process of trying to figure out the EXPLAIN ANALYZE for both
> queries, but I have never had to read this output and I am unfamiliar with
> how to find the problem. I will post both here and maybe someone can help
> me out.
>
>
>
> Joshua D. Drake wrote:
>
> On Wed, 2008-06-04 at 17:02 -0500, Jason Long wrote:
>
>
> I have a query that takes 2 sec if I run it from a freshly restored
> dump. If I run a full vacuum on the database it then takes 30 seconds.
>
>
> If you run it a second time after the vacuum full?
>
> Joshua D. Drake
>
>
>
>
> I am developing an ASP.NET web site in which I will use PostgreSQL
> and such a site is going to be hosted in a hosting company (not in
> my server).
>
> All of the information will be processed through the ASP.NET
> application using ADO.NET.
>
> Does the hosting company need to have PostgreSQL installed in its
> server(s)?
> If not, is there an advantage if the hosting company has PostgreSQL
> installed?
Yes, you want PostgreSQL to be hosted in the same location as your
web server. In theory it does not have to be, but the last thing you
want is an interactive web site that is slow because database access
runs over a less-than-optimal network.
John DeSoi, Ph.D.
--
Sent via pgsql-novice mailing list (pgsql-novice@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-novice
> Would someone please comment as to why I would see a 15x slow down by
> only vacuuming the DB?
EXPLAIN ANALYZE of both cases might yield some insight.
regards, tom lane
> -- ==========================================
> good_date := to_date(year||'-'||month||'-'||day , 'YYYY-MM-DD') ;
>
> RAISE INFO 'good_date = %', good_date ;
>
> UsecsD := EXTRACT(EPOCH FROM DATE 'good_date') ;
You want something like:
UsecsD := EXTRACT(EPOCH FROM good_date);
Note the lack of single quotes. You want to use the variable's value, not
a literal string with the value 'good_date'.
-----Original Message-----
From: pgsql-general-owner@postgresql.org on behalf of Ralph Smith
Sent: Wed 6/4/2008 4:04 PM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Script errors on run
Same problem, see below
--------------------------------
On Jun 4, 2008, at 3:30 PM, Stephan Szabo wrote:
> On Wed, 4 Jun 2008, Ralph Smith wrote:
>
>> date_string := to_date(year||'-'||month||'-'||day , 'YYYY-MM-DD') ;
>> RAISE INFO 'date_string = %', date_string ;
>> good_date := to_timestamp(date_string, 'YYYY-MM-DD') ;
>> RAISE INFO 'good_date = %', good_date ;
>
> This seems like a lot of extra work, due to the implicit cast from
> date to
> timestamp. I think
> good_date := to_date(year || '-' || month || '-' || day, 'YYYY-MM-DD')
> might work and just be simpler.
>
>> UsecsD := EXTRACT(EPOCH FROM TIMESTAMP 'good_date') ;
>
> If good_date's already a timestamp, I think this should just be:
> EXTRACT(EPOCH FROM good_date)
=========================================
*************************
The code:
DECLARE
year varchar ;
month varchar ;
day varchar ;
pslash1 int ;
pslash2 int ;
year_len int ;
month_len int ;
day_len int ;
date_string varchar ;
good_date date ;
UsecsD double precision ;
Usecs int ;
BEGIN
-- My cleansing code here
-- ==========================================
good_date := to_date(year||'-'||month||'-'||day , 'YYYY-MM-DD') ;
RAISE INFO 'good_date = %', good_date ;
UsecsD := EXTRACT(EPOCH FROM DATE 'good_date') ;
Usecs := CAST(UsecsD AS INT) ;
RETURN Usecs ;
END ;
$$ LANGUAGE plpgsql ;
-
*************************
Here's what I'm getting now:
smithrn@flexo:~/PL-SQL$ psql -U airburst airburst
Welcome to psql 8.2.6, the PostgreSQL interactive terminal.
Type: \copyright for distribution terms
\h for help with SQL commands
\? for help with psql commands
\g or terminate with semicolon to execute query
\q to quit
airburst=# \i misc.sql \i library_date.sql
CREATE FUNCTION
CREATE FUNCTION
airburst=# select usecs_from_date('2008-06-04');
INFO: good_date = 2008-06-04
ERROR: invalid input syntax for type date: "good_date"
CONTEXT: SQL statement "SELECT EXTRACT(EPOCH FROM DATE 'good_date')"
PL/pgSQL function "usecs_from_date" line 92 at assignment
airburst=#
>2008/6/4 David E. Wheeler <david@kineticode.com>:
>>
>> Exactly. The issue is that application developers, who are not DBAs, have no
>> idea how to tune PostgreSQL, and postgresql.conf is daunting and confusing.
>> So they use a different database that's "faster".
>
>Do you think any better config can help? It's not possible. And
>you can't tune a database without good knowledge of the applications
>that use it. Automatic tools are toys. But some default
>PostgreSQL parameters are not optimal.
I think it would be an enormous help for beginners if they had a "simple"
tuning tool which would tell them which values were altered (and possibly
why) from Postgres' default settings based on some basic information.
Like:
- machine hardware (disk layout, OS, installed memory, etc.)
- application usage (no. of clients, read/write activity, etc)
I don't think that such a tool could tune the database perfectly (or even
very well), but at least people new to Postgres would know where to start
looking for tuning it to their needs.
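Such a tool needs little more than the stock rules of thumb to get started. A minimal sketch of the idea in Python; the ratios below are assumptions based on commonly cited community advice of the era, not authoritative recommendations:

```python
def suggest_settings(ram_mb, max_connections=100):
    """Rough starting points only, using rules of thumb often repeated on
    the lists: shared_buffers ~ 1/4 of RAM, effective_cache_size ~ 3/4,
    and work_mem sized so that one sort per connection, all at once,
    still fits in a quarter of RAM. Measure before trusting any of this."""
    return {
        "max_connections": max_connections,
        "shared_buffers": f"{ram_mb // 4}MB",
        "effective_cache_size": f"{ram_mb * 3 // 4}MB",
        "work_mem": f"{max(1, ram_mb // (4 * max_connections))}MB",
    }

# A wizard would then print each suggestion next to the server's current
# value (from SHOW ALL) with a one-line reason, so users learn the knobs.
```

For example, `suggest_settings(4096, max_connections=64)` suggests 1024MB of shared_buffers and 16MB of work_mem for a 4GB machine.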
And I am not speaking about end users running an application that uses
Postgres; I am talking about application developers like me who port
their applications to use Postgres.
Rainer
Try doing what was suggested?
Greg Williamson
Senior DBA
DigitalGlobe
Hi Kev!
On May 28, 2:27 pm, Kev <kevinjamesfi...@gmail.com> wrote:
> Hi everyone,
>
> I've had this problem maybe three times in the past few months and
> didn't think much of it, but since it happened again yesterday I
> thought I'd report it. Unfortunately I can't remember what I was
> doing at the time, I think just interacting with the main window
> somehow, like clicking on a tree node, possibly. Whatever it was, it
> was a routine action and I was in the middle of just using the program
> normally, when all of a sudden, the pgAdmin window disappears
> completely, along with its taskbar entry, and is no longer running at
> all. In the Task Manager it is not listed. Just instantaneously no
> longer running with no other indication that anything went wrong.
>
> This is under Windows Server 2003 Standard SP2.
(...)
It is hard to comment on that at all if you don't remember what you
did to trigger the crash. Even harder if you don't mention your
version of pgAdmin. (!)
A couple of bugs causing crashes like that have been fixed in the past,
and a few more fixes are in the upcoming release 1.8.4. An upgrade might help.
Regards
Erwin
--
Sent via pgadmin-support mailing list (pgadmin-support@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgadmin-support
If you run it a second time after the vacuum full?
Joshua D. Drake
> On Wed, 4 Jun 2008, Ralph Smith wrote:
>
>> date_string := to_date(year||'-'||month||'-'||day , 'YYYY-MM-DD') ;
>> RAISE INFO 'date_string = %', date_string ;
>> good_date := to_timestamp(date_string, 'YYYY-MM-DD') ;
>> RAISE INFO 'good_date = %', good_date ;
>
> This seems like a lot of extra work; due to the implicit cast from
> date to timestamp, I think
> good_date := to_date(year || '-' || month || '-' || day, 'YYYY-MM-DD')
> might work and just be simpler.
>
>> UsecsD := EXTRACT(EPOCH FROM TIMESTAMP 'good_date') ;
>
> If good_date's already a timestamp, I think this should just be:
> EXTRACT(EPOCH FROM good_date)
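The quoting pitfall behind the original error can be seen in isolation
(a standalone illustration, not taken from the thread):

```sql
-- DATE 'good_date' is a date literal whose content is the literal
-- string "good_date"; PostgreSQL tries to parse that string as a date:
SELECT EXTRACT(EPOCH FROM DATE 'good_date');
-- ERROR:  invalid input syntax for type date: "good_date"

-- With a real date literal (or, inside PL/pgSQL, the bare variable
-- name) the extraction works as intended:
SELECT EXTRACT(EPOCH FROM DATE '2008-06-04');  -- 1212537600
```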
=========================================
*************************
The code:
DECLARE
year varchar ;
month varchar ;
day varchar ;
pslash1 int ;
pslash2 int ;
year_len int ;
month_len int ;
day_len int ;
date_string varchar ;
good_date date ;
UsecsD double precision ;
Usecs int ;
BEGIN
-- My cleansing code here
-- ==========================================
good_date := to_date(year||'-'||month||'-'||day , 'YYYY-MM-DD') ;
RAISE INFO 'good_date = %', good_date ;
UsecsD := EXTRACT(EPOCH FROM DATE 'good_date') ;
Usecs := CAST(UsecsD AS INT) ;
RETURN Usecs ;
END ;
$$ LANGUAGE plpgsql ;
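For reference, here is the tail of the function with the fixes from the
replies applied; this is a sketch only, with the original cleansing code
still elided. The one functional change is that EXTRACT now sees the
variable itself rather than a quoted literal:

```sql
    good_date := to_date(year || '-' || month || '-' || day, 'YYYY-MM-DD') ;
    RAISE INFO 'good_date = %', good_date ;
    -- Use the variable directly: wrapping it as DATE 'good_date' makes
    -- PostgreSQL parse the string "good_date" itself as a date, which fails.
    UsecsD := EXTRACT(EPOCH FROM good_date) ;
    -- Note: EXTRACT(EPOCH FROM ...) yields seconds since 1970-01-01,
    -- not microseconds, despite the variable names.
    Usecs := CAST(UsecsD AS INT) ;
    RETURN Usecs ;
```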
-
*************************
Here's what I'm getting now:
smithrn@flexo:~/PL-SQL$ psql -U airburst airburst
Welcome to psql 8.2.6, the PostgreSQL interactive terminal.
Type: \copyright for distribution terms
\h for help with SQL commands
\? for help with psql commands
\g or terminate with semicolon to execute query
\q to quit
airburst=# \i misc.sql \i library_date.sql
CREATE FUNCTION
CREATE FUNCTION
airburst=# select usecs_from_date('2008-06-04');
INFO: good_date = 2008-06-04
ERROR: invalid input syntax for type date: "good_date"
CONTEXT: SQL statement "SELECT EXTRACT(EPOCH FROM DATE 'good_date')"
PL/pgSQL function "usecs_from_date" line 92 at assignment
airburst=#
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general