Wednesday, June 11, 2008

Re: [GENERAL] array column and b-tree index allowing only 8191 bytes

Hi Alvaro,

Thanks for the hint. I've since experimented with GIN and GiST and ran
a small custom-script pgbench test.

Recalling from my previous message, the int[] column can hold a
maximum of 5000 values per row. Given that, I judged GIN to be the
best option, but inserting is really slow. The test was performed on a
small EC2 instance; I raised maintenance_work_mem to 512MB, but
inserting 50K rows still takes more than an hour.
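
For reference, the setup looks roughly like this (table and column
names are made up for illustration; the real schema differs):

    -- PostgreSQL 8.3 with the intarray contrib module installed
    CREATE TABLE items (
        id   serial PRIMARY KEY,
        vals int[]  -- up to 5000 elements per row
    );

    SET maintenance_work_mem = '512MB';

    -- GIN index over the array elements via intarray's operator class
    CREATE INDEX items_vals_gin ON items USING gin (vals gin__int_ops);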

I also tested GiST. Inserts run quickly, but running pgbench with 100
clients, each making 10 selects on a random value contained in the
int[], takes the machine load up to values such as 88, which is
definitely a no-go.
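
For completeness, the GiST index and the pgbench custom script were
along these lines (the value range here is a placeholder, not the real
one):

    -- GiST index using intarray's signature-based operator class
    CREATE INDEX items_vals_gist ON items USING gist (vals gist__int_ops);

    -- select_test.sql, run as: pgbench -n -c 100 -t 10 -f select_test.sql
    \setrandom val 1 100000
    SELECT id FROM items WHERE vals @> ARRAY[:val];

Each of the 100 clients runs that SELECT 10 times against a random
element, which is what drives the load up.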

What, if any, would be the recommended options to improve this
scenario? Not using intarray? :-)

Cheers,
Celso

On Sat, 2008-06-07 at 12:38 -0400, Alvaro Herrera wrote:
> Celso Pinto wrote:
>
> > So my questions are: is this at all possible? If so, is it possible to
> > increase that maximum size?
>
> Indexing the arrays themselves is probably pretty useless. Try indexing
> the elements, which you can do with the intarray contrib module.
