Tuesday, July 8, 2008

[SQL] Best practice for long-lived journal tables: bigint or recycling IDs?

Hello,

I have some tables that continually collect statistics, and then over time are
pruned as the stats are aggregated into more useful formats.

For some of these tables, it is foreseeable that the associated sequences
will be incremented past the maximum value of the "int" type in the normal
course of things.

I see two options to prepare for that:

1. Convert the primary keys to "bigint", which should be good enough "forever".
I suppose there would be some minor storage and performance penalty. (Sketched
below.)

2. Reset the sequence at some point (also sketched below). There would be no
"collisions", because the older rows would have long been pruned out. I suppose
there is an improbable edge case in which we restore some old data from tape
and are then confused because some new data has the same IDs, but as I said,
these tables are used as temporary holding locations, not permanent storage.
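
For concreteness, here is roughly what I have in mind for option 1. The table
and column names below are made up, and my understanding is that in 8.x this
ALTER rewrites the table under an exclusive lock, so it would need a
maintenance window:

    -- Option 1: widen the primary key column to bigint.
    -- "stats_log" and "id" are placeholder names for one of our journal tables.
    ALTER TABLE stats_log
        ALTER COLUMN id TYPE bigint;
    -- As I understand it, the backing sequence already counts in 8 bytes,
    -- so only the column type needs to change.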
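
And option 2 would be something like this (again, a placeholder sequence
name); I believe the sequence could also be told to wrap around on its own
with CYCLE instead of a manual reset:

    -- Option 2: reset the sequence once the old rows are long gone.
    ALTER SEQUENCE stats_log_id_seq RESTART WITH 1;

    -- Or cap it at the int4 ceiling and let it wrap automatically:
    ALTER SEQUENCE stats_log_id_seq MAXVALUE 2147483647 CYCLE;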

Both options have some appeal to me. What have others done?

Mark

--
. . . . . . . . . . . . . . . . . . . . . . . . . . .
Mark Stosberg Principal Developer
mark@summersault.com Summersault, LLC
765-939-9301 ext 202 database driven websites
. . . . . http://www.summersault.com/ . . . . . . . .
