> Application requirement. We need to do something for each row retrieved from BIG
> and the something is expensive. We do the scan slowly (30 second sleep inside
> the loop) to amortize the cost.
Then do the processing in separate transactions like this (in pseudocode):
$last_id = -1;
do {
    begin transaction;
    # Fetch the next unprocessed row after the last one we handled.
    $result = select * from bigtable
              where id > $last_id
              and processed = false
              order by id limit 1;
    if ( empty($result) ) {
        rollback;                        # nothing left to process
        break;
    }
    do_something_expensive_with($result[0]);
    update bigtable set processed = true where id = $result[0]['id'];
    $last_id = $result[0]['id'];         # remember our position for the next query
    commit;                              # keep each transaction short
    sleep 30;                            # sleep with no transaction open
} while (true);
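For reference, here is the same loop as a minimal runnable sketch in Python with psycopg2; the connection string, the column order of bigtable, and do_something_expensive_with() are assumptions standing in for the real application.

import time
import psycopg2


def do_something_expensive_with(row):
    """Placeholder for the expensive per-row work."""
    pass


conn = psycopg2.connect("dbname=mydb")   # hypothetical connection string
last_id = -1

while True:
    with conn.cursor() as cur:
        # One short transaction per row: grab the next unprocessed row.
        cur.execute(
            "SELECT * FROM bigtable"
            " WHERE id > %s AND processed = false"
            " ORDER BY id LIMIT 1",
            (last_id,),
        )
        row = cur.fetchone()
        if row is None:
            conn.rollback()              # nothing left to process
            break
        do_something_expensive_with(row)
        last_id = row[0]                 # assumes id is the first column
        cur.execute(
            "UPDATE bigtable SET processed = true WHERE id = %s",
            (last_id,),
        )
    conn.commit()                        # end the transaction before sleeping
    time.sleep(30)                       # the sleep happens with no open transaction

The point is the same as in the pseudocode: the commit lands before the sleep, so no transaction is held open during the slow part of the loop.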
Always avoid long-running transactions; this advice applies to any
transactional database.
Regards
Tometzky
--
...although Eating Honey was a very good thing to do, there was a
moment just before you began to eat it which was better than when you
were...
Winnie the Pooh