> Stephen Frost <sfrost@snowman.net> writes:
> > I dislike, and doubt that I'd use, this approach. At the end of the
> > day, it ends up processing the same (very large amount of data) multiple
> > times.
>
> Well, that's easily avoided: just replace the third step by restoring
> directly to the target database.
>
> pg_restore --schema-before-data whole.dump >before.sql
> edit before.sql
> pg_restore --schema-after-data whole.dump >after.sql
> edit after.sql
> psql -f before.sql target_db
> pg_restore --data-only -d target_db whole.dump
> psql -f after.sql target_db
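
For reference, that sequence scripts up fairly naturally. A minimal
sketch, assuming the proposed --schema-before-data and
--schema-after-data options behave as described above (the file and
database names are placeholders):

#!/bin/sh
set -e  # stop at the first error rather than restoring half a schema
pg_restore --schema-before-data whole.dump >before.sql
pg_restore --schema-after-data whole.dump >after.sql
# hand-edit before.sql and after.sql here, then:
psql -f before.sql target_db
pg_restore --data-only -d target_db whole.dump
psql -f after.sql target_db
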
This would depend on the dump being in the custom format, though I
suppose that ends up being true for any use of these options. I've
never really been a fan of the custom format, largely because it
doesn't buy you all that much and it makes changing things more
difficult: you have to extract whatever you want to change and then
omit it from the restore.
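
For what it's worth, the usual way to do that extract-and-omit dance is
pg_restore's TOC list. A minimal sketch, assuming a custom-format dump
(the archive and database names are placeholders):

# dump in custom format, then pull out its table of contents
pg_dump -Fc -f whole.dump source_db
pg_restore -l whole.dump >toc.list
# comment out (prefix with ';') any TOC entries to skip, then restore
# only what's left
pg_restore -L toc.list -d target_db whole.dump

It works, but it is exactly the indirection complained about above: you
edit a list describing the dump rather than the SQL itself.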
I can see some advantage to having the entire dump contained in a single
file and still being able to pull out pieces based on before/after.
Should we get a binary format that is much faster, I could see myself
being more likely to use pg_restore. The same goes for parallelization
or, in my fantasies, the ability to copy schemas, tables, indexes, etc.,
in 'raw' PG format between servers. Worse than having to vi an insanely
large file, or split it up to modify the pieces you want, is having to
rebuild indexes, especially GiST ones. That's another topic, though.
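
On the parallelization point, a minimal sketch of what that could look
like, assuming a pg_restore with a parallel-jobs option (it needs a
custom-format archive and a direct database connection, not a script on
stdout):

# restore using four parallel worker processes
pg_restore -j 4 -d target_db whole.dump

And for the index-rebuild pain, a larger maintenance_work_mem can take
some of the sting out of building big indexes, GiST included (the value
and index name here are just examples):

psql -c "SET maintenance_work_mem = '1GB'; REINDEX INDEX my_gist_idx;" target_db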
Thanks,
Stephen