When copying a large number of files, I found that tools like tar and rsync are more inefficient than they need to be because of the overhead of opening and closing many files. I wrote an open source tool called fast-archiver that is faster than tar for these scenarios: it works faster by performing multiple concurrent file operations.

On a backup of over two million files, fast-archiver takes 27 minutes to archive, vs. tar:

    $ time fast-archiver -c -o /dev/null /db/data
    1008.92user 663.00system 27:38.27elapsed 100%CPU (0avgtext+0avgdata 24352maxresident)k
    0inputs+0outputs (0major+1732minor)pagefaults 0swaps

    $ time tar -cf - /db/data | cat > /dev/null
    tar: Removing leading `/' from member names
    tar: /db/data/base/16408/12445.2: file changed as we read it

To transfer files between servers, you can use fast-archiver with ssh, like this:

    ssh srchostname "cd /db; fast-archiver -c data -exclude=data/\*.pid" | fast-archiver -x

You can also try using the bbcp command to do your transfer. It's a buffered parallel ssh that really screams. We can usually get 90%+ line-rate provided we can keep the pipe fed.

    $ bbcp -s 8 -w 64M -N io 'tar -cO srcdirectory' desthostname:'tar -x -C destdir'

Normally, we try real hard to avoid having to move stuff around; we use ZFS pools that we can always just "add" more disk space to. But if we have a "live" filesystem that may take hours (or days) to copy even when going full-blast, we do it in two steps:

- Make a ZFS snapshot, and transfer it to the new pool on the new machine.
- Make a second snapshot, and send it as an incremental.

The incremental snapshot only includes the (much smaller) change-set since the first, so it goes through relatively quickly. Once the incremental snapshot is completed, you can turn off the original and cut over to the new copy, and your "offline downtime" is kept to a minimum. We also send our ZFS dumps over bbcp as well.
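The two-step snapshot migration described above can be sketched as a sequence of `zfs send`/`zfs receive` commands. This is only an outline; the pool, dataset, snapshot, and host names (`tank/data`, `newtank/data`, `newhost`, `@migrate1`, `@migrate2`) are placeholders I've made up, not names from the original post.

```shell
# Hypothetical names throughout -- adjust to your environment.

# Step 1: full snapshot, streamed to the pool on the new machine
# while the original filesystem stays live.
zfs snapshot tank/data@migrate1
zfs send tank/data@migrate1 | ssh newhost zfs receive newtank/data

# Step 2: a second snapshot sent as an incremental; only the
# change-set since @migrate1 travels over the wire.
zfs snapshot tank/data@migrate2
zfs send -i tank/data@migrate1 tank/data@migrate2 | ssh newhost zfs receive newtank/data

# Once the incremental completes, stop writers on the original
# and cut over to newtank/data on the new machine.
```

Piping the `zfs send` stream through bbcp instead of plain ssh, as the post suggests, follows the same shape.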
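Both the fast-archiver-over-ssh and bbcp examples above rely on the same archive-over-a-pipe pattern: one archiver writes a stream to stdout, another unpacks it from stdin. You can sanity-check the pattern locally with plain tar before involving the network; `srcdirectory` and `destdir` here are stand-in paths, not from the original post.

```shell
# Local dry run of the streaming pattern: the first tar writes an
# archive to stdout, the second unpacks it from stdin into destdir.
mkdir -p srcdirectory destdir
echo "payload" > srcdirectory/example.txt
tar -cf - srcdirectory | tar -xf - -C destdir
# The copied file now exists at destdir/srcdirectory/example.txt.
cat destdir/srcdirectory/example.txt
```

Over a network, the pipe is simply replaced by ssh or bbcp between the two tar processes.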