Yes, we’re using Subversion. I know that distributed version control systems (e.g. Git) are cool and we might get there sometime, but for miscellaneous reasons we’re still using SVN. For the record, some of us are using git-svn, and we work and release from trunk (part of the lean startup methodology), so branching and merging are less of an issue.
I did some work to migrate our repository and spent some time setting up our SVN repo. Here are some bits and pieces I collected from scattered sites or made up myself to facilitate the SVN backup. I hope it will help anyone starting from scratch.
For the backup I’m using the great svnbackup script. Here are parts of our script (launched by crontab):
now=$(date +%F)
svnbackup.sh --out-dir $OUT_DIR --file-name $FILE_NAME -v $REPO_LOCATION
RETVAL=$?
if [ $RETVAL -ne 0 ]; then
    mail -s "ERROR: SVN backup on $now" $KACHING_OPS
    exit 1
fi
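The crontab entry that launches it could look something like this (the schedule, script path, and log path here are my own assumptions, not taken from our actual setup):

```
# m h dom mon dow  command -- run the SVN backup nightly at 02:30
30 2 * * * /usr/local/bin/svn-backup.sh >> /var/log/svn-backup.log 2>&1
```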
Then the script syncs the backup directory up to S3 and verifies that the content of the last_saved file matches the latest revision in SVN, which it gets using:
last_revision=$(svn log -q --limit=1 https://chb2.kcprod.info:4242/svn/backend | head -2 | tail -1 | cut -c 2-6)
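The comparison itself is then a plain string check. A minimal sketch of that step (the check_revisions helper and the last_saved path are my own names, not what our script actually uses):

```shell
#!/bin/sh
# Hypothetical helper: compare the revision recorded by the backup
# with the live repository's latest revision.
check_revisions() {
    # $1 = revision read from last_saved, $2 = live repo's latest revision
    if [ "$1" = "$2" ]; then echo "OK"; else echo "MISMATCH"; fi
}

# In the backup script this would be wired up roughly like (not run here):
#   saved_revision=$(cat "$OUT_DIR/last_saved")
#   [ "$(check_revisions "$saved_revision" "$last_revision")" = "OK" ] || \
#       mail -s "ERROR: SVN backup is behind" $KACHING_OPS

check_revisions 4242 4242   # prints OK
```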
A backup is not enough; we must constantly test that when the time comes we’ll actually be able to use it. Therefore we added a script, triggered by Nagios, that runs on another machine and does a full repo rebuild from scratch.
The first thing the script does is a brute-force cleanup of the repo:
rm -rf $SVN_REPO
svnadmin create $SVN_REPO
Then it syncs from S3 to get all the backup files, and loads them into the SVN repo in the right order:
for file in $(ls $SVN_BACKUP_FILES_DIR/*.bzip2 | sort -t '-' -k 4 -n)
do
    bzip2 -dc $file | svnadmin load $SVN_REPO
done
The next step is getting a few revisions and checking that their attributes (e.g. commit messages) match in both the live and the rebuilt backup repos.
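One way to spot-check this is to diff revision properties (svn:log, svn:author) between the two repos with svn propget. A sketch under my own naming, not the actual script (revprops_match, LIVE_URL, and the sampled revision are assumptions):

```shell
#!/bin/sh
# Hypothetical helper: do two revision-property values agree?
revprops_match() {
    [ "$1" = "$2" ]
}

# Sketch of the check (not run here; needs both repos reachable):
#   rev=$last_revision
#   live_log=$(svn propget --revprop -r "$rev" svn:log "$LIVE_URL")
#   backup_log=$(svn propget --revprop -r "$rev" svn:log "file://$SVN_REPO")
#   revprops_match "$live_log" "$backup_log" || echo "rev $rev: log mismatch" >&2

revprops_match "fix build" "fix build" && echo "match"   # prints match
```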
Just because I’m paranoid, we also have svnsync running against an SVN slave server in our second data center, where every commit is backed up on the fly; some of our systems (e.g. WebSVN) read from it.
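For anyone wanting to set up a similar mirror, the one-time svnsync setup goes roughly like this. The paths are assumptions on my part; the one non-obvious bit is that svnsync needs permission to change revision properties on the mirror, so the default pre-revprop-change hook (which rejects everything) must be replaced with one that always succeeds:

```shell
#!/bin/sh
# Sketch of svnsync mirror setup; SLAVE_REPO and MASTER_URL are assumptions.
SLAVE_REPO=${SLAVE_REPO:-/var/svn/mirror}
MASTER_URL=${MASTER_URL:-https://chb2.kcprod.info:4242/svn/backend}

hook_body() {
    # A pre-revprop-change hook that allows all revprop changes,
    # which svnsync requires on the mirror repository.
    printf '#!/bin/sh\nexit 0\n'
}

# One-time setup (not executed here; needs the svn tools and network):
#   svnadmin create "$SLAVE_REPO"
#   hook_body > "$SLAVE_REPO/hooks/pre-revprop-change"
#   chmod +x "$SLAVE_REPO/hooks/pre-revprop-change"
#   svnsync init "file://$SLAVE_REPO" "$MASTER_URL"
# Then, from the master's post-commit hook or from cron:
#   svnsync sync "file://$SLAVE_REPO"

hook_body
```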