Dumb script for Picasaweb backup on Linux server & Amazon S3

I just wrote a quick script to pull a dump of my Picasaweb albums onto my server and then push it to Amazon S3. Overall I trust Google with my data, but it's always a poor idea to leave all your eggs in a single bucket.

OK, here's the script (poorly written code; I literally spent 10 minutes on this, so suggestions to improve my coding are more than welcome!)

#!/bin/bash

# Where the albums are dumped and the archive is built
Destination=<PUT YOUR DESTINATION HERE!>

# Grab the album list (name,link) and keep only the album names
mkdir -p "$Destination/tmp"
google picasa list-albums | cut -d"," -f1 > "$Destination/tmp/album_list.txt"

# Download every album, one by one
cat "$Destination/tmp/album_list.txt" | while read album
do
    google picasa get "$album" "$Destination/tmp"
done

# Compress, encrypt and ship to S3
FileName=PicsBackup-`date '+%d-%B-%Y'`.tar
tar -cpzf "$Destination/$FileName" "$Destination/tmp"
# gpg needs a recipient after -r: your key ID or e-mail address
gpg --output "$Destination/$FileName.pgp" -r <PUT YOUR GPG RECIPIENT HERE!> --always-trust --encrypt "$Destination/$FileName"
s3cmd put "$Destination/$FileName.pgp" s3://YOUR-AWS-S3-BUCKET-ADDRESS-HERE

# Clean up the local copies
rm -r "$Destination/tmp/"*
rm "$Destination/$FileName"
rm "$Destination/$FileName.pgp"

How to use

Simply download the Google CLI scripts and get your Google account working with the installed stack. If you also want the Amazon S3 backup step, install and configure s3cmd. Once both are configured with your accounts, just set the executable bit on the script and run it!
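For example, assuming you saved the script as picasa-backup.sh (the file name, paths and the cron schedule below are just illustrative):

# one-time setup: the first picasa command walks you through Google authentication,
# and s3cmd --configure prompts for your AWS keys
google picasa list-albums
s3cmd --configure

# make the script executable and run it
chmod +x picasa-backup.sh
./picasa-backup.sh

# optional: a weekly cron entry, e.g. every Sunday at 03:00
# 0 3 * * 0 /path/to/picasa-backup.sh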


Code logic

I couldn't find an easy way to download the entire album base from Picasa in one go. There seems to be a bug in the Google CLI tools around directory creation, so google picasa get .* . fails right after pulling the first album. The Google CLI does, however, offer a listing of album names (along with their hyperlinks) via the list-albums parameter. So the first part of the code pulls that list and cuts out the first field of the output, using the comma as the delimiter. Next, the output is written to a text file, which is read line by line in a loop, and the loop simply downloads each album one by one. Once the downloads are complete, tar creates a compressed archive, followed by gpg to encrypt it. The encrypted file is then uploaded to Amazon S3 using the s3cmd tool, and finally all the downloaded files are deleted!
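To make the first step concrete: list-albums prints one album per line as name,link, so cutting on the comma keeps only the names. The album names below are made up; only the shape of the output matters here:

$ google picasa list-albums
Family Trip,https://picasaweb.google.com/...
Birthday Party,https://picasaweb.google.com/...

$ google picasa list-albums | cut -d"," -f1
Family Trip
Birthday Party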

On Amazon S3 I have a bucket expiry rule which takes care of rotation and removal of old data. I could spend a few more minutes to make it more sophisticated, but this one just works! ;)
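For reference, newer versions of s3cmd can set such a rule from the command line too; this is a sketch, assuming your s3cmd supports the expire command, with a 90-day retention and the same placeholder bucket as in the script:

s3cmd expire s3://YOUR-AWS-S3-BUCKET-ADDRESS-HERE --expiry-days=90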

Moral: My programming is crappy, no doubt!