
Why is object storage getting exciting?

Last year saw many interesting developments, and one of them has been object storage. For those unaware, object storage is the de-facto form of cloud storage, storing data as objects instead of in a file-system hierarchy. This gives the option of simple plug-and-play horizontal scalability. It became popular when Amazon Web Services (AWS) launched S3. The idea was straightforward: pay-as-you-go storage with a charge of a few cents/GB/month to store data and a few cents/GB to egress data. No need to plan storage, hard disks, storage servers, or rack capacity, just a simple pay-as-you-go opex cost. Plus, top-tier cloud players do offer data redundancy: the API replies with “success” on uploads only when the data has been replicated to multiple datacenters.
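To get a feel for how simple the model is, here's a minimal sketch using s3cmd (assuming it is already configured with your AWS keys; the bucket name here is hypothetical). Storing and retrieving an object is a single command each, with zero capacity planning:

# upload a file as an object (billed per GB stored per month)
s3cmd put backup.tar.gz s3://my-example-bucket/backup.tar.gz
# pull it back down (billed per GB of egress)
s3cmd get s3://my-example-bucket/backup.tar.gz restored.tar.gz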

Updates from life, blog and more

Some updates from personal life…

I have joined the Fremont-based IP backbone & colocation provider Hurricane Electric and will be working on some cool things at AS6939. :)


Updates on blog…

I have changed the theme and entire look of the blog, re-designing it with new plugins, more tweaking, etc. As of now the blog has a much cleaner white theme which gives more space for posting, improved security with some ACLs, and forced HTTPS to stop telcos from injecting iframes into readers' pages on 3G networks (which is very bad and worrying). Also, with the use of a bunch of plugins, I am now hosting all static media content on AWS S3, avoiding local storage on the server and its backup. Running it on AWS S3 with geo-replication + CloudFront for CDN/efficient delivery made much more sense. Though it's sad that there's no easy way to integrate Google Cloud Storage with WordPress; S3, being the more mature product, makes it easier.
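For anyone wanting the same result without plugins, a rough sketch of the idea is a one-way sync of the WordPress uploads directory to S3 (bucket name and paths here are hypothetical; s3cmd must already be configured), with CloudFront then pointed at the bucket:

# mirror static media to S3 with public-read ACLs so the CDN can serve it
s3cmd sync --acl-public wp-content/uploads/ s3://my-blog-media/uploads/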

Dumb script for Picasaweb backup on Linux server & Amazon S3

Just wrote a quick script to pull a dump of my Picasaweb albums onto my server & further to Amazon S3. Overall I have good trust in Google for data, but it's always a poor idea to leave all your eggs in a single bucket.

OK, here's the script (poorly written code; I literally spent 10 mins on this, so suggestions to improve my coding are more than welcome!)

#!/bin/bash

Destination=<PUT YOUR DESTINATION HERE!>

# Working directory for the raw album dump
mkdir -p "$Destination/tmp"

# Pull the list of album names via GoogleCL
google picasa list-albums | cut -d"," -f1 > "$Destination/tmp/album_list.txt"

# Download each album into the temp directory
while read -r album
do
        google picasa get "$album" "$Destination/tmp"
done < "$Destination/tmp/album_list.txt"

# Pack everything into a date-stamped, gzipped tarball
FileName=PicsBackup-`date '+%d-%B-%Y'`.tar.gz
tar -cpzf "$Destination/$FileName" "$Destination/tmp"

# Encrypt the archive before shipping it off the server
gpg --output "$Destination/$FileName.pgp" -r <YOUR-GPG-KEY-ID-HERE> --always-trust --encrypt "$Destination/$FileName"

# Push the encrypted archive to Amazon S3
s3cmd put "$Destination/$FileName.pgp" s3://YOUR-AWS-S3-BUCKET-ADDRESS-HERE

# Clean up local copies
rm -r "$Destination/tmp"/*
rm "$Destination/$FileName"
rm "$Destination/$FileName.pgp"

How to use

Simply download the GoogleCL scripts and get your Google account working with the installed stack. Also, if you need Amazon S3 backup support, install & configure s3cmd. Once you have both of these configured with your account, simply give the script the executable bit & run!
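A minimal usage sketch, assuming the script is saved as picasa-backup.sh (a hypothetical name):

# one-time setup: interactive s3cmd configuration with your AWS keys
s3cmd --configure
# make the script executable and do a test run
chmod +x picasa-backup.sh
./picasa-backup.sh
# optional: schedule a daily run at 03:00 by adding this line via `crontab -e`
# 0 3 * * * /path/to/picasa-backup.sh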