Backup All Route53 Hosted Zones and Put on S3 with Debian

A short guide to backing up all Route53 hosted zone records and storing the backup archive on Amazon S3 from a Debian server.

Installation

Install Python 2.7 and Python pip:

# apt-get update && apt-get install python2.7 python-pip

Install cli53, a command-line tool for managing Route53 records:

# pip install cli53

Install s3cmd, a command-line S3 client:

# apt-get install s3cmd

Configuration

Configure boto (replace key and secret with your own AWS access key and secret key):

$ echo "[Credentials]" > ~/.boto
$ echo "AWS_ACCESS_KEY_ID=key" >> ~/.boto
$ echo "AWS_SECRET_ACCESS_KEY=secret" >> ~/.boto
$ chmod 640 ~/.boto
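
Before moving on, it is worth checking that cli53 picks up the credentials from ~/.boto. Running the same command the backup script will use should list your hosted zones:

$ cli53 list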

Create a new script that lists all hosted zones, exports each one, and archives the result:

$ vim ~/route53.bak.sh

Paste the following:

#!/bin/bash
# written by Tomas (http://www.lisenet.com)
# copyleft free software

BACKUP_PATH="/tmp/route53backups"
ZONES_FILE="all-zones.bak"
DNS_FILE="all-dns.bak"

mkdir -p "$BACKUP_PATH"
cd "$BACKUP_PATH"

# get a list of all hosted zones
cli53 list > "$ZONES_FILE" 2>&1

# get a list of domain names only
sed '/Name:/!d' "$ZONES_FILE"|cut -d: -f2|sed 's/^.//'|sed 's/.$//' > "$DNS_FILE"

# create backup files for each domain
while read -r line; do
        cli53 export --full "$line" > "$line.bak"
done < "$DNS_FILE"

# create an archive to put on S3
tar cvfz "$BACKUP_PATH.tgz" "$BACKUP_PATH"

exit 0

Save the file and make it executable:

$ chmod 0755 ~/route53.bak.sh
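
To keep the backups current, the script can be scheduled with cron. The entry below is only an example (a daily run at 01:00) and assumes the script lives in /home/sandy; adjust the path and schedule to suit your setup:

$ crontab -e
0 1 * * * /home/sandy/route53.bak.sh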

Run the s3cmd configuration wizard:

$ s3cmd --configure

Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.

Access key and Secret key are your identifiers for Amazon S3
Access Key: <key>
Secret Key: <secret>

Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password: ********
Path to GPG program [/usr/bin/gpg]:

When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP and can't be used if you're behind a proxy
Use HTTPS protocol [No]: yes

New settings:
  Access Key: <key>
  Secret Key: <secret>
  Encryption password: ********
  Path to GPG program: /usr/bin/gpg
  Use HTTPS protocol: True
  HTTP Proxy server name:
  HTTP Proxy server port: 0

Test access with supplied credentials? [Y/n]
Please wait...
Success. Your access key and secret key worked fine :-)

Now verifying that encryption works...
Success. Encryption and decryption worked fine :-)

Save settings? [y/N] y
Configuration saved to '/root/.s3cfg'
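
The generated configuration file stores your access keys in plain text, so tighten its permissions in the same way as ~/.boto:

$ chmod 600 ~/.s3cfg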

Create a new bucket:

$ s3cmd mb s3://route53-backup

Upload the backup file to the newly created bucket:

$ s3cmd put /tmp/route53backups.tgz s3://route53-backup
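
If you prefer the upload to happen as part of the backup run itself, a line along these lines can be appended to route53.bak.sh just before the exit 0. This is only a sketch: it assumes the s3://route53-backup bucket already exists, that s3cmd is configured for the user running the script, and the dated object name is simply one way of keeping more than a single copy in the bucket:

# upload the archive to S3 with a dated name (sketch)
s3cmd put "$BACKUP_PATH.tgz" "s3://route53-backup/route53backups-$(date +%F).tgz"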

List files in your bucket to make sure the backup was successfully uploaded:

$ s3cmd ls s3://route53-backup
2014-02-03 19:59     29264   s3://route53-backup/route53backups.tgz

You can download the backup file with:

$ s3cmd get s3://route53-backup/route53backups.tgz ~
s3://route53-backup/route53backups.tgz -> /home/sandy/route53backups.tgz  [1 of 1]
 29264 of 29264   100% in    0s    53.74 kB/s  done

Verify the integrity of the downloaded file by comparing its checksum against the original to rule out corruption in transit:

$ md5sum ~/route53backups.tgz /tmp/route53backups.tgz
4bf05a7d52c199c4272bd5bb4c5a8e76  /home/sandy/route53backups.tgz
4bf05a7d52c199c4272bd5bb4c5a8e76  /tmp/route53backups.tgz
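
A backup is only as good as the restore, and the exported files are BIND-style zone files that cli53 can load back into Route53. The commands below are only a sketch: example.com is a placeholder, the extracted path assumes the archive was created by the script above, and the exact import options differ between cli53 versions, so check cli53 import --help before touching a live zone.

$ tar xvfz ~/route53backups.tgz
$ cd tmp/route53backups
$ cli53 import --file example.com.bak example.com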

3 thoughts on "Backup All Route53 Hosted Zones and Put on S3 with Debian"

  1. Only this worked for me:

    #!/bin/bash
    
    BACKUP_PATH="./route53backups"
    ZONES_FILE="all-zones.bak"
    DNS_FILE="all-dns.bak"
    
    mkdir -p "$BACKUP_PATH"
    cd "$BACKUP_PATH"
    
    # get a list of all hosted zones
    cli53 list > $ZONES_FILE 2>&1
    
    # get a list of domain names only
    sed '/Name:/!d' $ZONES_FILE|cut -d: -f2|sed 's/^.//'|sed 's/.$//'|sed 's/.$//'|sed 's@"@@g' > $DNS_FILE
    
    # create backup files for each domain
    while read -r line; do
            cli53 export --full $line > $line.bak
    done < "$DNS_FILE"
    
    # create an archive to put on S3 (step back out so the relative path resolves)
    cd ..
    tar cvfz $BACKUP_PATH.tgz $BACKUP_PATH

    exit 0
