Backing up your WordPress web site can be a complicated process, as there are many different parts that need backing up. You could simply back up the whole Linux installation, but that in itself produces a number of problems when restoring. It is easy to forget, when developing a backup procedure, that the purpose of a backup is to be restored! The procedure should also be easy and practical.

If you are using an EC2 server for your WordPress server, it is logical to back up to an s3 bucket. If you search, you will find a number of articles all using the same backup procedure. They do not clearly explain what is going on, and if you copy them verbatim, they will probably not work for your situation.

So, how do you back up a WordPress installation then?

There are three areas which should be backed up:

  1. The /etc/ directory configuration files.
  2. The WordPress installation directory, normally in /var/www/.
  3. The database – normally in MySQL.

Backing up the Configuration Files.

The tar utility is the recommended method to archive files and directories. The format is simple for this purpose:

tar -cf etc.tar /etc/


  • The c option means create an archive (a "tarball").
  • The f option signifies that the archive file name follows.
  • The etc.tar is the tarball file name.
  • The /etc/ is the directory to archive.

This will output an archive file called ‘etc.tar’ in the current working directory. It can be more logical to include a complete path name along with the file name.
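Before trusting an archive, it helps to confirm it actually contains what you expect. A minimal sketch, using a throwaway directory rather than the real /etc/ (the paths here are made up for illustration):

```shell
# Sketch: create an archive and list its contents to verify it.
# Uses a throwaway directory; substitute /etc/ in practice.
mkdir -p /tmp/demo-etc
echo "example=1" > /tmp/demo-etc/demo.conf

# -C changes directory first, so the archive stores relative paths
tar -cf /tmp/etc-demo.tar -C /tmp demo-etc

# -t lists the archive contents without extracting
tar -tf /tmp/etc-demo.tar
```

Note that tar strips the leading / from absolute paths by default; archiving with -C and relative paths makes restores more predictable.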

Backing up the WordPress Files.

As above for the configuration files, using the tar utility:

tar -cf wp.tar /var/www


  • The wp.tar is the tarball file name.
  • The /var/www/ is the path to the WordPress installation files.


Backing up the Database.

The database is normally MySQL, and the mysqldump utility will already have been installed with the MySQL server. So, the command to back up the database is:

mysqldump --add-drop-table -h<hostname> -u<username> -p<password> database > dbfile.sql


  • The --add-drop-table option adds extra statements to the SQL file to drop each table before creating it again.
  • The -h option specifies the hostname and is followed by the host name.
  • The -u option specifies the user name and is followed by the user name.
  • The -p option specifies the password for that user and is followed by the password. Note that there is no space between the option and its value.
  • The database is the name of the database to dump.
  • The > symbol is shell redirection; it directs the output of mysqldump into a file.
  • The dbfile.sql is the name of the dump file.

The host name, user name, password and database name are all available in the WordPress configuration file wp-config.php in the WordPress directory.
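These values can be pulled out of wp-config.php automatically rather than copied by hand. A hedged sketch, assuming the standard define('DB_NAME', 'value'); format WordPress uses; the sample file and its values below are made up for illustration, standing in for the real /var/www/wp-config.php:

```shell
# Sketch: extract database settings from wp-config.php.
# A sample file stands in for the real one for demonstration.
cat > /tmp/wp-config-sample.php <<'EOF'
define('DB_NAME', 'wordpress');
define('DB_USER', 'wp_user');
define('DB_PASSWORD', 'wp_pass');
define('DB_HOST', 'localhost');
EOF

# Print the second argument of a define('KEY', 'value'); line
get_define() {
    sed -n "s/.*define( *'$1', *'\([^']*\)' *).*/\1/p" /tmp/wp-config-sample.php
}

echo "host:     $(get_define DB_HOST)"
echo "database: $(get_define DB_NAME)"
echo "user:     $(get_define DB_USER)"
```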

Combining and Compressing.

Next, the three files have to be archived together and compressed.

tar -cjf backupfiles.bz2 file1 file2 file3


  • The j option (or --bzip2) compresses the tarball using the bzip2 standard.
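The combine-and-extract round trip can be checked end to end with throwaway files (the file names and contents here are examples only):

```shell
# Sketch: combine three files into a compressed tarball, then restore them.
cd "$(mktemp -d)"
echo "etc" > file1
echo "wordpress" > file2
echo "database" > file3

tar -cjf backupfiles.bz2 file1 file2 file3   # j = bzip2 compression

mkdir restore
tar -xjf backupfiles.bz2 -C restore          # x = extract
ls restore
```

The extension .tar.bz2 is the more conventional choice for a bzip2-compressed tarball, but tar does not care about the file name.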

Adding all of the above into a script to run:

#! /usr/bin/env bash

# Do not use unset variables
set -o nounset
# Exit on ANY form of error
set -o errexit

# Configuration Variables
etcPath="/etc/"
wpPath="/var/www/"
tmpPath="/tmp"

# Populate the following from the wp-config.php file
dbHostName=""
dbDatabaseName=""
dbUserName=""
dbUserPwd=""

today=$(date +"%Y-%m-%d")

workingPath="$tmpPath/backup-$today"
etcTarball="$workingPath/etc.tar"
wpTarball="$workingPath/wp.tar"
dbDumpFile="$workingPath/db.sql"
backupFile="$workingPath/backup-$today.bz2"

# Temp save in tmp directory
mkdir -p "$workingPath" || { echo "Cannot make temp directory"; exit 1; }
cd "$workingPath"

echo "Backing up etc."
tar -cf "$etcTarball" "$etcPath"

echo "Backing up WordPress."
tar -cf "$wpTarball" "$wpPath"

echo "Dumping Database."
mysqldump --add-drop-table \
    -h"$dbHostName" \
    -u"$dbUserName" \
    -p"$dbUserPwd" \
    "$dbDatabaseName" > "$dbDumpFile"

echo "Combining and Compressing."
tar -cjf "$backupFile" "$etcTarball" "$wpTarball" "$dbDumpFile"

The result is a backup file “backup-date.bz2” in the directory /tmp/backup-date/, ready to be transferred to offline storage. In this case, the offline storage will be an AWS s3 bucket.

So, now we have to add the AWS s3 storage access details and procedures.

Sending to AWS s3.

There are four pieces of information needed to use an s3 bucket:

  1. The s3 Endpoint.
  2. The s3 Bucket Name.
  3. The s3 Bucket Access Key ID.
  4. The s3 Bucket Secret Access Key.

The AWS s3 Endpoint.

The endpoint can be in two different formats, depending on the region of your s3 service. The simplified form exists really only for backwards-compatibility/legacy reasons in the US East region. Even if you are using this region, I would recommend including the region in the endpoint. So pick the endpoint address for your region.

See the AWS documentation on s3 endpoints for complete details.

There are also two different types of requests: path-style and virtual-hosted-style. See the AWS documentation for a complete description.

The AWS s3 Bucket Name.

Create an s3 bucket with a suitable name. See the AWS s3 documentation for complete details.

The AWS s3 User ID and Secret.

Create a new user that can access ONLY the s3 bucket. Do not add a password; only the user's Access Key ID and Secret Access Key will be needed. The AWS documentation provides the details for creating this user.

Warning: The user should only be able to access the s3 bucket for backing up the server.


For this user, under the Security Credentials tab, select Create Access Key. Remember to save the generated file to a SAFE place. The file contains the User Name, the Access Key ID and the Secret Access Key. The last two will be needed to access the bucket.

Under the Permissions tab, pull down the Inline Policies section and click Create User Policy. Select Custom Policy and add a suitable name. Using the template below, change the bucket information to match your bucket.

{
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": "arn:aws:s3:::*"
        },
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::Bucket-Name-Here",
                "arn:aws:s3:::Bucket-Name-Here/*"
            ]
        }
    ]
}

Cut and paste (or retype) the policy into the Policy Document field and then select Validate Policy. If all is well, select Apply Policy.



Final Complete Script.

We now have all the information needed to complete a script to write to the AWS s3 bucket:

# Change the following to match your information
s3EndPoint=""
s3BucketName=""
s3AccessKey=""
s3AccessSecret=""

To connect to the s3 bucket and transfer the file, the curl utility is suitable and simple to use.
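A minimal sketch of such a transfer, using the legacy AWS Signature Version 2 scheme (an HMAC-SHA1 over the request description, computed with openssl). Be aware that newer regions require Signature Version 4, for which the AWS CLI (aws s3 cp) is the simpler choice. All variable values below are placeholders:

```shell
# Sketch: upload the backup file to an s3 bucket with curl (Signature V2).
# Placeholder values; fill in from your own configuration.
s3EndPoint="s3.amazonaws.com"
s3BucketName="my-backup-bucket"
s3AccessKey="AKIA-EXAMPLE"
s3AccessSecret="example-secret"

file="backup-2015-01-01.bz2"
contentType="application/x-bzip2"
dateValue=$(date -R)

# Signature V2: HMAC-SHA1 of the request description, base64-encoded
stringToSign="PUT\n\n${contentType}\n${dateValue}\n/${s3BucketName}/${file}"
signature=$(printf "%b" "$stringToSign" \
    | openssl sha1 -hmac "$s3AccessSecret" -binary | base64)

# Upload only if the backup file actually exists
if [ -f "$file" ]; then
    curl -X PUT -T "$file" \
        -H "Host: ${s3BucketName}.${s3EndPoint}" \
        -H "Date: ${dateValue}" \
        -H "Content-Type: ${contentType}" \
        -H "Authorization: AWS ${s3AccessKey}:${signature}" \
        "https://${s3BucketName}.${s3EndPoint}/${file}"
fi
```

The Date header must match the date used in the string to sign exactly, or s3 will reject the request with a SignatureDoesNotMatch error.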