Moving to Amazon S3
A couple of years ago I wrote about how I deployed this blog to Heroku. Since then everything has changed; it is now served from Amazon S3.
Why change?
It was prompted by Heroku's price changes last year. I think their new pricing model is completely reasonable, and their "free" tier is still generous, but Heroku can get expensive. This low-traffic site will cost very little to host with Amazon.
It seemed like a no-brainer to try it out, and this is how I did it.
Setting up Static Website Hosting
Amazon offer Static Website Hosting as a feature of S3. This serves a bucket over HTTP at a URL that Amazon gives you. The first step was to create a bucket to host the files. This is easily done through the Amazon S3 console. I called mine jordanelver.co.uk, not because I'm self-obsessed, but because that is the name of this website.
Configuring the bucket to serve the site was straightforward. Once I'd enabled Static Website Hosting in the bucket properties, the site was available at jordanelver-co-uk.s3-website-eu-west-1.amazonaws.com. Of course, I hadn't transferred anything to Amazon yet, and the permissions were not set, so nothing but an error message was served at this point.
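As an aside, the same bucket setup can be scripted with the AWS CLI rather than clicked through the console. I did it through the console, so treat this as a rough sketch of the equivalent commands, assuming the CLI is installed and configured:

# Create the bucket (the name needs to match the domain for website hosting to work).
aws s3 mb s3://jordanelver.co.uk --region eu-west-1

# Enable static website hosting with the index and error documents.
aws s3 website s3://jordanelver.co.uk/ --index-document index.html --error-document 404.html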
Permissions
Setting the correct permissions was by far the most confusing part of the whole process. There are two parts to this: 1) creating a user to interact with S3; and 2) setting the correct bucket permissions.
I first set up an IAM user that had access to the correct bucket. This allowed me to upload the files to Amazon. These credentials are used later as part of the automated deployment.
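For completeness, here's roughly what that looks like with the AWS CLI instead of the console. The user name blog-deploy and the deploy-policy.json file are placeholders I've made up for this sketch; the policy would need to grant the usual upload permissions (listing the bucket and putting/deleting objects) for the sync to work.

# Create a deploy-only user (the name is a placeholder, not from my actual setup).
aws iam create-user --user-name blog-deploy

# Attach an inline policy scoped to the bucket; deploy-policy.json is a hypothetical file.
aws iam put-user-policy --user-name blog-deploy \
  --policy-name blog-deploy-s3 \
  --policy-document file://deploy-policy.json

# Generate the access key pair that the deployment will use later.
aws iam create-access-key --user-name blog-deploy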
The second part is allowing the files in S3 to be served over HTTP to the public. By default, files in S3 are not publicly available, for obvious reasons. I used the following policy to allow the files to be read. This is added through the bucket properties.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::jordanelver.co.uk/*"
    }
  ]
}
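I added this through the console, but if you prefer the command line, the equivalent should be a single put-bucket-policy call, assuming the JSON above is saved locally as bucket-policy.json:

# Apply the public-read policy to the bucket from a local file.
aws s3api put-bucket-policy --bucket jordanelver.co.uk --policy file://bucket-policy.json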
Deploying with Middleman
Moving away from Heroku meant losing the ability to deploy using git push, so I went looking for other methods. Luckily, Middleman has many extensions available to help with automating deployment. I decided on Middleman::S3Sync as it only transfers files that have changed, rather than transferring everything each time, which seems like a good thing.
I added gem 'middleman-s3_sync' to my Gemfile, ran bundle and added the following config to my config.rb file.
activate :s3_sync do |s3_sync|
  s3_sync.delete = true
  s3_sync.after_build = false
  s3_sync.prefer_gzip = true
  s3_sync.path_style = true
  s3_sync.reduced_redundancy_storage = false
  s3_sync.acl = "public-read"
  s3_sync.encryption = false
  s3_sync.version_bucket = false
  s3_sync.index_document = "index.html"
  s3_sync.error_document = "404.html"
end
The sync process needs to know the bucket name, the Amazon region, and the user credentials to upload the files. There are several supported methods and I chose to use a .s3_sync file.
---
bucket: jordanelver.co.uk
region: eu-west-1
aws_access_key_id: <ACCESS_KEY_ID>
aws_secret_access_key: <SECRET_ACCESS_KEY>
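If you'd rather not keep credentials in a file, my understanding is that middleman-s3_sync will also pick them up from the standard AWS environment variables; I haven't tried this myself, so treat it as an assumption:

# Untested alternative: supply the credentials via environment variables instead of .s3_sync.
export AWS_ACCESS_KEY_ID=<ACCESS_KEY_ID>
export AWS_SECRET_ACCESS_KEY=<SECRET_ACCESS_KEY>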
Deploying the site is now as simple as first building with middleman build and then syncing with middleman s3_sync.
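Those two commands are easy to wrap into a small deploy script so the whole thing becomes one step. Something like this would do; the deploy.sh name is just my choice and not part of Middleman:

#!/bin/sh
# Build the site, then sync the changed files to S3. Stop if the build fails.
set -e
middleman build
middleman s3_sync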
DNS changes
The final step is to hook up the domain name to the bucket. I use DNSimple
to host my DNS which makes the changes required simple. I setup an ALIAS
record to point from jordanelver.co.uk
to
jordanelver-co-uk.s3-website-eu-west-1.amazonaws.com
and the site is now
served at this domain name.
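A quick sanity check that the ALIAS record is resolving to the bucket is to look at the response headers; once DNS has propagated, the response should come back from S3:

# The Server header should report AmazonS3 when the site is being served from the bucket.
curl -I http://jordanelver.co.uk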
Conclusion
I'm very happy with this setup so far. Once you've set it up the first time, the process can easily be replicated for additional sites. I can see that I'll continue to use this method of hosting for my "holding page" and static site needs.