12 Feb 2025
Deploying Static Websites via S3: The Secure & Easy Way
Working in a startup environment usually involves deploying websites in some shape or form—whether that's a company website, a landing page for a new app, or a blog for content distribution. For SEO and performance reasons, most of these sites are ideally static, pre-generated HTML. I usually achieve this using Vue and Nuxt, but there are plenty of options.
Among the myriad of static hosting platforms, my favorite combination is AWS CloudFront and S3 for a number of reasons:
- It's cheap
- It's performant
- Availability is (almost) never an issue
Most importantly, since I use AWS for most of my other projects, code, and infrastructure, it keeps all of my software in one place. This alone is almost enough to make it worth it to me.
I recently started working on a new project, Siloed, and needed a landing page—which led me to refine some existing Terraform and hosting infrastructure. There are plenty of guides out there that detail how to deploy a static website to S3 and serve content via CloudFront, but almost all suffer from two major design flaws:
- They require a public S3 bucket—this is almost never a good idea if avoidable and introduces a number of security concerns.
- They involve complicated routing setups—S3 can't load a default index file for anything other than the root object; i.e., if you have a page at /contact, you need to request /contact/index.html, because /contact on its own is an S3 prefix, not a file.
You'll often see developers circumventing issue #2 by deploying a Lambda@Edge function that remaps the URL. AWS itself seems to have advocated for this—there is an AWS blog on the subject here. However, this involves deploying a Lambda function, which significantly complicates the entire setup.
In this article, I'm going to solve both of these problems. The result is a private S3 bucket serving content via an AWS CloudFront Origin Access Identity (OAI), with a CloudFront function to fix the loading of index.html files. All of this is bundled neatly into a single, reusable Terraform module, which can be used in the following manner:
module "website" {
  source = "./modules/aws-static-site"
  name   = "example-site"

  dns_config = {
    hosted_zone_name = "example.com"
    subdomain        = "blog.example.com"
  }
}
Website Content
Before you get started, make sure you have generated your static HTML content so that you can upload it to your S3 bucket once all the resources are created. This is beyond the scope of this article, so I won’t cover it here. However, if you are looking for a stack to generate your static websites, I would recommend Vue and Nuxt – I use this combination for almost all my static sites, and it works great.
If you do end up using Nuxt, make sure you read the sections on deploying static content here. You want to ensure you do this correctly and don’t accidentally build and deploy your site as an SPA, as this will be detrimental to your SEO. You’ll also want to have the nuxtjs/seo module installed and configured, as it handles a number of SEO-related tasks automatically.
No matter what you use, have your static content at hand so that you can upload it to the S3 bucket at the end.
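If you want a quick sanity check that your generated output has the layout this setup expects (an index.html at the root, and one inside every directory that represents a route), here's a minimal Python sketch. Note that check_static_output is a hypothetical helper written for this article, not part of any framework:

```python
import os

def check_static_output(root: str) -> list[str]:
    """Flag layout problems in a generated static-site directory.

    The CloudFront setup below expects an index.html at the bucket root
    and an index.html inside every directory that represents a route.
    """
    problems = []
    if not os.path.isfile(os.path.join(root, "index.html")):
        problems.append("missing root index.html")
    for dirpath, _dirnames, filenames in os.walk(root):
        if dirpath == root:
            continue
        # A directory holding HTML but no index.html won't resolve as /path
        if any(f.endswith(".html") for f in filenames) and "index.html" not in filenames:
            problems.append(f"{os.path.relpath(dirpath, root)}: HTML present but no index.html")
    return problems
```

Run it against your build output directory; an empty list means the structure matches what the rewrite function expects.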
DNS Config
In addition to setting up the S3 bucket and CloudFront distribution, the Terraform module is going to configure ACM certificates and Route 53 DNS records to serve the website on a provided domain or subdomain. This requires a publicly accessible Route 53 hosted zone that corresponds to the domain or subdomain you want to use.
For example, if you plan to deploy your site at foo.example.com, you need to ensure that you have a public Route 53 hosted zone for the apex domain (example.com) or for the delegated subdomain (foo.example.com). I'm not going to create these resources in this article, so make sure you already have them configured.
If you don’t manage your DNS via AWS, you can simply omit any of the Route 53 resources and configure the DNS yourself separately.
Terraform - Providers
The first thing to define for the module is the required Terraform providers. One somewhat annoying caveat when using AWS CloudFront in this manner is that the ACM certificate required to serve the site on a domain needs to be in us-east-1, which is typically not the region in which I'm working.
Practically, this means that two AWS providers need to be configured: one for us-east-1 to handle the creation of the certificate, and another for whichever region you are working in. This can be done by defining a configuration alias within the module. In this case, I'm defining an alias called aws.acm, which will need to be passed down to the module when it's invoked. More on that later. For now, define the following in a versions.tf file.
# versions.tf
terraform {
  required_providers {
    aws = {
      version = "~> 5.0"
      configuration_aliases = [
        aws.acm
      ]
    }
  }
}
Note that if your default region is already us-east-1, you can omit the configuration alias.
Terraform - Variables
Since I’m packaging everything into a reusable Terraform module, I’m going to define a variables.tf file with a couple of input variables.
# variables.tf
variable "name" {
  description = "The name of the static site. Used for naming select resources."
  type        = string
}

variable "dns_config" {
  description = "The DNS configuration for the static site."
  type = object({
    hosted_zone_name = string
    subdomain        = string
  })
}
The name variable will be used to name some of the AWS resources and should be unique. dns_config, meanwhile, will be used to set up the ACM certificate and Route 53 DNS records. Note that the dns_config.subdomain parameter requires the full subdomain (e.g., foo.example.com), not just foo.
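As an optional refinement (a sketch, assuming Terraform 1.3+ for the endswith function), a validation block on dns_config can catch the common mistake of passing a bare label, or a subdomain that doesn't belong to the hosted zone:

```hcl
# variables.tf (optional) — guard against a bare label like "foo"
variable "dns_config" {
  description = "The DNS configuration for the static site."
  type = object({
    hosted_zone_name = string
    subdomain        = string
  })

  validation {
    condition     = endswith(var.dns_config.subdomain, var.dns_config.hosted_zone_name)
    error_message = "subdomain must be fully qualified and end with hosted_zone_name."
  }
}
```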
Terraform - S3 Bucket Setup
Next is the S3 bucket. The configuration is pretty standard; nothing complicated about it.
# s3.tf
resource "aws_s3_bucket" "this" {
  bucket = var.dns_config.subdomain
}

resource "aws_s3_bucket_ownership_controls" "this" {
  bucket = aws_s3_bucket.this.id

  rule {
    object_ownership = "BucketOwnerEnforced"
  }
}

resource "aws_s3_bucket_public_access_block" "this" {
  bucket = aws_s3_bucket.this.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
As promised, the S3 bucket itself is private, with block_public_acls (and its sibling settings) set to true. I’m also setting BucketOwnerEnforced as the object ownership for an added layer of security. One thing to note is that I’m using the subdomain as the name for the S3 bucket. This is not required, but I find it a useful practice, as it makes it easy to locate the S3 bucket for your site—especially when you have multiple sites deployed in a single AWS account.
Terraform - CloudFront Setup
With the S3 bucket configured, the CloudFront resources can be defined. The following resources are required:
- A CloudFront function to handle loading of non-root index.html files.
- A CloudFront Origin Access Identity (OAI) to authenticate against the private S3 bucket.
- IAM policies to grant the OAI access to the S3 bucket.
- A CloudFront distribution to serve content.
The first resource to configure is the CloudFront function. If you need a primer on CloudFront functions and what they do, see the official AWS docs. In this case, I’m using the cloudfront-js-1.0 runtime to run a JavaScript snippet with each request made to the CloudFront distribution. The snippet intercepts the request and executes the following logic:
- If the request is for the root ("/"), leave the request unchanged.
- If the URI contains a dot, assume the request is for a specific file and leave it unchanged.
- Otherwise, append index.html to the end of the URI.
This ensures that when a request is made to /{path}, CloudFront modifies the URI to /{path}/index.html before requesting the content from the S3 bucket, so CloudFront retrieves the actual HTML file rather than attempting to list the S3 prefix. All other URI patterns remain unmodified.
The Terraform for the CloudFront function is given below.
# cloudfront.tf
resource "aws_cloudfront_function" "append_index_html" {
  name    = "${var.name}-append-index-html"
  runtime = "cloudfront-js-1.0"
  publish = true
  code    = <<-EOF
    function handler(event) {
      var request = event.request;
      var uri = request.uri;

      // If the request is for the root ("/"), do nothing
      if (uri === "/") {
        return request;
      }

      // If the URI contains a dot, assume it's a file and do nothing
      if (uri.includes(".")) {
        return request;
      }

      // If the URI doesn't end with "/", add it
      if (!uri.endsWith("/")) {
        uri += "/";
      }

      // Append index.html
      request.uri = uri + "index.html";
      return request;
    }
  EOF
}
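Because CloudFront functions are plain JavaScript, the rewrite logic can be sanity-checked locally in Node before deploying. The sketch below duplicates the handler body from the snippet above and runs it over a few representative URIs:

```javascript
// Local copy of the CloudFront function's handler, for testing only.
function handler(event) {
  var request = event.request;
  var uri = request.uri;

  if (uri === "/") return request;          // root: unchanged
  if (uri.includes(".")) return request;    // looks like a file: unchanged
  if (!uri.endsWith("/")) uri += "/";       // normalize trailing slash

  request.uri = uri + "index.html";         // map prefix -> index document
  return request;
}

// Exercise a few representative URIs.
const cases = ["/", "/contact", "/contact/", "/assets/app.js", "/blog/post-1"];
for (const uri of cases) {
  console.log(uri, "->", handler({ request: { uri } }).uri);
}
```

Running this prints /contact and /contact/ both mapping to /contact/index.html, while the root and asset paths pass through untouched.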
With the function in place, it’s time to define the rest of the CloudFront resources. (Ignore the reference to the ACM certificate within the viewer_certificate block for now; this will be defined shortly.)
# cloudfront.tf
resource "aws_cloudfront_origin_access_identity" "oai" {}

data "aws_iam_policy_document" "this" {
  statement {
    actions   = ["s3:GetObject"]
    resources = ["${aws_s3_bucket.this.arn}/*"]

    principals {
      type        = "CanonicalUser"
      identifiers = [aws_cloudfront_origin_access_identity.oai.s3_canonical_user_id]
    }
  }
}

resource "aws_s3_bucket_policy" "this" {
  bucket = aws_s3_bucket.this.id
  policy = data.aws_iam_policy_document.this.json
}

resource "aws_cloudfront_distribution" "this" {
  enabled             = true
  default_root_object = "index.html"
  aliases             = [var.dns_config.subdomain]

  viewer_certificate {
    acm_certificate_arn = aws_acm_certificate.main.arn
    ssl_support_method  = "sni-only"
  }

  origin {
    domain_name = aws_s3_bucket.this.bucket_regional_domain_name
    origin_id   = "nuxtS3Origin"

    s3_origin_config {
      origin_access_identity = aws_cloudfront_origin_access_identity.oai.cloudfront_access_identity_path
    }
  }

  default_cache_behavior {
    target_origin_id       = "nuxtS3Origin"
    viewer_protocol_policy = "redirect-to-https"
    allowed_methods        = ["GET", "HEAD", "OPTIONS"]
    cached_methods         = ["GET", "HEAD", "OPTIONS"]

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }

    default_ttl = 3600
    max_ttl     = 86400
    min_ttl     = 0

    function_association {
      event_type   = "viewer-request"
      function_arn = aws_cloudfront_function.append_index_html.arn
    }
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }
}
A couple of things to note here:
- The only principal given access to the S3 bucket is the OAI. This is intentional and ensures that nothing but the OAI has access to the bucket.
- The function_association block within the default_cache_behavior block links the previously defined CloudFront function to the distribution.
- The subdomain is added as an alias to the distribution. This is required to associate a domain with the distribution so that it can be accessed on your subdomain.
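One optional refinement, not part of the module above: mapping missing objects to a custom error page. Because the OAI is only granted s3:GetObject (and not s3:ListBucket), S3 answers requests for missing keys with a 403 rather than a 404, so 403 is the status code to intercept. Assuming your generator emits a 404.html at the root, a sketch:

```hcl
# Optional: add inside the aws_cloudfront_distribution "this" block
custom_error_response {
  error_code         = 403   # S3 returns 403 for missing keys without ListBucket
  response_code      = 404
  response_page_path = "/404.html"
}
```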
Terraform - Route53 Setup
Finally, the ACM certificate and DNS records need to be created to route traffic from the configured domain/subdomain to the CloudFront distribution. This will involve creating the following resources:
- An ACM certificate linked to the specified subdomain.
- CNAME records for certificate validation.
- An A record for routing traffic from the specified subdomain to the CloudFront distribution.
The Terraform configuration looks like the following:
# route53.tf
data "aws_route53_zone" "main" {
  name = var.dns_config.hosted_zone_name
}

resource "aws_acm_certificate" "main" {
  provider          = aws.acm
  domain_name       = var.dns_config.subdomain
  validation_method = "DNS"

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_route53_record" "cname-record" {
  for_each = {
    for dvo in aws_acm_certificate.main.domain_validation_options : dvo.domain_name => {
      name   = dvo.resource_record_name
      record = dvo.resource_record_value
      type   = dvo.resource_record_type
    }
  }

  allow_overwrite = true
  name            = each.value.name
  records         = [each.value.record]
  ttl             = 60
  type            = each.value.type
  zone_id         = data.aws_route53_zone.main.zone_id
}

resource "aws_acm_certificate_validation" "main" {
  provider                = aws.acm
  certificate_arn         = aws_acm_certificate.main.arn
  validation_record_fqdns = [for record in aws_route53_record.cname-record : record.fqdn]
}

resource "aws_route53_record" "a-record" {
  zone_id = data.aws_route53_zone.main.zone_id
  name    = var.dns_config.subdomain
  type    = "A"

  alias {
    name                   = aws_cloudfront_distribution.this.domain_name
    zone_id                = aws_cloudfront_distribution.this.hosted_zone_id
    evaluate_target_health = true
  }
}
Packaging Everything Up
With all the Terraform files in place, all that’s left to do is arrange the files into a module.
├── main.tf
├── modules
│   └── aws-static-site
│       ├── cloudfront.tf
│       ├── route53.tf
│       ├── s3.tf
│       ├── variables.tf
│       └── versions.tf
└── providers.tf
The providers.tf file will contain the provider definitions, including the alias provider defined earlier for us-east-1. It looks something like the following:
# providers.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }

  required_version = ">= 1.7.1"
}

provider "aws" {
  region = "eu-west-2"
}

# only required if your default region is NOT us-east-1
provider "aws" {
  region = "us-east-1"
  alias  = "acm"
}
Finally, the main.tf file contains the module invocation. Note that hosted_zone_name is the apex domain's zone, while subdomain is the full subdomain the site will be served on.

# main.tf
module "website" {
  source = "./modules/aws-static-site"
  name   = "example-site"

  dns_config = {
    hosted_zone_name = "example.com"
    subdomain        = "foo.example.com"
  }

  # only required if your default region is NOT us-east-1
  providers = {
    aws     = aws
    aws.acm = aws.acm
  }
}
Uploading Static HTML Content
Once all the resources are in place, the static HTML content can be pushed to the S3 bucket to be served via AWS CloudFront. The specific folder structure you use will depend on the framework you are using to generate your content. However, the index.html file that serves the root of your application must be at the root level of the bucket. If you’re using Nuxt, the contents of the .output/public folder generated by running nuxt generate are what needs to be uploaded to the S3 bucket.
The AWS CLI can be used to push the files using:
$ aws s3 cp {output-dir} s3://{bucket-name} --recursive
where output-dir is the path to the directory containing the index.html file and bucket-name refers to the S3 bucket created earlier. Once the files have been uploaded to S3, you should be able to access the site at the specified subdomain.
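One thing worth knowing about that upload step: the AWS CLI guesses each object's Content-Type from its file extension (the CLI is implemented in Python and draws on the standard mimetypes tables), and a wrong or missing type can cause browsers to download pages instead of rendering them. A quick way to preview the guesses locally:

```python
import mimetypes

# Preview the Content-Type likely to be assigned to each uploaded file.
# Files with no recognized extension fall back to S3's binary/octet-stream.
for name in ["index.html", "styles.css", "app.js", "logo.svg", "data.bin"]:
    ctype, _encoding = mimetypes.guess_type(name)
    print(f"{name}: {ctype or 'binary/octet-stream (S3 default)'}")
```

For repeat deploys, aws s3 sync with --delete keeps the bucket matching your build output, and aws cloudfront create-invalidation --distribution-id <id> --paths "/*" clears stale cached copies from the distribution.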
Final Steps
That's it for today, folks. If you enjoyed this article, consider following us on LinkedIn for updates on future content and what we're working on, or send me a connection request directly. Don't forget to check out the rest of our content, and stay tuned for weekly blog posts on tech and business.