AWS CloudFront Deep Dive

Introduction

For the intro, I’m citing AWS documentation and elaborating on the benefits of using a CDN. If you already know what a CDN is, what its benefits are, and how it works, you can skip this section.

What is a CDN?

https://aws.amazon.com/what-is/cdn/#seo-faq-pairs#what-is-a-cdn

A content delivery network (CDN) is a network of interconnected servers that speeds up webpage loading for data-heavy applications. CDN can stand for content delivery network or content distribution network. When a user visits a website, data from that website’s server has to travel across the internet to reach the user’s computer. If the user is located far from that server, it will take a long time to load a large file, such as a video or website image. Instead, the website content is stored on CDN servers geographically closer to the users and reaches their computers much faster.

What are the benefits of using a CDN?

https://aws.amazon.com/what-is/cdn/#seo-faq-pairs#what-are-the-benefits-of-cdns

Content delivery networks (CDNs) provide many benefits that improve website performance and support core network infrastructure. For example, a CDN can do the following tasks:

Reduce page load time: Website traffic can decrease if your page load times are too slow. A CDN can reduce bounce rates and increase the time users spend on your site.

Reduce bandwidth costs: Bandwidth costs are a significant expense because every incoming website request consumes network bandwidth. Through caching and other optimizations, CDNs can reduce the amount of data an origin server must provide, reducing the costs of hosting for website owners.

Increase content availability: Too many visitors at one time or network hardware failures can cause a website to crash. CDN services can handle more web traffic and reduce the load on web servers. Also, if one or more CDN servers go offline, other operational servers can replace them to ensure uninterrupted service.

Improve website security: Distributed denial-of-service (DDoS) attacks attempt to take down applications by sending large amounts of fake traffic to the website. CDNs can handle such traffic spikes by distributing the load between several intermediary servers, reducing the impact on the origin server.

To elaborate further on these benefits:

  1. Reduce page load time - CDNs significantly improve page load times by caching content at edge servers closer to the end user and reducing latency through optimized network paths. However, if your use case doesn’t involve caching (e.g., dynamic content) or your visitors are geographically close to your origin servers, the benefits may be less pronounced, though routing optimizations can still provide some improvement.
  2. Reduce bandwidth costs - CDNs can significantly lower bandwidth costs by reducing the amount of data served directly from the origin server. Egress costs are generally lower when using a CDN compared to standard cloud egress charges (e.g., AWS and Azure offer reduced per-GB rates for CDN traffic). For organizations with high traffic volumes, this can result in considerable cost savings.
  3. Increase content availability - By caching content, a CDN provides a fallback mechanism, allowing it to serve static or cacheable content even if the origin server encounters disruptions. This helps ensure uninterrupted service during server outages or traffic spikes. However, the benefit depends on the type of content being served, as dynamic content typically requires communication with the origin server.
  4. Improve website security - Most CDNs provide a baseline tier of DDoS protection by default. For example, AWS includes Shield Standard at no additional cost (with Shield Advanced available as a paid upgrade), while other providers, such as Cloudflare, offer free protection tiers along with paid options like Spectrum and Magic Transit.

How does a CDN work?

https://aws.amazon.com/what-is/cdn/#seo-faq-pairs#how-does-a-cdn-work

Content delivery networks (CDNs) work by establishing a point of presence (POP) or a group of CDN edge servers at multiple geographical locations. This geographically distributed network works on the principles of caching, dynamic acceleration, and edge logic computations.

Caching

Caching is the process of storing multiple copies of the same data for faster data access. In computing, the principle of caching applies to all types of memory and storage management. In CDN technology, the term refers to the process of storing static website content on multiple servers in the network. Caching in CDN works as follows:

  1. A geographically remote website visitor makes the first request for static web content from your site.
  2. The request reaches your web application server or origin server. The origin server sends the response to the remote visitor. At the same time, it also sends a copy of the response to the CDN POP geographically closest to that visitor.
  3. The CDN POP server stores the copy as a cached file.
  4. The next time this visitor, or any other visitor in that location, makes the same request, the caching server, not the origin server, sends the response.
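The caching steps above can be sketched as a minimal in-memory cache keyed by request path. This is a simplified model with hypothetical names (no TTLs, Cache-Control handling, or multi-POP behavior), not CloudFront's implementation:

```python
# Minimal model of CDN POP caching (simplified: no TTLs or cache-control).
def fetch_from_origin(path):
    # Stand-in for a request to the origin server.
    return f"response for {path}"

cache = {}  # POP-local cache: path -> stored response

def handle_request(path):
    if path in cache:
        return cache[path], "Hit from cloudfront"
    response = fetch_from_origin(path)   # first request reaches the origin
    cache[path] = response               # the POP stores a copy
    return response, "Miss from cloudfront"

body, status = handle_request("/logo.png")    # first visitor: cache miss
body2, status2 = handle_request("/logo.png")  # same request again: cache hit
```

The second call never touches the origin, which is exactly the behavior step 4 describes.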

Dynamic acceleration

Dynamic acceleration is the reduction in server response time for dynamic web content requests because of an intermediary CDN server between the web applications and the client. Caching doesn’t work well with dynamic web content because the content can change with every user request. CDN servers have to reconnect with the origin server for every dynamic request, but they accelerate the process by optimizing the connection between themselves and the origin servers.

If the client sends a dynamic request directly to the web server over the internet, the request might get lost or delayed due to network latency. Time might also be spent opening and closing the connection for security verification. On the other hand, if the nearby CDN server forwards the request to the origin server, they would already have an ongoing, trusted connection established. For example, the following features could further optimize the connection between them:

  • Intelligent routing algorithms
  • Geographic proximity to the origin
  • The ability to process the client request, which reduces its size

Edge logic computations

You can program the CDN edge server to perform logical computations that simplify communication between the client and server. For example, this server can do the following:

  • Inspect user requests and modify caching behavior.
  • Validate and handle incorrect user requests.
  • Modify or optimize content before responding.

Distribution of application logic between the web servers and the network edge helps developers offload origin servers’ compute requirements and improve website performance.

AWS CloudFront HTTP Request Analysis

To better understand what happens in the background when making an HTTP GET request to a CDN, I created the following setup:

  1. I deployed an EC2 server configured to respond with a simple HTTP response:
    HTTP/1.1 200 OK
  2. I created a CloudFront CDN distribution and configured it to forward all incoming HTTP requests, including headers, directly to the EC2 server as the origin.

This setup allowed me to analyze the interactions between the client, CDN, and origin server in detail.

IMPORTANT NOTE!

This flow explains how the first request to a CDN looks when caching is not yet in effect. For subsequent requests (assuming caching is enabled), steps 2 and 3 will typically be skipped. Instead, the cached response will be returned directly by the CDN, and the X-Cache response header will show Hit from cloudfront. However, if the cache is explicitly invalidated (e.g., via manual CloudFront invalidations), the CDN may still contact the origin server to fetch updated content.

The flow of an HTTP request through the CDN can be summarized in four steps:

  1. Client sends HTTP GET request to CDN
  2. CDN forwards client request to Origin
  3. Origin responds to CDN
  4. CDN responds to Client

Here’s a visualization of the request flow:

Client                 CDN                     Origin
   |                     |                        |
   |---- HTTP GET -----> |                        |
   |                     |----- HTTP GET -------> |
   |                     | <--- HTTP Response ----|
   | <--- HTTP Response--|                        |

To verify this request flow, I sent an HTTP request to the CDN using netcat and then inspected how the request was forwarded to the origin server.

Request sent to the CDN:

nc d123456789.cloudfront.net 80
GET / HTTP/1.1
Host: d123456789.cloudfront.net

Request as it arrived at the origin (as forwarded by the CDN):

GET / HTTP/1.1
Host: d123456789.cloudfront.net
Connection: keep-alive
X-Amz-Cf-Id: 1ASasdasdasdasd_7NCxadsadassddaepypcsGa83VlQPzum5w==
Via: 1.1 asd12d1dsadasdasdsadasd.cloudfront.net (CloudFront)
X-Forwarded-For: 1.2.3.4
user-agent: Amazon CloudFront

This means the CDN added the following headers before forwarding the request to the origin:

  • Connection

    • When set to keep-alive, the CDN keeps a persistent TCP connection to the origin server, reusing it for multiple requests. This eliminates the need for repeated handshakes, reducing latency and speeding up interactions after the initial request.
    • The keep-alive timeout can be customized in the CDN → Origin → Additional Settings menu, but only for custom origins (e.g., a non-AWS domain or server).
    • For AWS-managed origins like S3 or API Gateway, the keep-alive settings cannot be modified.
  • X-Amz-Cf-Id

    • A unique identifier for each request, useful for debugging with AWS Support.
  • Via

    • Includes the HTTP version and the internal DNS name of the CloudFront node in the format http-version alphanumeric-string.cloudfront.net (CloudFront), where http-version indicates the HTTP protocol version used, and alphanumeric-string is the internal DNS identifier for the node. More general information about the Via header can be found in RFC7230 Section 5.7.1.
  • X-Forwarded-For

    • This header is better explained in the AWS documentation. Note that if a client uses a proxy before accessing the CDN, multiple IPs may appear in this header: the original client IP appears first, and the address of the proxy that connected to CloudFront is appended last.
  • user-agent

    • If no user-agent header is specified in the request, CloudFront sets it to Amazon CloudFront.

I received the following response from the CDN:

HTTP/1.1 200 OK
Connection: keep-alive
date: Sun, 17 Nov 2024 20:07:09 GMT
X-Cache: Miss from cloudfront
Via: 1.1 asd12d1dsadasdasdsadasd.cloudfront.net (CloudFront)
X-Amz-Cf-Pop: VIE50-P2
X-Amz-Cf-Id: 1ASasdasdasdasd_7NCxadsadassddaepypcsGa83VlQPzum5w==

This means the CDN added the following headers before responding to the client:

  • Connection

    • The Connection header is set to keep-alive, allowing the same TCP connection between the client and the CDN to be reused for multiple requests. This is distinct from the keep-alive connection between the CDN and the origin server, which operates independently.
    • To verify this behavior
      • Use tcpdump or similar packet capture tools on the client side. If the connection is set to keep-alive, you’ll see only one TCP handshake (SYN, SYN-ACK, ACK) for multiple HTTP requests.
      • Alternatively, check active connections using netstat and observe that the same TCP connection remains open between the client and the CDN.
    • If the Connection header is set to close, a new TCP handshake will occur for every request, resulting in separate connections each time.
  • date

    • The Date header shows the time the server generates the response, not when the client receives it. If there is high network latency or the response body is large, the time shown in the Date header will be earlier than the client’s clock because of the time it takes for the response to travel to the client.
  • X-Cache

    • Indicates the cache status of the request. Common values include Hit, RefreshHit, Miss, LimitExceeded, CapacityExceeded, Error, and Redirect. For detailed explanations of these values, refer to the Amazon CloudFront real-time logs documentation. The x-edge-result-type log field in real-time logs provides a detailed breakdown of each cache action.
  • Via

    • Contains the same value as received from the origin, including the HTTP version and the CloudFront node’s internal DNS name.
  • X-Amz-Cf-Pop

    • The edge location that served the request is identified by a three-letter code and a number (e.g., DFW3). The code usually matches the International Air Transport Association (IATA) airport code for a nearby airport, though these codes might change in the future. For more details, check out the Amazon CloudFront real-time logs documentation under the x-edge-location log field.

    • It’s worth noting that the edge location may not always correspond to the nearest geographic region, as factors like load balancing, traffic patterns, or outages can influence the selection.
  • X-Amz-Cf-Id

    • Contains the same unique identifier as received by the origin, used for request tracing and debugging with AWS Support.
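Returning to the Connection header above: keep-alive reuse can also be demonstrated from Python’s standard library, without packet capture. This sketch starts a throwaway local HTTP/1.1 server as a stand-in for the CDN endpoint and shows that two requests on one HTTPConnection share a single TCP socket:

```python
import http.server
import threading
from http import client

class Handler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # HTTP/1.1 defaults to keep-alive
    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):  # silence per-request logging
        pass

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/")
r1 = conn.getresponse(); r1.read()
sock_after_first = conn.sock        # socket stays open: connection kept alive
conn.request("GET", "/")
r2 = conn.getresponse(); r2.read()
sock_after_second = conn.sock
reused = sock_after_first is sock_after_second  # one TCP handshake, two requests
conn.close()
server.shutdown()
```

With Connection: close, `conn.sock` would be torn down after the first response and a new handshake would occur for the second request.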

CloudFront Edge Locations

https://aws.amazon.com/cloudfront/features

Amazon CloudFront peers with thousands of Tier 1/2/3 telecom carriers globally, is well connected with all major access networks for optimal performance, and has hundreds of terabits of deployed capacity. CloudFront edge locations are seamlessly connected to AWS Regions through the fully redundant AWS network backbone. This backbone is comprised of multiple 400GbE parallel fibers across the globe and interfaces with tens of thousands of networks for improved origin fetches and dynamic content acceleration.

Amazon CloudFront has three types of infrastructure to securely deliver content with high performance to end users:

  • CloudFront Regional Edge Caches (RECs) are situated within AWS Regions, between your applications’ web server and CloudFront Points of Presence (POPs) and embedded Points of Presence. CloudFront has 13 RECs globally.
  • CloudFront Points of Presence are situated within the AWS network and peer with internet service provider (ISP) networks. CloudFront has 600+ POPs in 100+ cities across 50+ countries.
  • CloudFront embedded Points of Presence are situated within internet service provider (ISP) networks, closest to end viewers. In addition to CloudFront POPs, there are 600+ embedded POPs across 200+ cities in North America, Europe, and Asia.

In the HTTP request flow diagram in the AWS CloudFront HTTP Request Analysis section, I grouped the edge cache and regional cache under the “CDN” for simplicity. However, the actual flow looks more like this:

  1. The Edge Cache closest to the client handles the initial request.
  2. If the object is not in the edge cache, the request is forwarded to the Regional Edge Cache (introduced in 2016), which reduces the need to contact the origin directly for every request.
  3. If the object is not in the regional cache, the request is forwarded to the Origin Server.
  4. The response follows the same path back to the client, with each cache layer storing the object for future requests.
Client                 CDN Edge            CDN Regional Cache          Origin
   |                     |                        |                       |
   |-------------------> |                        |                       |
   |                     |----------------------> |                       |
   |                     |                        |---------------------> |
   |                     |                        | <---------------------|
   | <-------------------| <----------------------|                       |
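The tiered lookup can be modeled as two caches consulted in order, with each layer storing the object on the way back. This is a simplified sketch with hypothetical names; it ignores TTLs and the exception cases:

```python
edge_cache, regional_cache = {}, {}

def origin_fetch(path):
    # Stand-in for a request to the origin server.
    return f"origin response for {path}"

def handle(path):
    # 1. Edge cache closest to the client
    if path in edge_cache:
        return edge_cache[path], "edge hit"
    # 2. Regional edge cache; the edge stores the object on the way back
    if path in regional_cache:
        edge_cache[path] = regional_cache[path]
        return edge_cache[path], "regional hit"
    # 3. Origin; both layers store the object on the way back
    response = origin_fetch(path)
    regional_cache[path] = response
    edge_cache[path] = response
    return response, "miss"
```

If an edge cache later evicts an object, the next request for it can still be served from the regional cache without touching the origin.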

This flow applies unless one of the following conditions is met:

  • Proxy HTTP methods (PUT, POST, PATCH, OPTIONS, and DELETE) go directly to the origin from the POPs and do not proxy through the regional edge caches.
  • Dynamic requests, as determined at request time, do not flow through regional edge caches, but go directly to the origin.
  • When the origin is an Amazon S3 bucket and the request’s optimal regional edge cache is in the same AWS Region as the S3 bucket, the POP skips the regional edge cache and goes directly to the S3 bucket.

You can confirm this behavior by monitoring network traffic at your origin (using tools like VPC Flow Logs or tcpdump) and comparing the incoming request’s IP address against the list of CloudFront IP subnets available here.

  • If the IP address belongs to the CLOUDFRONT_GLOBAL_IP_LIST subnets, it indicates the request is coming from a regional cache.
  • If the IP address is part of the CLOUDFRONT_REGIONAL_EDGE_IP_LIST subnets, the request is coming directly from an edge location, which happens if one of the exceptions to the standard flow (listed above) applies.

For a full list of all available CloudFront Edge Locations, refer to this resource.

Identifying and Targeting Specific CloudFront Edge Locations

This section is the main reason I wanted to write this article, although it turned into a deep dive into CloudFront along the way. While much of the information here can be found in AWS documentation, blog posts, or videos, I wanted to consolidate it and share the practical insights I gathered.

I was trying to figure out a problem that began when an hourly AWS Lambda function started failing intermittently. After some investigation, I discovered that the failures occurred when the Lambda made requests to a specific CloudFront edge location. To debug and reproduce the issue, I needed a way to consistently target that particular edge location. This section explains the process I used to identify the IP addresses of specific edge locations and route requests to them.

I created a script to identify which CloudFront edge location corresponds to specific IP addresses. The script works by sending requests to IPs from the CLOUDFRONT_GLOBAL_IP_LIST and reading the x-amz-cf-pop response header. This header contains the edge location code (e.g., FRA6-C1 for a location in Frankfurt, Germany). The script then maps each IP address to the value returned by the x-amz-cf-pop header.

How the Script Works

  1. The script sends a request to the first two IP addresses in each subnet listed in the CLOUDFRONT_GLOBAL_IP_LIST.
    1. Querying all IPs (12.9 million) would be impractical, so this approach drastically reduces processing time while maintaining accuracy.
  2. For each successful response, it extracts the x-amz-cf-pop value from the response headers.
  3. It maps the IP address to the returned x-amz-cf-pop value, creating a list of IP-to-edge-location mappings.

Here’s the script I used: https://gist.github.com/Gonii/a00086cf5d1889c114e0668e2d344803
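As a minimal offline sketch of the enumeration step, Python’s ipaddress module can yield the first two usable addresses of each subnet. The HTTP probe is left as a stub here because it requires network access, and the sample subnets are illustrative:

```python
import ipaddress

def first_two_hosts(cidr):
    """Return the first two usable host addresses of a subnet as strings."""
    hosts = ipaddress.ip_network(cidr).hosts()
    return [str(next(hosts)) for _ in range(2)]

def probe_pop(ip):
    # Stub: the real script sends an HTTP request to `ip` with the
    # distribution's Host header and returns the x-amz-cf-pop response header.
    raise NotImplementedError

# Illustrative entries; the real list comes from CLOUDFRONT_GLOBAL_IP_LIST
subnets = ["99.86.0.0/16", "54.230.200.0/21"]
targets = [ip for cidr in subnets for ip in first_two_hosts(cidr)]
```

Probing two addresses per subnet assumes a whole subnet maps to one POP, which the paired `.1`/`.2` results in the output below are consistent with.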

Below is the relevant output generated by the script. I’ve skipped errors and timeouts because error handling wasn’t implemented:

IP: 205.251.249.1, x-amz-cf-pop: MRS52-C2
IP: 205.251.249.2, x-amz-cf-pop: MRS52-C2
IP: 18.160.0.1, x-amz-cf-pop: IAD12-P3
IP: 18.160.0.2, x-amz-cf-pop: IAD12-P3
IP: 54.192.0.1, x-amz-cf-pop: MRS52-C2
IP: 54.192.0.2, x-amz-cf-pop: MRS52-C2
IP: 54.230.200.1, x-amz-cf-pop: MSP50-C2
IP: 54.230.200.2, x-amz-cf-pop: MSP50-C2
IP: 108.156.0.1, x-amz-cf-pop: MXP63-P4
IP: 108.156.0.2, x-amz-cf-pop: MXP63-P4
IP: 99.86.0.1, x-amz-cf-pop: FRA6-C1
IP: 99.86.0.2, x-amz-cf-pop: FRA6-C1
IP: 13.224.0.1, x-amz-cf-pop: SEA19-C2
IP: 13.224.0.2, x-amz-cf-pop: SEA19-C2
IP: 18.238.0.1, x-amz-cf-pop: PHL51-P1
IP: 18.238.0.2, x-amz-cf-pop: PHL51-P1
IP: 18.244.0.1, x-amz-cf-pop: BUD50-P2
IP: 18.244.0.2, x-amz-cf-pop: BUD50-P2
IP: 65.9.128.1, x-amz-cf-pop: QRO50-C1
IP: 65.9.128.2, x-amz-cf-pop: QRO50-C1
IP: 205.251.206.1, x-amz-cf-pop: CDG52-P1
IP: 205.251.206.2, x-amz-cf-pop: CDG52-P1
IP: 54.230.208.1, x-amz-cf-pop: ORD53-C2
IP: 54.230.208.2, x-amz-cf-pop: ORD53-C2
IP: 3.160.0.1, x-amz-cf-pop: CMH68-P4
IP: 3.160.0.2, x-amz-cf-pop: CMH68-P4
IP: 52.222.128.1, x-amz-cf-pop: FCO50-C2
IP: 52.222.128.2, x-amz-cf-pop: FCO50-C2
IP: 18.164.0.1, x-amz-cf-pop: LIM50-P2
IP: 18.164.0.2, x-amz-cf-pop: LIM50-P2
IP: 54.230.224.1, x-amz-cf-pop: ATL56-C1
IP: 54.230.224.2, x-amz-cf-pop: ATL56-C1
IP: 18.172.0.1, x-amz-cf-pop: BKK50-P3
IP: 18.172.0.2, x-amz-cf-pop: BKK50-P3
IP: 3.164.64.1, x-amz-cf-pop: HEL51-P4
IP: 3.164.64.2, x-amz-cf-pop: HEL51-P4
IP: 18.154.0.1, x-amz-cf-pop: CGK51-P2
IP: 18.154.0.2, x-amz-cf-pop: CGK51-P2
IP: 54.230.0.1, x-amz-cf-pop: KIX56-C2
IP: 54.230.0.2, x-amz-cf-pop: KIX56-C2
IP: 143.204.0.1, x-amz-cf-pop: YTO50-C3
IP: 143.204.0.2, x-amz-cf-pop: YTO50-C3
IP: 3.164.0.1, x-amz-cf-pop: GRU1-P4
IP: 3.164.0.2, x-amz-cf-pop: GRU1-P4
IP: 54.182.0.1, x-amz-cf-pop: BOM52-C1
IP: 54.182.0.2, x-amz-cf-pop: BOM52-C1
IP: 54.239.192.1, x-amz-cf-pop: MUC50-P5
IP: 54.239.192.2, x-amz-cf-pop: MUC50-P5
IP: 18.64.0.1, x-amz-cf-pop: ICN57-P2
IP: 18.64.0.2, x-amz-cf-pop: ICN57-P2
IP: 99.84.0.1, x-amz-cf-pop: LHR62-C2
IP: 99.84.0.2, x-amz-cf-pop: LHR62-C2
IP: 204.246.164.1, x-amz-cf-pop: SIN2-C1
IP: 204.246.164.2, x-amz-cf-pop: SIN2-C1
IP: 13.35.0.1, x-amz-cf-pop: TPE52-C1
IP: 13.35.0.2, x-amz-cf-pop: TPE52-C1
IP: 3.164.128.1, x-amz-cf-pop: NRT12-P3
IP: 3.164.128.2, x-amz-cf-pop: NRT12-P3
IP: 65.8.0.1, x-amz-cf-pop: CCU50-C2
IP: 65.8.0.2, x-amz-cf-pop: CCU50-C2
IP: 108.138.0.1, x-amz-cf-pop: FRA56-P6
IP: 108.138.0.2, x-amz-cf-pop: FRA56-P6

This mapping shows the association between IP addresses and their respective edge locations. For example:

  • IP 99.86.0.1 corresponds to FRA6-C1 (an edge location in Frankfurt, Germany).
  • IP 54.230.200.1 corresponds to MSP50-C2 (an edge location in Minneapolis, United States).

Routing Requests to Specific Edge Locations

Once you have the IP address for a desired edge location, you can route requests to it in two ways:

Option 1: Modify Your Hosts File

To route requests through a specific edge location, add an entry in your system’s hosts file:

  • Linux/macOS: /etc/hosts
  • Windows: C:\Windows\System32\drivers\etc\hosts

For example, to route requests to the edge location FRA6-C1 in Frankfurt, Germany (99.86.0.1):

99.86.0.1 d123456789.cloudfront.net

This forces your system to resolve d123456789.cloudfront.net to the specified IP address.

Option 2: Test with cURL

You can test targeting an edge location without modifying your hosts file by using cURL with the --header option:

curl --header 'Host: d123456789.cloudfront.net' -I "http://99.86.0.1"

This command sends the request directly to the IP 99.86.0.1 while keeping the Host header set to the distribution’s domain. The request is still processed by CloudFront as usual, but it is handled by the edge location behind that IP, letting you test or debug behavior specific to that edge location.
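The same trick can be sketched in Python with urllib from the standard library. The IP and domain are the same placeholders as above; the sketch only builds the request, since actually sending it requires network access:

```python
import urllib.request

# Target the edge IP directly while presenting the distribution's hostname.
req = urllib.request.Request(
    "http://99.86.0.1/",
    headers={"Host": "d123456789.cloudfront.net"},
)
# urllib.request.urlopen(req) would send it; the x-amz-cf-pop response
# header would then identify the edge location that handled the request.
```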

CloudFront Behaviors

Surprisingly, there isn’t much AWS documentation available about behaviors.

Essentially, a behavior is a configuration in CloudFront that you can apply to specific HTTP paths. These paths can include:

  • The default path (automatically created when you set up a CDN).
  • A path prefix (e.g., /api/*).

When creating a behavior in CloudFront, you can configure various options based on your requirements:

  • Path Pattern - Specify the path or pattern this behavior applies to (e.g., /api/* or /assets/).
  • Origin or Origin Group - Choose the origin or origin group to forward requests to. Origin groups are particularly useful for failover scenarios.
  • Cache Policy - Define which headers, query parameters, and cookies to include in the cache key, and set the cache expiration period (from 0 to 100 years).
    • Use a managed cache policy if it fits your use case; it simplifies management.
    • More details on cache policies: AWS Documentation
  • Origin Request Policy - Specify which headers, query parameters, and cookies to forward to the origin.
    • Note: Headers, query parameters, and cookies included in the cache policy are automatically forwarded to the origin.
    • More details on origin request policies: AWS Documentation
  • Compression - See the next section, CloudFront Caching and Compression, for more details.
  • Viewer Protocol Policy - Specify whether to allow HTTP, HTTPS, or automatically redirect HTTP to HTTPS.
  • Allowed HTTP methods - Define the HTTP methods that are permitted for this behavior (e.g., GET, POST).
  • Restrict Viewer Access - I haven’t used this feature personally, but from my understanding, it’s used to restrict access to objects served by the CDN. This is done through signed URLs or cookies. You generate a private/public key pair, where the private key is used to sign requests. More details can be found in the AWS documentation on signed URLs and cookies
  • Response Header Policy - Configure response headers, such as CORS settings, or add/remove headers before sending responses to users.
    • Here, you can turn on the Server-Timing header, a feature I wasn’t aware of until writing this post. It provides detailed information about the CDN’s behavior, including the POP that handled the request, whether the request reached the origin, involvement of regional caches, and the timing of when the request was received by the POP. You can learn more about this feature here: AWS Documentation
  • Function Associations - Function Associations let you modify requests and responses at different stages, like after a request is received, before going to the origin, or before sending a response. You can adjust headers, add authorization, or run custom logic.

You can use legacy cache settings instead of defining a cache policy and an origin request policy. However, unlike the origin request policy and cache policy, legacy cache settings don’t allow you to add extra origin request headers that aren’t part of the cache policy. This is because all headers specified in the legacy cache settings are automatically forwarded as part of the origin request.

Another key consideration when configuring cache policies is to properly handle headers related to authorization or user-specific data. For example, if your frontend app is served via a CDN and caches responses, a user logging in might send credentials in the request headers. If your cache policy doesn’t account for these headers, the login response could be cached and served to another user accessing the same page, exposing sensitive information.
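To make the risk concrete, here is a deliberately broken toy cache with hypothetical names (not CloudFront’s implementation): when the Authorization header is not part of the cache key, the first user’s response is served to everyone.

```python
cache = {}

def render_for(token):
    # Stand-in for an origin response personalized to the caller.
    return f"account page for {token}"

def handle(path, headers, vary_on_auth):
    # Broken policy: vary_on_auth=False leaves Authorization out of the key.
    key = (path, headers.get("Authorization")) if vary_on_auth else (path,)
    if key not in cache:
        cache[key] = render_for(headers.get("Authorization", "anonymous"))
    return cache[key]

# Authorization ignored: Bob receives Alice's cached page
leak = handle("/account", {"Authorization": "alice-token"}, vary_on_auth=False)
leak2 = handle("/account", {"Authorization": "bob-token"}, vary_on_auth=False)

cache.clear()
# Authorization in the cache key: each user gets their own entry
ok = handle("/account", {"Authorization": "alice-token"}, vary_on_auth=True)
ok2 = handle("/account", {"Authorization": "bob-token"}, vary_on_auth=True)
```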

CloudFront Caching and Compression

Caching is one of the most critical aspects of a CDN. In CloudFront, what gets cached is determined by the criteria set in CloudFront behaviors, which include HTTP headers, query strings, and cookies.

What is a Cache Key?

According to the documentation, the term “cache key” refers to two things at the same time:

  1. The criteria used to form the cache key: This includes headers, query strings, and cookies defined in the cache policy.
  2. The unique identifier (the actual key): CloudFront computes this internally based on the criteria and uses it to locate cached responses.

For a deep dive into how CloudFront cache keys work, refer to the official documentation: Understanding the Cache Key.

Cache Key Computation and Lookup Process

This is how the cache key computation and lookup process works:

  1. A user sends a request to the CDN.
  2. CloudFront internally computes a cache key identifier based on the request and compares it to existing cache keys:
    • If the cache key identifier already exists, CloudFront responds with the cached result.
    • If the cache key identifier does not exist, CloudFront forwards the request to the origin, retrieves the response, and saves it in the cache.
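As an illustration, the lookup process above can be sketched with a hash over the criteria. The hashing scheme is purely illustrative; CloudFront’s internal key format is not documented:

```python
import hashlib

def cache_key(domain, path, headers=None, policy_headers=()):
    """Illustrative cache-key identifier: a hash of the domain, the path,
    and any headers a (hypothetical) cache policy includes."""
    headers = headers or {}
    parts = [domain, path] + [f"{h}={headers.get(h, '')}" for h in sorted(policy_headers)]
    return hashlib.sha256("|".join(parts).encode()).hexdigest()[:16]

k1 = cache_key("d123456789.cloudfront.net", "/")
k2 = cache_key("d123456789.cloudfront.net", "/")
k3 = cache_key("d123456789.cloudfront.net", "/some/random/path")
# k1 == k2: same criteria, same key -> cache hit
# k1 != k3: different path, new key -> cache miss, origin fetch
```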

How the Cache Key Works and Is Formed

By default, the cache key consists of the request path and the CDN’s domain name. This means caching will work even if no headers, query strings, or cookies are specified in the cache policy.

For example, consider the following request: curl http://d123456789.cloudfront.net/

CloudFront generates an internal cache key identifier (invisible to the user) based on the following criteria:

  • Domain name: d123456789.cloudfront.net
  • Path: /

Assume that the cache key identifier generated is d123456789-1. The next time a user makes the same request: curl http://d123456789.cloudfront.net/

CloudFront will serve the cached response (a cache hit) because the criteria match the d123456789-1 cache key.

Now, consider a different request: curl http://d123456789.cloudfront.net/some/random/path

In this case, CloudFront generates a new cache key identifier based on:

  • Domain name: d123456789.cloudfront.net
  • Path: /some/random/path

Assume this identifier is d123456789-2. If another user sends the same request, CloudFront will serve the cached response (a cache hit) using the d123456789-2 cache key.

The cache key configuration should align with your specific use case.

  • For example, if your CDN serves a frontend website with separate experiences for desktop and mobile users, you should include the User-Agent header in your cache key configuration. This ensures that responses for mobile devices are not cached and served to desktop users, and vice versa.

  • Similarly, if your website provides localized content based on a user’s language preference, you should include the Accept-Language header in your cache key configuration. This ensures that users receive the correct version of the website (e.g., English or French), instead of caching a single version for all users.

Another important feature worth mentioning is invalidations. You’ll need this when changes are made to your origin, such as fixing a bug, updating code, or replacing outdated content, but the old version remains cached in the CDN. Invalidations ensure that the CDN retrieves the updated files from the origin.

Invalidations remove cached files from all edge locations and regional edge caches. After a file is invalidated, the next request for it fetches the updated version from the origin, which is then cached again.

You can use wildcards in invalidations. For example, to invalidate everything under the /images/ path, you can specify /images/*.
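With boto3, issuing the /images/* invalidation above could look like the following sketch (the distribution ID is a placeholder, and the call itself requires AWS credentials):

```python
import time

# InvalidationBatch as the CloudFront API expects it; CallerReference must
# be unique per request, so a timestamp is a common choice.
batch = {
    "Paths": {"Quantity": 1, "Items": ["/images/*"]},
    "CallerReference": str(time.time()),
}

def create_invalidation(distribution_id, batch):
    import boto3  # imported here so the sketch parses without the SDK installed
    client = boto3.client("cloudfront")
    return client.create_invalidation(
        DistributionId=distribution_id, InvalidationBatch=batch
    )

# create_invalidation("E123EXAMPLE", batch)  # placeholder ID; needs credentials
```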

CloudFront includes 1,000 free invalidation paths per month. Beyond this limit, each additional path costs $0.005.

For more information, refer to the AWS documentation on invalidations.

Compression

Before enabling compression in CloudFront, several requirements must be met:

  • Accept-Encoding header is required: The request must include Accept-Encoding with gzip or br for compression.
  • Dynamic content: CloudFront does not consistently compress dynamic content. It applies compression based on internal criteria that are not fully documented. Despite thorough attempts to analyze its behavior, I couldn’t identify a reliable or consistent pattern for when dynamic content is compressed.
  • File types must be supported by CloudFront: Only certain file types are eligible for compression; see supported file types.
  • Object size limits: CloudFront compresses only objects between 1,000 bytes and 10,000,000 bytes.
  • Content-Length header must be present: The origin’s response must include a valid Content-Length header for CloudFront to determine if the object size is within the compressible range.
  • Response status code must be 200, 403, or 404: Compression applies only to these HTTP status codes.
  • Response must have a body: CloudFront does not compress responses without a body (e.g., HTTP 204 No Content).

Note: If content is already cached when you enable compression, it won’t be compressed. To ensure compression is applied, you’ll need to invalidate the cached objects so CloudFront can fetch them from the origin and compress them.

It’s also important to understand that CloudFront compresses objects on a best-effort basis, so you shouldn’t rely on compression as a critical part of your stack:

https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/ServingCompressedFiles.html#compressed-content-cloudfront-notes

CloudFront compresses objects on a best-effort basis. In rare cases, CloudFront skips compression. CloudFront makes this decision based on a variety of factors, including host capacity. If CloudFront skips compression for an object, it caches the uncompressed object and continues to serve it to viewers until the object expires, is evicted, or is invalidated.
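The eligibility rules listed above can be expressed as a simple predicate. The thresholds and status codes come from the AWS documentation; the function itself is only an illustration, not an official API, and it ignores the undocumented dynamic-content and best-effort caveats:

```python
# Illustrative sketch of CloudFront's compression eligibility rules.
# Thresholds and status codes are from the AWS docs; this is not an
# official API and omits the best-effort / dynamic-content caveats.
COMPRESSIBLE_STATUS = {200, 403, 404}
MIN_SIZE, MAX_SIZE = 1_000, 10_000_000  # bytes

def is_compression_eligible(accept_encoding, content_length, status,
                            has_body=True):
    """Return True if a response could be compressed by CloudFront."""
    if not has_body or status not in COMPRESSIBLE_STATUS:
        return False
    if content_length is None:  # origin must send Content-Length
        return False
    if not (MIN_SIZE <= content_length <= MAX_SIZE):
        return False
    # The viewer must advertise gzip or br in Accept-Encoding.
    encodings = {e.split(";")[0].strip() for e in accept_encoding.split(",")}
    return bool({"gzip", "br"} & encodings)
```

For example, a 5 KB JSON response to a request with `Accept-Encoding: gzip, br` passes every check, while a 500-byte response fails on the size limit.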

Staging CDN Distributions

The primary purpose of Staging CDN Distributions is to allow you to test changes to your CDN configuration without affecting your real users.

When you make a configuration change, you can use either weight-based routing or header-based routing to direct a portion of traffic to the staging CDN for testing.

The Staging Distribution does not have its own specific domain or endpoint. Instead, viewers send their requests to the Primary (production) Distribution, and CloudFront routes some of those requests to the staging distribution based on the traffic configuration settings defined in the continuous deployment policy.

In some cases, you may need to verify that a request reaches your origin without being served from the cache. This is useful when your Production Distribution is live and you don’t want to invalidate its cache just for testing. Instead, you can disable caching on the Staging Distribution and configure header-based traffic routing; by appending the routing header to your HTTP request, you ensure the request bypasses the cache and goes directly to the origin.
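For header-based routing, CloudFront requires the routing header name to start with the `aws-cf-cd-` prefix. A minimal sketch of sending such a request with the standard library (the header name, value, and domain here are assumptions):

```python
# Sketch: a request carrying a continuous-deployment routing header.
# CloudFront requires the header name to start with "aws-cf-cd-"; the
# exact name/value and the domain below are made-up examples.
import urllib.request

def staging_request(url, header_name="aws-cf-cd-staging",
                    header_value="true"):
    """Build a request that a header-based routing policy can match."""
    return urllib.request.Request(url, headers={header_name: header_value})

req = staging_request("https://d123456789.cloudfront.net/metadata.json")
# urllib normalizes header names with str.capitalize(), so the header is
# stored as "Aws-cf-cd-staging". Sending it would be:
# urllib.request.urlopen(req)
```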

For more details, refer to the AWS documentation on continuous deployment.


Logging

CloudFront offers two options for logging:

  1. Standard Logs
    • Delivered to an S3 bucket within 5–10 minutes.
    • Contains less detailed information but sufficient for basic logging needs.
  2. Real-Time Logs
    • Configured through an Amazon Kinesis data stream and can be delivered to S3 or CloudWatch log groups (for example, via Kinesis Data Firehose).
    • Provides more detailed and near real-time logging, useful for in-depth analysis.

Here is an example log entry from a CDN serving static files:

date: 2024-08-07
time: 06:41:17
x-edge-location: DUB56-P2
sc-bytes: 1540
c-ip: 1.2.3.4
cs-method: GET
cs(Host): d123456789.cloudfront.net
cs-uri-stem: /metadata.json
sc-status: 200
cs(Referer): -
cs(User-Agent): -
cs-uri-query: -
cs(Cookie): -
x-edge-result-type: Hit
x-edge-request-id: asdhiuashduihd8712d128d1udbhj12bdhj1bd==
x-host-header: d123456789.cloudfront.net
cs-protocol: https
cs-bytes: 149
time-taken: 0.001
x-forwarded-for: -
ssl-protocol: TLSv1.3
ssl-cipher: TLS_AES_128_GCM_SHA256
x-edge-response-result-type: Hit
cs-protocol-version: HTTP/1.1
fle-status: -
fle-encrypted-fields: -
c-port: 53218
time-to-first-byte: 0.001
x-edge-detailed-result-type: Hit
sc-content-type: application/json
sc-content-len: 728
sc-range-start: -
sc-range-end: - 

Some important log fields worth explaining:

  • The date and time fields - The time when CloudFront finished processing the request and prepared the response to send to the client. This reflects when the response was ready for transmission, not when the client actually received it, which depends on network latency.
  • x-edge-location - Refers to the edge location (Point of Presence or POP) used to serve the client’s request.
  • sc-* - (Server to Client): Represents data sent from CloudFront to the client.
    • sc-bytes - Number of bytes sent from CloudFront to the client.
    • sc-status - HTTP status code returned by CloudFront to the client.
  • c-* - (Client): Fields describing the client connection.
    • c-ip - The IP address of the client making the request.
    • c-port - The port used by the client for the connection.
  • cs-* - (Client to Server): Represents data sent from the client to CloudFront.
    • cs(Host) - The domain name of the CloudFront distribution.
    • cs-method - The HTTP method (e.g., GET, POST) used by the client.
    • cs-uri-stem - The path portion of the URL, excluding any query strings.
  • x-edge-result-type - Indicates whether the request resulted in a cache hit, miss, or other result.
  • x-edge-request-id - A unique identifier for each request processed by CloudFront.
  • x-host-header - Different from cs(Host): while cs(Host) contains the distribution’s own domain name, x-host-header contains the Host header value the viewer actually sent, which differs when the viewer uses an alternate domain name (CNAME).
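On disk, a standard log file starts with `#Version` and `#Fields` header lines followed by tab-separated records, one per request. A minimal parser sketch (the sample data below is made up, with only a few of the fields from the example above):

```python
# Sketch: parsing CloudFront standard log text. Standard logs begin with
# "#Version" and "#Fields" header lines; records are tab-separated.
# The sample below is fabricated and uses only a subset of the fields.
def parse_standard_log(text):
    """Return each log record as a dict keyed by the #Fields names."""
    fields, records = [], []
    for line in text.splitlines():
        if line.startswith("#Fields:"):
            fields = line[len("#Fields:"):].split()
        elif line and not line.startswith("#"):
            records.append(dict(zip(fields, line.split("\t"))))
    return records

sample = (
    "#Version: 1.0\n"
    "#Fields: date time x-edge-result-type sc-status cs-uri-stem\n"
    "2024-08-07\t06:41:17\tHit\t200\t/metadata.json\n"
)
entries = parse_standard_log(sample)
```

With this parser, `entries[0]["x-edge-result-type"]` gives the cache result (`Hit`) and `entries[0]["cs-uri-stem"]` the requested path.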

Standard Logs vs. Real-Time Logs

  • Standard Logs:
    • Cheaper, slower to deliver (5–10 minutes), and less detailed.
    • Best suited for regular operations where detailed insights are not required.
  • Real-Time Logs:
    • More detailed and provide near real-time insights.
    • Useful for analyzing latency, errors, or other in-depth metrics between CloudFront and the origin.
    • More expensive, as you need to configure an Amazon Kinesis Data Stream to ingest the logs.

Recommendation: Enable standard logs for day-to-day monitoring and operations. Use real-time logs only when detailed analysis is required, as they are more costly and resource-intensive.

Final Thoughts

This article doesn’t cover everything about CDNs, such as in-depth details on cache policies, CORS, edge functions, security, or managing cache behavior at the origin using cache-control headers. Covering these topics in detail would make this article much longer, so I’ve focused on the essentials for now.

In general, I recommend using a CDN when serving static content, especially if your clients are in regions far from your server. A CDN significantly reduces latency and improves performance in these scenarios.

Using a CDN for dynamic content, however, requires more configuration and careful consideration. Unless you have a strong use case or specific requirements, I’d suggest avoiding CDNs for dynamic content as the added complexity may not justify the benefits.