We may want to take advantage of the SSR capabilities in React 18: rendering our React frontend in a serverless environment, and suspending parts of the application without needing to render the whole thing up front. We will start by looking at a plain React 18 example.

Our serverless.yml file for the frontend serverless service:

service: chorobin-ssr-react-dev-frontend
 
provider:
 name: aws
 runtime: nodejs14.x
 region: ${env:AWS_REGION}
 stage: dev
 
functions:
 render:
   handler: src/server/index.handler
   events:
     - http: 'GET /'
     - http: 'GET /{proxy+}'
 
plugins:
 - serverless-output-file-tracing-plugin
 - serverless-esbuild
 
custom:
 esbuild:
   bundle: true
   minify: false
   exclude: '*'

The aim here is to deploy a serverless function to AWS which renders our React 18 application.

The service and provider sections are self-explanatory to developers who have used the serverless framework before. The functions section defines the AWS Lambda that renders our React application, along with http events that set up an AWS API Gateway and forward all requests to the lambda function.

The more interesting section is plugins. We have written our own custom plugin for Output File Tracing (which makes use of @vercel/nft). Output File Tracing is a feature of NextJS that analyses the requires/imports in our code and produces a list of files to copy over to our build artifacts. This vastly reduces the size of the node_modules folder we deploy to AWS. The serverless-esbuild plugin bundles our lambda function for deployment but excludes dependencies from the bundle. You could of course skip Output File Tracing and bundle dependencies as well. In simple examples this will likely work, but for NextJS 12 it will not, and there are other situations where you need to keep packages in node_modules rather than bundling them.
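
To give an idea of what our plugin does under the hood, here is a minimal sketch of tracing a lambda entry point with @vercel/nft. The entry path is hypothetical and depends on where your build output lives.

import { nodeFileTrace } from '@vercel/nft';

const traceHandler = async (entry: string): Promise<string[]> => {
  // nodeFileTrace statically analyses require()/import statements,
  // starting from the given entry files
  const { fileList } = await nodeFileTrace([entry]);
  // fileList contains every file the entry transitively depends on,
  // including what it needs from node_modules
  return [...fileList];
};

// hypothetical build output path
traceHandler('.build/src/server/index.js').then((files) => {
  console.log(`copying ${files.length} traced files into the artifact`);
});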

Now let's look at our lambda function, whose contents look like this:

import type { Request, Response } from 'express';
import serverless from 'serverless-http';
import { render } from './render';
 
export const handler = serverless((req: Request, res: Response) => {
 render(req, res);
});

We usually write everything in TypeScript. Only types are imported from express, which ensures that express itself is neither bundled nor included in the serverless artifact. serverless-http provides a compatibility layer between an express-style application and AWS Lambda. A separate render function is required because the serverless framework only supports .ts extensions for AWS Lambda handlers (not .tsx). The lambda function is defined by wrapping a function with serverless-http.
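
To make the compatibility layer concrete, here is a hypothetical local invocation of the wrapped handler with a stripped-down API Gateway-style event. The event below only contains the fields needed for the example, and assumes a promise-based version of serverless-http.

import { handler } from './src/server';

// a stripped-down API Gateway proxy event (hypothetical values)
const event = {
  httpMethod: 'GET',
  path: '/',
  headers: { host: 'localhost' },
  queryStringParameters: null,
  body: null,
  isBase64Encoded: false,
};

// serverless-http turns the event into a Node request, runs our express-style
// function against it, and resolves with an API Gateway-shaped response
handler(event as any, {} as any).then((response: any) => {
  console.log(response.statusCode);
});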

The actual render function looks like this (.tsx):

import type { Request, Response } from 'express';
import * as React from 'react';
import * as ReactDOMServer from 'react-dom/server';
import { Transform } from 'stream';
import { App } from '../App';
 
const toBufferTransform = new Transform({
 transform(chunk: Uint8Array, _, callback) {
   callback(null, Buffer.from(chunk));
 },
});
 
export const render = (_: Request, response: Response) => {
 const stream = ReactDOMServer.renderToPipeableStream(<App />, {
   onShellReady() {
     response.statusCode = 200;
     response.setHeader('Content-type', 'text/html');
     stream.pipe(toBufferTransform).pipe(response);
   },
 });
};

This is where things start to get a bit more interesting. A transform stream is required to convert each Uint8Array chunk into a Buffer, because React 18 streams its output as Uint8Array and serverless-http does not support this data type. The render function calls the new ReactDOMServer.renderToPipeableStream from React 18. It is important to note the hooks this function accepts: onShellReady is called once everything outside the Suspense boundaries has rendered. Unfortunately AWS Lambda does not support Web Streams (Deno does), which would enable React to stream further updates to the client as the Suspense boundaries finish rendering after the shell has been sent.
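
To make the shell concrete, here is a minimal sketch of what an App with a Suspense boundary could look like. The lazy-loaded Comments component is hypothetical; everything outside the boundary is the shell that onShellReady waits for.

import * as React from 'react';

// hypothetical component loaded behind a Suspense boundary
const Comments = React.lazy(() => import('./Comments'));

export const App = () => (
  <html>
    <body>
      <h1>Hello from AWS Lambda</h1>
      {/* everything above this boundary is the shell; the fallback is what
          the lambda sends for the boundary itself, since it cannot stream */}
      <React.Suspense fallback={<p>Loading comments...</p>}>
        <Comments />
      </React.Suspense>
    </body>
  </html>
);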

When we package our serverless artifact it is way under the size limits set by AWS. Remember that AWS restricts Lambda deployments to 50MB zipped and 250MB unzipped. Frontend dependencies can get very large, which is why it is important to optimise this. With Output File Tracing and esbuild, our simple example is 211KB zipped and 680KB unzipped.

NextJS 12+

There are a number of ways to build serverless applications with NextJS. We needed something which would fit well into our current tech stack, using the traditional serverless framework. There used to be a ‘serverless’ build target which bundled NextJS into individual functions, but with NextJS 12 this has been deprecated. The problem with getting a NextJS server into a serverless environment is the size of the deployment artifact, and Output File Tracing solves this by reducing the size of the dependencies we ship.

A simple example of a serverless.yml file for NextJS could look like this:

service: chorobin-ssr-react-frontend-next
 
provider:
 name: aws
 runtime: nodejs14.x
 region: ${env:AWS_REGION}
 
package:
 patterns:
   - '.next'
 
functions:
 render:
   handler: handlers/index.handler
   events:
     - http: 'GET /'
     - http: 'GET /{proxy+}'
 
plugins:
 - serverless-output-file-tracing-plugin
 - serverless-esbuild
 
custom:
 esbuild:
   plugins: plugins.js
   bundle: true
   minify: true
   exclude: '*'
 outputFileTracing:
   additionalFiles:
     - ./pages/index.tsx

This is extremely similar to our React 18 example, with the exception of packaging the .next folder into the serverless artifact. Output File Tracing traces our AWS Lambda for dependencies, which the plugin then includes when packaging. We also add a page under additionalFiles so that anything the pages themselves depend on is traced and included.

And our AWS Lambda can look like this:

import type { Request, Response } from 'express';
import serverless from 'serverless-http';
import { parse } from 'url';
import NextServer from 'next/dist/server/next-server';
 
import { config } from '../.next/required-server-files.json';
import { NextConfig } from 'next';
 
const server = new NextServer({
 dev: false,
 dir: __dirname,
 conf: {
   ...(config as NextConfig),
   distDir: '../.next',
 },
});
 
const handle = server.getRequestHandler();
 
export const handler = serverless(async (req: Request, res: Response) => {
 const parsedUrl = parse(req.url, true);
 await handle(req, res, parsedUrl);
});

We currently need to create a NextServer with the config provided in the .next build, and get the request handler from the NextServer. The request handler is then called on each invocation of the lambda function.
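
A nice property of this setup is that the same request handler can be exercised outside of Lambda. A hypothetical local smoke test could mount it in a plain express app instead of serverless-http (this assumes the NextServer instance above is exported as server):

import express from 'express';
import { parse } from 'url';
import { server } from './handlers'; // assumption: the NextServer instance is exported

const handle = server.getRequestHandler();
const app = express();

// forward every request to the NextServer handler, exactly as the lambda does
app.all('*', (req, res) => handle(req, res, parse(req.url, true)));

app.listen(3000, () => console.log('listening on http://localhost:3000'));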

There are a few things which we have not covered yet. First of all, the client cannot yet download the static assets required to run the NextJS application. The most efficient way to allow this is to store the NextJS assets in an S3 bucket, so our lambda is not called to serve them.

We can create a bucket in the resources section of the serverless.yml file:

resources:
 Resources:
   FrontendBucket:
     Type: AWS::S3::Bucket
     Properties:
       BucketName: ${self:service}

This creates the bucket on deployment but we also need to upload the static assets to S3 after deploying the stack. For this we can use the serverless-finch plugin.

plugins:
 - serverless-output-file-tracing-plugin
 - serverless-esbuild
 - serverless-finch

And define which bucket we want to upload our static assets to under the client custom configuration:

custom:
 esbuild:
   plugins: plugins.js
   bundle: true
   minify: true
   exclude: '*'
 outputFileTracing:
   additionalFiles:
     - ./pages/index.tsx
 client:
   bucketName: ${self:service}
   distributionFolder: dist

It is important to note that we need to copy our static assets from the .next build into the dist folder from which serverless-finch will upload them. You can write a simple script for this.

rm -rf dist
 
mkdir -p dist/_next
 
cp -R ./.next/static dist/_next/static

The script cleans up the dist directory, creates a fresh dist/_next directory and copies the static files over from the .next build.

The next step is to create a CloudFront distribution to cache requests and forward them either to the AWS Lambda or to the S3 bucket, depending on whether a static asset was requested.

resources:
 Resources:
   FrontendBucket:
     Type: AWS::S3::Bucket
     Properties:
       BucketName: ${self:service}
   FrontendDistribution:
     Type: AWS::CloudFront::Distribution
     Properties:
       DistributionConfig:
         Enabled: true
         Origins:
           - Id: render
             CustomOriginConfig:
               OriginProtocolPolicy: https-only
             DomainName:
               {
                 'Fn::Join':
                   [
                     '',
                     [
                       { 'Ref': 'ApiGatewayRestApi' },
                       '.execute-api.${self:provider.region}.amazonaws.com',
                     ],
                   ],
               }
             OriginPath: '/${self:provider.stage}'
           - Id: bucket
             CustomOriginConfig:
               OriginProtocolPolicy: http-only
             DomainName:
               {
                 'Fn::Select':
                   [
                     1,
                     {
                       'Fn::Split':
                         ['http://', { 'Fn::GetAtt': ['FrontendBucket', 'WebsiteURL'] }],
                     },
                   ],
               }
         DefaultCacheBehavior:
           AllowedMethods: ['GET', 'HEAD', 'OPTIONS']
           CachedMethods: ['HEAD', 'GET']
           Compress: true
           ForwardedValues:
             QueryString: true
           DefaultTTL: 3600
           TargetOriginId: render
           ViewerProtocolPolicy: redirect-to-https
         CacheBehaviors:
           - AllowedMethods: ['GET', 'HEAD', 'OPTIONS']
             CachedMethods: ['HEAD', 'GET']
             Compress: true
             TargetOriginId: bucket
             ForwardedValues:
               QueryString: false
             ViewerProtocolPolicy: redirect-to-https
             PathPattern: /_next/static/*

Important to note here is that we have two origins: one pointing at the AWS API Gateway which calls our lambda function, and the other pointing at our S3 bucket. We also have two cache behaviors: the default, which targets our render origin (the AWS Lambda), and a second, which targets the bucket. The PathPattern is what ensures requests under /_next/static/* are in fact served from the S3 bucket.

Closing thoughts

We have shown two examples of SSR serverless. One using plain React 18 and the other using NextJS 12+. Both currently require a fair amount of manual setup in order to get things deployed and working correctly.

  • We need to define a render function for both
  • We need to create a NextJS custom server in a lambda function
  • We need Output File Tracing provided by @vercel/nft in order to shrink the serverless artifacts so they are deployable. We have written our own plugin which wraps @vercel/nft during the creation of artifacts
  • We need esbuild to bundle our lambda function
  • We need an S3 bucket to store our static assets
  • We need CloudFront to forward requests to either the AWS Lambda or the S3 bucket

When we’re building and deploying our NextJS application we need to do the following steps, tied together in the sketch after this list:

  • Build NextJS using next build
  • Run serverless deploy to create resources (S3 Bucket, CloudFront) and our AWS Lambda
  • Prepare our static assets upload with a script to copy it to the correct location
  • Run serverless client deploy to upload the static assets to S3
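
As a rough illustration, these steps could be tied together in a small deploy script. This is a sketch only: prepare-static-assets.sh is a hypothetical name for the copy script shown earlier, and the flags assume default stages and regions.

import { execSync } from 'child_process';

const run = (command: string) => execSync(command, { stdio: 'inherit' });

run('npx next build');                            // 1. build the NextJS application
run('npx serverless deploy');                     // 2. create resources (S3 bucket, CloudFront) and the AWS Lambda
run('sh ./prepare-static-assets.sh');             // 3. hypothetical name for the copy script shown earlier
run('npx serverless client deploy --no-confirm'); // 4. upload the static assets to S3 via serverless-finch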

There is quite a lot to this, and there is potential future work. My thought was to create a separate serverless framework plugin specifically for deploying NextJS with the traditional serverless framework. I would still keep it separate from the Output File Tracing plugin, because that plugin is useful by itself: we might want to use it to optimise the artifacts of other serverless functions too. There are also topics not discussed here. For example, you might want to use Lambda@Edge instead. We valued the ability to tear down our stack more than the performance Lambda@Edge provides, but I would like to make this configurable, or leave it down to the specific use case of a product.