Dockerize your JavaScript web applications efficiently

March 12, 2020 · 8 min read

So, you’ve built an amazing application with your favorite framework such as Vue.js or React.js and now it’s time to build a Docker image and ship it.

This blog post will iterate over the steps needed to prepare a production Docker image of your application, with some details and solutions to problems you might encounter on the way.

The Dockerfile

The first step is to create a Dockerfile, which defines how the Docker image is built.

In this first version, we extend the node:lts-alpine image; this ensures Node.js and npm are available:

FROM node:lts-alpine

# install simple http server for serving static content
RUN npm install -g http-server

# make the 'app' folder the current working directory
WORKDIR /app

# copy both 'package.json' and 'package-lock.json' (if available)
COPY package*.json ./

# install project dependencies
RUN npm install

# copy project files and folders to the current working directory (i.e. 'app' folder)
COPY . .

# build app for production with minification
RUN npm run build

EXPOSE 8080
CMD [ "http-server", "dist" ]
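Since COPY . . copies the whole build context into the image, it is also worth adding a .dockerignore file next to the Dockerfile so that local artifacts such as node_modules or a previously built dist folder never end up in the context. A minimal sketch (adapt the entries to your project):

node_modules
dist
.git
*.log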

Then let’s build the image (here I used a blank Vue.js application built with vue create blog):

docker build -t my-blog:latest .

The current Dockerfile is enough to create the image, and your application will run.
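You can already try it locally; http-server listens on port 8080 by default, so publishing that port is enough to browse the application on localhost:

docker run -it --rm -p 8080:8080 my-blog:latest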

There is already one optimization in this file that you should notice: we first copy package*.json, install the dependencies, and only then copy the rest of the application.

The main advantage is re-using Docker layers: as long as package*.json does not change, the dependency installation layer is taken from the build cache, as explained here.
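For example, a rebuild after a source-only change (assuming the default Vue CLI layout with a src folder) only re-runs the copy and build steps and is considerably faster:

touch src/App.vue
docker build -t my-blog:latest .   # the dependency installation layer comes from the cache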

However, other problems remain:

  • an npm install might install different dependency versions than those pinned in the lockfile (package-lock.json)
  • you will ship the full development environment (dev dependencies, node_modules, ...), producing unnecessarily large images
  • the web server used (http-server) is not suitable for production
  • a change to a runtime value, such as the API URL used by your application, will require a full rebuild of the image

Let’s solve those problems together now!

npm install vs npm ci

From the npm documentation:

npm ci is similar to npm install except it’s meant to be used in automated environments, such as continuous integration and deployments.

The main differences from npm install are:

  • The project must have an existing package-lock.json or npm-shrinkwrap.json.
  • If dependencies in the package lock do not match those in package.json, npm ci will exit with an error, instead of updating the package lock.
  • npm ci can only install entire projects at a time, individual dependencies cannot be added with this command.
  • If a node_modules is already present, it will be automatically removed before npm ci begins its install.
  • It will never write to package.json or any of the package-locks, installs are essentially frozen.

Updated Dockerfile:

FROM node:lts-alpine

# install simple http server for serving static content
RUN npm install -g http-server

# make the 'app' folder the current working directory
WORKDIR /app

# copy both 'package.json' and 'package-lock.json' (if available)
COPY package*.json ./

# install project dependencies
RUN npm ci

# copy project files and folders to the current working directory (i.e. 'app' folder)
COPY . .

# build app for production with minification
RUN npm run build

EXPOSE 8080
CMD [ "http-server", "dist" ]

In short, the usage of npm ci is preferable when we need deterministic and repeatable builds, such as in our use case.
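As a minimal sketch of what this gives you in an automated pipeline (assuming package-lock.json is committed to the repository):

# npm ci exits with an error if the lockfile is missing or out of sync with package.json,
# so a broken dependency state fails the build instead of being silently "fixed"
npm ci
npm run build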

Multi-Stage Docker builds

Multi-Stage Docker builds will help us to create smaller Docker images, containing only the files needed for a production environment (the result of the npm run build command).

Analysing the image size of the first build, we can see that the image is 327MB!

ikwattro@mbp666 ~/d/g/b/v/blog> docker image ls | grep 'blog'
my-blog    latest    01166ba58cc5    2 minutes ago    327MB

Simply put, the same Dockerfile will contain two stages: one that builds the production files, and a second that copies those files from the first stage and produces the final image.

Let’s see it in action with an updated Dockerfile:

# First stage, build the application
FROM node:lts-alpine as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Second stage, copy the artifacts in a new stage and
# build the image
FROM nginx:stable-alpine
COPY --from=build-stage /app/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

Only the last stage will be shipped in the produced Docker image.
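A nice side effect of multi-stage builds is that an intermediate stage can still be built on its own, for example to inspect or debug the build environment (the my-blog:build tag below is just an example name):

docker build --target build-stage -t my-blog:build .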

Let’s rebuild the image and check its size now:

ikwattro@mbp666 ~/d/g/b/v/blog> docker image ls | grep 'blog'
my-blog    latest    4dbb05834367    About a minute ago    77MB

77MB! That’s a 250MB win! Why?

Since the production artifacts of a single page application are just HTML, CSS and JavaScript files, all we need is those files and a web server to serve them. We don’t even need npm anymore. In this last stage, we also replaced http-server with NGINX, so neither npm nor http-server ends up in the final image.
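If you are curious about where the remaining megabytes come from, docker history lists the size of each layer of the final image:

docker history my-blog:latest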

Runtime variables in HTML, eh?

A production build of any modern frontend application is nothing more than plain HTML, minified JavaScript and CSS files. In such a scenario, there is no concept of environment variables (e.g. the location of the API server or the value of a feature flag) that we could change without having to rebuild the application.
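You can see this for yourself: the bundler inlines such values as literal strings during npm run build. A quick, hypothetical check (the URL and the dist/js/app.*.js path are just examples, based on the default Vue CLI output layout):

grep -o 'https://api.example.com' dist/js/app.*.js   # the build-time value is baked into the bundle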

Unsurprisingly, there are multiple solutions to circumvent this; we will present two of them here.

  • When launching the Docker image, we pass the location of the API URL as a Docker environment variable
  • We use a custom NGINX configuration that serves a cookie named WEB_SERVER_CONFIG. Its value is a JSON string containing the environment variable
  • The Dockerfile command uses envsubst, a GNU gettext utility that performs environment variable substitution. It replaces the variables in the NGINX configuration file with the values given when launching the container
  • The web application, when starting, reads the cookie and stores the API URL it contains in the application state

First, let’s create the NGINX configuration file, for example in docker/nginx/nginx-default.conf.template:

server {
  listen $PORT default_server;
  listen [::]:$PORT default_server;

  root /usr/share/nginx/html;
  index index.html;
  server_name web-server;

  location / {
    root /usr/share/nginx/html;
    index index.html;
    add_header "Set-Cookie" 'WEB_SERVER_CONFIG={"baseUrl":"$WEB_API_URL"};path=/';
    try_files $uri $uri/ @rewrites;
  }

  location @rewrites {
    rewrite ^(.+)$ /index.html last;
  }

  location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
    # Some basic cache-control for static files to be sent to the browser
    expires max;
    add_header Pragma public;
    add_header Cache-Control "public, must-revalidate, proxy-revalidate";
  }
}

When NGINX starts with this configuration, the server will set a cookie named WEB_SERVER_CONFIG whose value is the JSON string {"baseUrl":"$WEB_API_URL"}, with the placeholder replaced by the actual API URL.

Let’s now make sure that when the container starts, the $WEB_API_URL variable is replaced by the value provided when launching the image. Only the production stage of the Dockerfile has to be updated:

# First stage, build the application
FROM node:lts-alpine as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# production stage
FROM nginx:stable-alpine as production-stage
COPY --from=build-stage /app/dist /usr/share/nginx/html
COPY ./docker/nginx/nginx-default.conf.template /etc/nginx/conf.d/default.conf.template
EXPOSE 80
CMD ["sh", "-c", "envsubst '$$WEB_API_URL $$PORT' < /etc/nginx/conf.d/default.conf.template > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"]
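Note that envsubst is given an explicit list of variables to substitute; without it, NGINX's own runtime variables such as $uri or $remote_addr would also be replaced (by empty strings). You can test the substitution locally before building the image, with example values:

export PORT=80 WEB_API_URL=http://my-server.io:8080
envsubst '$PORT $WEB_API_URL' < docker/nginx/nginx-default.conf.template | head -n 3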

Now, we need to read the cookie in our application. Here is one way of doing it:

const CONFIG_COOKIE_NAME = 'WEB_SERVER_CONFIG'

// set a default in case the cookie does not exist, like in
// development environments
var apiUrl = process.env.WEB_API_URL || 'http://localhost:8080/'

function getCookie(name) {
  let cookies = {}
  document.cookie.split(';').forEach(function (el) {
    let [k, v] = el.split('=')
    cookies[k.trim()] = v
  })
  return cookies[name]
}

if (getCookie(CONFIG_COOKIE_NAME) !== undefined) {
  let config = JSON.parse(getCookie(CONFIG_COOKIE_NAME))
  apiUrl = config['baseUrl']
}

// pass now apiUrl where it is needed in your application

Run the container and pass the API URL dynamically:

docker run -it --name blog -e WEB_API_URL=http://my-server.io:8080 -e PORT=80 -p 8080:80 --rm my-blog:latest
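You can quickly verify that the cookie is being set (this assumes the container port is published on localhost:8080 as in the command above):

curl -sI http://localhost:8080/ | grep -i set-cookie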

NGINX redirect and rewrites

Here, the frontend application sends all its API requests to a predefined path, such as /api/*. The NGINX server, knowing where the API is, rewrites any request starting with /api to the correct API URL (and also strips the /api prefix if needed).

The content of our NGINX configuration template now looks like this:

server {
  listen $PORT default_server;
  listen [::]:$PORT default_server;

  root /usr/share/nginx/html;
  index index.html;
  server_name web-server;

  # proxy API calls to the backend, stripping the /api prefix
  location /api/ {
    rewrite ^/api(/.*) $1 break;
    proxy_pass http://backend;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    # Following is necessary for Websocket support
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
  }

  location / {
    root /usr/share/nginx/html;
    index index.html;
    try_files $uri $uri/ @rewrites;
  }

  location @rewrites {
    rewrite ^(.+)$ /index.html last;
  }

  location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
    # Some basic cache-control for static files to be sent to the browser
    expires max;
    add_header Pragma public;
    add_header Cache-Control "public, must-revalidate, proxy-revalidate";
  }
}

You can run the container in the same manner as before:

docker run -it --name blog -e WEB_API_URL=http://my-server.io:8080 -e PORT=80 -p 8080:80 --rm my-blog:latest
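For proxy_pass http://backend to work, the hostname backend must resolve from inside the container, typically because the API runs in another container on the same user-defined Docker network. A minimal sketch, where the my-api:latest image name and the blog-net network are hypothetical:

docker network create blog-net
docker run -d --name backend --network blog-net my-api:latest
docker run -it --rm --name blog --network blog-net -e PORT=80 -p 8080:80 my-blog:latest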

That’s All Folks!

In this blog post we have covered some optimizations that are useful for building production Docker images of your JavaScript web applications, from reducing image size to using runtime variables in production HTML/JS builds.

If you have more tips and tricks, I would be happy to learn about them; please comment below!

