Dockerizing Your MERN Stack App: A Step-by-Step Guide
Are you tired of spending hours messing with configuration and installing packages in an attempt to run your app locally? Are you sick of that one missing dependency that stops the app from running and leaves you debugging for hours trying to find what's wrong? Then you've come to the right place. In this article, you will learn how to use Docker to develop and ship your software faster and more easily.
Let's Get Started
For me, the easiest way to understand a technology is to know the problem it solves. So why should we care? Why do we need Docker? Aren't we doing well with our application (the productivity app from earlier in this series)? It is fully operational, we have deployed it, and it is accessible over the internet. I don't see any issues here - what do you think? 🤔
Your developer friend now wants to add more features to it, so they clone your GitHub repository and attempt to run it locally. Say you are using Windows and your friend is using Linux; your machine has Node version 16 and your friend, for some reason, is using an older version. The project is fully compatible with your system and runs without issues. On your friend's computer it might work, or it might not. Unfortunately, it does not, due to the outdated Node version. Your friend is unaware of this and spends a significant amount of time attempting to find out what went wrong. They call and say your project isn't working, yet it works fine on your end. After some back and forth, you two finally figure out what went wrong. But you can't get that lost time back 😔.
Fortunately, there is a simple solution to this issue. What if I told you that you could give your computer to that friend without actually handing it to them? You can send your friend a magic file with configurations that worked on your system. Then, using a tool, your friend can unlock the power of this magical file and successfully run your project on his PC.
This tool is called Docker, and the magic file is called Dockerfile, which we will discuss later in this article.
What is Docker?
Docker is a containerization solution that allows you to package your entire software and run it from anywhere. In a file called Dockerfile, developers define project dependencies, instructions, and other variables. We can create an image and then a container using that Dockerfile.
Three main components of Docker
Let's learn some important basic components of docker.
A Docker image is a sealed package that contains the source code files for your project. It is immutable, which means that once built, the files in that image cannot be altered. You must rebuild the image to update files. These images are created using instructions written in Dockerfile, which we will learn more about later in this tutorial.
Images are read-only which means you can't run them. To run applications, we use containers. Containers are runnable instances of images.
We have two problems here,
To begin with, data saved in containers will be lost once they are stopped. But, there may be times when we need data to persist and be shared between containers.
Secondly, we are aware that docker images are read-only. However, in development, we frequently edit files. Every time we make a modification, we must rebuild the image. This will significantly reduce developer productivity and is a time-consuming process.
We can use volumes to overcome these problems. Docker volumes are a way to store data outside of a container's filesystem. They allow data to persist even if the container is deleted, and can also be used to share data between multiple containers.
Virtualization Vs Containerization
Before Docker, developers did this with Virtual Machines.
Virtualization involves creating a virtual version of a physical machine, including the operating system, on top of a host operating system. This allows multiple virtual machines to run on the same physical hardware, each with its operating system and resources. Examples of virtualization software include VMware and VirtualBox.
Containerization, on the other hand, involves packaging an application and its dependencies together in a container. Containers are lightweight and fast, and they share the host operating system kernel, making them more efficient than virtual machines. Example of containerization software - Docker.
But because virtual machines use a lot of resources and are often slow to start, developers moved to Docker.
Some Important Terms
A few terms you need to know before we start using Docker.
Docker daemon is similar to your brain (not exactly like a brain). It handles API requests as well as the management of Docker containers, images, volumes, and networks.
Docker Hub is a public registry where docker images can be found. While creating images, you can push them to Docker Hub so that others can use them.
Docker CLI is a tool that may be used to create/delete images, run/stop/kill containers, create/delete volumes, pull images from the Docker registry, and much more.
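For instance, here are a few everyday Docker CLI commands (the container and image names are illustrative):

```shell
docker images          # list local images
docker ps              # list running containers
docker pull node:16    # pull an image from Docker Hub
docker stop my-app     # stop a running container
docker volume ls       # list volumes
```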
- Download Docker Desktop
- Get the source code from here if you haven't followed the previous tutorials in this series.
- Read previous articles in this series so that you won't get confused.
Dockerizing React Application
Let's start by learning more about Dockerfile. A Dockerfile is a set of instructions for creating a Docker image. Imagine every instruction as a layer.
In the root of the `/client` folder, create a file named `Dockerfile` without any extension.
This is what a basic Dockerfile for React application consists of:
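Here is a minimal sketch of such a Dockerfile (the `node:16-alpine` tag matches the Node 16 mentioned earlier; adjust it to your version):

```dockerfile
# Layer 1: base image - node uses alpine (a Linux distribution) as its base
FROM node:16-alpine

# Layer 2: set the working directory inside the container
WORKDIR /app

# Layer 3: copy package.json first to take advantage of layer caching
COPY package.json .

# Layer 4: install dependencies
RUN npm install

# Layer 5: copy the rest of the source files
COPY . .

# Layer 6: the React dev server listens on port 3000
EXPOSE 3000

# Layer 7: command to execute when the container starts
CMD ["npm", "start"]
```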
- Layer 1 - Begin by specifying a base image. We already know that React makes use of NodeJs. Consider a container to be a brand-new computer. To run the React app on that Computer, you must first install node. (node uses alpine (a Linux distribution) as its base image.)
- Layer 2 - After installing node, Docker creates a folder called `app` in that machine and uses it as the working directory for the rest of the instructions. We could alternatively use the root directory (`/`), however, this will cause problems with docker-generated files.
- Layer 3 - Next, docker copies the `package.json` file into the working directory.
💡Big Brain Time: Why not copy all the files? This is a time-saving technique used by developers. There is a concept known as Layer caching where Docker uses a cached layer when rebuilding the image if that layer does not change. If you copy all of the files before installing dependencies, Docker will re-install dependencies whenever you make a modification to your files and rebuild the image.
- Layer 4 - Install the dependencies specified in `package.json` by running the command `npm install`.
- Layer 5 - Copy the rest of the files.
- Layer 6 - Tell docker that the container will listen on port `3000`.
- Layer 7 - Tell Docker the command to execute after the container has started. We can't put `RUN npm start` here, since the image can't run the app because it's read-only.
When we copy files to Docker, we don't want it to copy unnecessary files like `README.md` or large folders like `node_modules`. So we can create a file called `.dockerignore` and define which files we don't want to be copied.
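A minimal `.dockerignore` for this setup might look like:

```
node_modules
README.md
.git
```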
It is now time to build the image. Run the following command in the terminal.
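A build command along these lines should work (`react-app` is just an example tag; run it from the `/client` folder):

```shell
docker build -t react-app .
```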
- As you can see above, our image is ready.

`-t` means tag. We can give our image a tag to later use as a reference to run the container and delete the image.

`.` indicates the directory that contains the Dockerfile.
Now you can push this image to the docker hub and let others use it.
Okay, let's see how to run this image to create a container that will run our application.
You can run this image with the following command,
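For example (the container name and image tag here follow the examples above; adjust to yours):

```shell
docker run --name react-app-container -p 3000:3000 -d react-app
```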
`--name` flag allows us to specify a name for our container.

`-p` flag allows us to map the container port `3000` to a port on our local computer, allowing us to view the application in our local browser. (Remember that each container is a brand new computer, thus we need to tell Docker to map that port to our local PC port.)

`-d` runs the container in detached mode, meaning the container runs in the background, so we can use the terminal for other tasks.

- See, the container has been created and is running on port `3000`. You can click on that container and see the logs as well.
I previously mentioned that because images are read-only, we must rebuild them whenever we make changes to source code files or install new dependencies and want to see live changes. However, this is a time-consuming procedure. Fortunately, there is a solution. We can use Volumes to store data (files, etc.) permanently and map the volumes to containers. As a result, any changes we make will be reflected in the container.
Now the command becomes:
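A sketch of the extended command, assuming the same names as before (the extra anonymous `/app/node_modules` volume stops the bind mount from hiding the dependencies installed inside the image):

```shell
docker run --name react-app-container -p 3000:3000 -v "$(pwd):/app" -v /app/node_modules -d react-app
```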
This command grew in length. But don't worry, I'll show you how to put this command into a single file and run the container with a simple command later in this article.
Hot reload for React apps in docker
Dockerizing NodeJs Application
In the same way, try to dockerize the NodeJs app.
Since React depends on NodeJs and this is a NodeJs application, the Dockerfile will be almost identical.
Before you build the image, you need to make a small change in `package.json`. Update the dev script. (Why?)
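One common reason for this change is that nodemon's default file watcher doesn't always detect changes inside a container, so legacy polling mode is enabled. A sketch, assuming nodemon and an `index.js` entry file:

```json
"scripts": {
  "dev": "nodemon -L index.js"
}
```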
Build the image.
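For example, from the `/server` folder (`node-app` is an example tag):

```shell
docker build -t node-app .
```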
Run the image to build the container.
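Something along these lines, assuming the server listens on port 5000 (adjust to your port):

```shell
docker run --name node-app-container -p 5000:5000 -d node-app
```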
- See, the server app is running.
We currently have two applications that must be managed separately. If your friend wants to try your project, even with Docker, he or she will have to set up two applications, build Docker images, and do everything separately. Furthermore, the commands are lengthy. What if you could start your entire project with a single, short command? 🤯
Let me introduce you to Docker Compose. You can use docker-compose to define and execute numerous containers with a single command. To accomplish this, we would first create a YAML file in which we configure all of the services.
Let's do this
Combine the folders

Because we define both our client and server apps in a single YAML file, we must place both app directories in the same folder. So make a folder for the project and place the client and server directories in it.

Create a file called `docker-compose.yml` in the project folder. This is the folder structure:
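A layout along these lines (the files inside `client` and `server` will vary with your project):

```
project/
├── client/
│   ├── Dockerfile
│   ├── .dockerignore
│   └── ... (React source)
├── server/
│   ├── Dockerfile
│   ├── .dockerignore
│   └── ... (NodeJs source)
└── docker-compose.yml
```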
What goes in this file?
- I left comments as an explanation.
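A sketch of what the file might contain (the service names and the server port 5000 are assumptions; adjust them to your setup):

```yaml
version: "3.8"
services:
  client:
    build: ./client          # build the image from client/Dockerfile
    ports:
      - "3000:3000"          # map container port 3000 to host port 3000
    volumes:
      - ./client:/app        # bind mount so code changes reach the container
      - /app/node_modules    # keep the container's own node_modules
  server:
    build: ./server
    ports:
      - "5000:5000"
    volumes:
      - ./server:/app
      - /app/node_modules
```

With this in place, `docker-compose up` starts both containers, and `docker-compose down` stops and removes them.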
Pushing to Docker Hub
If you want people to be able to use your image, you can push Docker images to Docker Hub and let them pull from there. This is also useful if you are building something that can serve as a base image for other projects.
- Go to dockerhub and sign up/log in to your account.
- Click on Create a Repository.
- Give your repo a name and a description, set it to private or public, and then click Create.
- Open the repo and terminal. Delete previous images.
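For example, assuming the container names and image tags used earlier (remove the containers first if the images are in use):

```shell
docker rm -f react-app-container node-app-container
docker rmi react-app node-app
```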
- Build a new image with the name `<hub-username>/<repo-name>[:<tag>]`. The tag is optional here; however, if you are pushing multiple images to the same repo, include one. (⚠️ Make sure you are in the correct directory before running the command.)
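For example, from the `/client` folder (fill in the placeholders with your own username and repo name; `client` is an example tag):

```shell
docker build -t <hub-username>/<repo-name>:client .
```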
Do the same for the server image.
- If this is your first time pushing, make sure you're logged in to Docker.
- Push the image as the last step.
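Assuming the example `:client` and `:server` tags from the build step:

```shell
docker login
docker push <hub-username>/<repo-name>:client
docker push <hub-username>/<repo-name>:server
```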
- Now, if you go to your repository on the docker hub, you will see your images.
- Since these images are public, anyone can pull them and use them.
I've merely scratched the surface here. These resources can help you learn more about Docker:
- Docker Crash Course playlist by The Net Ninja.
- Docker Tutorial for Beginners by Kunal Kushwaha. (See this video for Docker theory.)
What do you think about Docker? Leave a comment.
I hope you understand why we need Docker and how to use it. Subscribe to the newsletter for more stuff like this.