Pune Developer Community – Docker How To’s: Part 2

2. Best Practices for Deploying Production-Level Web Services using Docker

Tested Infrastructure

Platform | Number of Instances | Reading Time
Play with Docker | 1 | 10 min


  • Create an account with Docker Hub
  • Open the PWD platform in your browser
  • Alternatively, use an AWS EC2 instance (Ubuntu) or any Linux-based instance
  • Click on Add New Instance on the left side of the screen to bring up an Alpine OS instance on the right side

In this article I'm going to show you a better way to deploy production web services: in essence, we are going to look at the technique of running multiple production containers using Docker.
Once you work with Docker in your day-to-day work, you will quickly come across Docker Compose. Docker really is a magical tool!
We are gifted with tools in this modern era and we should utilize them to deliver services seamlessly.

Traditional approach (Know the existing things)


In the old approach, these pieces are installed directly on a VPS:

1. Application Server (Node.js, Java or Python)

2. Proxy Server (Apache, Nginx)

3. Cache Server (Redis, Memcached)

4. Database Server (MySQL, PostgreSQL, MongoDB, etc.)

The old approach is no longer preferred: automation is taking over and everyone is using CI/CD for deployment. With containers we can also capture a snapshot of a given environment, which reduces the risk of deploying services under the wrong set of conditions.

According to the microservice approach, we split up tightly coupled logic and deploy the pieces separately. In the diagram above, that means every application server becomes more independent and talks to the others via HTTP or RPC. But it doesn't mean you need to spin up X number of VPS instances to run the services.

Containers provide a nice way to simulate this separation, with isolation, within the same machine or server. It's the era of containerization.
If you wrote a service and planning to deploy it on AWS EC2 or any cloud VPS, don’t deploy your stuff as a single big chunk. Instead, run that distributed code in the containers. We are going to see how to containerize our deployment using Docker and Docker Compose.
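As a sketch of what containerizing one such service might look like, here is a hypothetical Dockerfile for a Node-based app. The file names and start command are assumptions; the real Dockerfiles live under ./app and ./nginx in the repository used later in this article.

```dockerfile
# Hypothetical Dockerfile for the app service (a sketch, not the repo's actual file).
FROM node:alpine              # small base image with Node.js
WORKDIR /usr/src/app
COPY package*.json ./         # copy manifests first so the install layer is cached
RUN npm install
COPY . .                      # then copy the application source
EXPOSE 8080                   # the port the service listens on
CMD ["node", "server.js"]     # assumed entry point
```

Building dependencies in their own layer means source-only changes reuse the cached npm install, which keeps rebuilds fast.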

Let's see it in practice.

Step 1 – Install Docker on an Ubuntu AMI instance

1. We need an AWS account (http://aws.amazon.com/).

2. Choose EC2 from the Amazon Web Services Console.

3. On the Choose an Amazon Machine Image (AMI) menu of the AWS Console, click the Select button for a 64-bit Ubuntu image (i.e. Ubuntu Server 14.04 LTS).

4. For testing we can use the default (possibly free) t2.micro instance (see AWS pricing for details).

5. Click the Next: Configure Instance Details button at the bottom right.

6. On the Configure Instance Details step, expand the Advanced Details section.

7. Under User data, select As text.

8. Enter #include https://get.docker.com into the instance User data. CloudInit, which is part of the Ubuntu image we chose, will bootstrap Docker by running the shell script located at that URL.

9. We may need to set up our Security Group to allow SSH. By default all incoming ports to our new instance are blocked by the AWS Security Group, so we might just get timeouts when we try to connect.

10. Create a new key pair when prompted (or choose an existing one) and download the .pem file; we will need it to SSH in.

11. After a few more standard choices, where the defaults are probably fine, our AWS Ubuntu instance with Docker should be running!

12. Installing with get.docker.com (as above) will create a service named lxc-docker. It will also set up a docker group, and we may want to add the ubuntu user to it so that we don't have to use sudo for every Docker command.
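Adding the user to the docker group can be sketched as follows; the check is guarded so it degrades gracefully on machines without an ubuntu user, and you need to log out and back in for the group change to take effect:

```shell
# Add the ubuntu user to the docker group so Docker commands
# no longer require sudo (a sketch for the EC2 instance above).
if id ubuntu >/dev/null 2>&1; then
  # -n keeps sudo non-interactive; the fallback message avoids a hard failure
  sudo -n usermod -aG docker ubuntu || echo "could not modify groups"
else
  echo "no ubuntu user on this machine"
fi
```

After re-logging in, plain `docker ps` should work without sudo.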

Step 2 – Run the web service in production

1. Connect to the Ubuntu instance using SSH.

2. Clone the repository:
git clone https://github.com/sangam14/web_services.git

3. Check whether Docker Compose is installed.

4. Change directory to webservices and bring the app up using Docker Compose.

5. Browse to the public IP of the instance to see the final output.
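The Docker Compose check can be sketched as a quick sanity probe; each command prints a version string when the tool is installed and a message when it is not:

```shell
# Verify that Docker and Docker Compose are installed before deploying.
if command -v docker >/dev/null 2>&1; then
  docker --version
else
  echo "docker is not installed"
fi

if command -v docker-compose >/dev/null 2>&1; then
  docker-compose --version
else
  echo "docker-compose is not installed"
fi
```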

Anatomy of the web service

Steps to follow:

Clone the Repository:

git clone https://github.com/sangam14/web_services.git

Change directory to webservices as shown below:

cd webservices 

Bring up the app using Docker Compose:

docker-compose up 

On PWD, click on the port number that appears to open the health check page.

Health check by curl:

$ curl http://localhost/api/v1/healthcheck
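The same check can be scripted for monitoring; `-f` makes curl exit non-zero on HTTP errors, and the fallback message keeps the sketch usable even when the stack is not up:

```shell
# Scripted health check (a sketch; assumes the compose stack is
# listening on port 80 of this host).
if command -v curl >/dev/null 2>&1; then
  curl -fs http://localhost/api/v1/healthcheck || echo "service not reachable"
else
  echo "curl is not installed"
fi
```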

Important thing

As you can see, we are creating a simple Express service with a health check endpoint.


Check the nginx configuration file:

upstream service {
    server app:8080;
}
nginx and app are both attached to mynetwork, so one can reach the other by its service name; DNS resolution is already taken care of by Docker. Without this privilege, we would need to hard-code an IP in the Nginx configuration file or assign a static IP from the subnet in the docker-compose.yaml file. This is a wonderful thing about Docker networking.
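A fuller sketch of what the Nginx site configuration might look like; only the upstream fragment above comes from the repository, while the server block here is an assumption:

```nginx
upstream service {
    server app:8080;             # "app" resolves via Docker's embedded DNS
}

server {
    listen 80;                   # the only port published to the outside world

    location / {
        proxy_pass http://service;   # forward every request to the app container
    }
}
```

Keeping the upstream in its own block makes it easy to later add more `server` lines for load balancing across app replicas.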

version: "2"
services:
    nginx:
        build: ./nginx
        ports:
          - "80:80"
        networks:
          - mynetwork
    app:
        build: ./app
        networks:
          - mynetwork
        expose:
          - 8080
networks:
    mynetwork:
        driver: bridge

By default, all the containers we create fall under the same internal IP range (subnet). Docker networking allows us to create custom networks with additional properties like automatic DNS resolution.

In the above YAML file, we are creating a network called mynetwork. The services (containers) app and nginx lie in the same subnet and can communicate with each other without exposing the web service container to the outside world. In this way, we get a single entry point to our web service: the Nginx service. If anyone tries to access the app service directly, they cannot, because it is not published. This actually secures our application.
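One can confirm the wiring from the Docker host; the network name below is an assumption, since Compose prefixes network names with the project directory name:

```shell
# List which containers are attached to the user-defined network (sketch).
if command -v docker >/dev/null 2>&1; then
  docker network inspect webservices_mynetwork \
    --format '{{range .Containers}}{{.Name}} {{end}}' 2>/dev/null \
    || echo "network not found (is the stack up?)"
else
  echo "docker is not installed"
fi
```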


Announcing the winner of the Docker Quiz

Winner of the Docker Quiz: Rajkumar Yadav (Docker swag)
