Nginx and Load Balancing

In this article, we will look at setting up a load balancer using Nginx.

Nginx is an open-source web server and reverse proxy that is frequently used for load balancing. We will also use simple-web, a small web server that reports the source and destination IP addresses of each request, which makes it easy to verify which backend served a request. Conveniently, simple-web is also available as a Docker image.

We will start by adding instructions to our docker-compose.yaml for setting up the simple-web containers.

version: '3.9'
services:
  webapp:
    image: yeasy/simple-web:latest
    ports:
      - "8080-8085:80"
    networks:
      - mynetwork
networks:
  mynetwork:

We can scale the webapp service using the --scale flag of docker compose up. For example,

docker compose up --scale webapp=5 -d

The above command brings up the services defined in docker-compose.yaml and starts 5 instances of the webapp service, as requested by the --scale flag. Each instance is published on one of the host ports in the 8080-8085 range declared in the compose file.
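Before moving on, it is worth confirming that the replicas are running and checking which host port each one received (a quick sanity check; the exact port assignments within the 8080-8085 range can vary):

docker compose ps            # list the replicas and their published host ports
curl http://localhost:8080   # hit one replica directly; the response reports its IP details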

Before we add instructions to docker-compose.yaml for setting up Nginx, we need to create the configuration file that will be used by the Nginx instance. Let us proceed with that.

events {
}

http {
    upstream loadbalancer {
        server host.docker.internal:8080;
        server host.docker.internal:8081;
        server host.docker.internal:8082;
        server host.docker.internal:8083;
        server host.docker.internal:8084;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://loadbalancer/;
        }
    }
}

Let us look at the configuration a bit more closely. The upstream block defines the pool of backend servers that requests are distributed across; in the above example, 5 servers are configured, one for each published simple-web instance. The server block listens on port 80 and, for every request matching location /, proxies it to the upstream group via proxy_pass.
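To actually run this configuration alongside the webapp containers, an Nginx service can be added under the existing services: section of docker-compose.yaml. The sketch below is one way to wire it up, assuming the configuration above is saved as nginx.conf next to the compose file (the file name, mount path, and extra_hosts entry are assumptions, not part of the original setup):

  nginx:
    image: nginx:latest
    ports:
      - "80:80"
    volumes:
      # assumed file name: the configuration above saved as ./nginx.conf,
      # mounted as the container's main Nginx configuration
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    extra_hosts:
      # needed on Linux so host.docker.internal resolves inside the container;
      # Docker Desktop on Windows/macOS provides this name automatically
      - "host.docker.internal:host-gateway"
    depends_on:
      - webapp
    networks:
      - mynetwork

With this in place, docker compose up --scale webapp=5 -d starts both the backends and the load balancer, and Nginx becomes reachable on port 80 of the host.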

Note that we haven’t explicitly specified the behavior of the load balancer. By default, Nginx uses the round-robin method for load balancing. You can favour certain servers by using the weight parameter. For example,

upstream loadbalancer {
    server host.docker.internal:8080 weight=2;
    server host.docker.internal:8081;
    server host.docker.internal:8082;
    server host.docker.internal:8083;
    server host.docker.internal:8084;
}

As per the above configuration, the first server would receive roughly twice as many requests as each of the other servers. You can now access the simple-web installation through the load balancer and see that a different backend IP is reported on different requests (in proportion to the weights).
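A quick way to observe this from the command line (assuming the stack is up and Nginx is listening on port 80 of the host) is to send a handful of requests through the load balancer and compare the IP details reported in each response:

# fire several requests through Nginx; the backend details reported by
# simple-web should rotate across the containers, skewed by any weights
for i in $(seq 1 10); do
  curl -s http://localhost/
  echo
done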
