What is Load Balancing?

April 6, 2021


Load balancing refers to dynamically distributing incoming requests across a group of backend servers, also referred to as a server farm or server pool.

High-traffic websites will serve hundreds of thousands, if not millions, of concurrent requests from users or clients and return the correct text, video, images, or application data, all in a timely and reliable manner. To cost-effectively scale to meet these high volumes, modern computing best practice generally requires the addition of more servers.

The best way to think of a load balancer is as a "traffic cop" that sits in front of your servers and routes client requests across all the servers capable of fulfilling them, in a manner that increases speed and capacity and ensures that no one server is overworked, which would degrade performance. If a single server goes down, the load balancer redirects traffic to the remaining online servers. When a new server is added to the server group, the load balancer automatically starts sending incoming requests to it.

In this manner, a load balancer performs the following operations:

  • Distributes client requests or network load efficiently across multiple servers.
  • Ensures high availability and reliability by sending requests only to servers that are currently online.
  • Provides the flexibility to add or remove servers as demand changes (see the sketch after this list).
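
As an illustration, here is a minimal Python sketch of those operations (the server addresses and the /health endpoint are made-up assumptions, not a production implementation): the balancer cycles through a pool of backends, skips any server that is not currently online, and adding capacity is simply a matter of adding another server to the pool.

    import urllib.request

    # Hypothetical backend pool; in practice these are your real application servers.
    backends = [
        "http://10.0.0.1:8080",
        "http://10.0.0.2:8080",
        "http://10.0.0.3:8080",
    ]

    _next = 0  # index of the next server to try

    def is_healthy(server: str) -> bool:
        """Treat a server as online if its /health endpoint answers with HTTP 200."""
        try:
            with urllib.request.urlopen(server + "/health", timeout=1) as resp:
                return resp.status == 200
        except OSError:
            return False

    def pick_backend() -> str:
        """Return the next online server, cycling through the pool."""
        global _next
        for _ in range(len(backends)):
            server = backends[_next % len(backends)]
            _next += 1
            if is_healthy(server):
                return server
        raise RuntimeError("no healthy backends available")

    # Adding or removing capacity is just editing the pool:
    # backends.append("http://10.0.0.4:8080")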

Benefits of Load Balancing

These are just some of the main benefits that you'll see when load balancing:

  • Reduced downtime
  • Scalability
  • Redundancy
  • Flexibility
  • Efficiency

Load Balancing Algorithms

Different load balancing algorithms provide different benefits; the choice of load balancing method depends on your requirements (a short sketch of a few of these follows the list):

  • Round Robin - Incoming requests are distributed across the group of servers sequentially.
  • Least Connections - A new request is sent to the server with the fewest current connections to clients. The relative computing capacity of each server can also be factored in when determining which one has the fewest connections.
  • Hash - Distributes requests based on a key you define, such as the client IP address or the request URL.
  • IP Hash - The IP address of the client is used to determine which server receives the request.
  • Random with Two Choices - Picks two servers at random, then sends the request to whichever of the two is selected by the Least Connections algorithm.
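
To make the differences concrete, here is a small Python sketch of a few of these strategies (the server names and connection counts are invented for the example; a real load balancer would track them per request):

    import hashlib
    import random

    # Hypothetical snapshot of active connection counts per server.
    connections = {"app-1": 12, "app-2": 3, "app-3": 7}
    servers = list(connections)

    _rr_index = 0

    def round_robin() -> str:
        """Hand out servers in a fixed rotation, one after another."""
        global _rr_index
        server = servers[_rr_index % len(servers)]
        _rr_index += 1
        return server

    def least_connections() -> str:
        """Pick the server currently handling the fewest connections."""
        return min(servers, key=lambda s: connections[s])

    def ip_hash(client_ip: str) -> str:
        """Map the same client IP to the same server every time (stable hash)."""
        digest = int(hashlib.md5(client_ip.encode()).hexdigest(), 16)
        return servers[digest % len(servers)]

    def random_two_choices() -> str:
        """Pick two servers at random, keep the one with fewer connections."""
        a, b = random.sample(servers, 2)
        return a if connections[a] <= connections[b] else b

    print(round_robin())           # "app-1" on the first call
    print(least_connections())     # "app-2" (only 3 connections)
    print(ip_hash("203.0.113.9"))  # always the same server for this client
    print(random_two_choices())    # whichever of the two sampled servers is less busy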

Summary

We hope that this short introduction to load balancing has helped you understand the basic principles and concepts of what load balancing is, and how it works.
