Load Balancers for High Availability
Load balancing spreads the processing load over multiple servers to ensure availability when
the processing load increases. Many web-based applications use load balancing for higher
availability.
A load balancer can optimize and distribute data loads across multiple computers or multiple
networks. For example, if an organization hosts a popular web site, it can use multiple servers hosting
the same web site in a web farm. Load-balancing software distributes traffic equally among all the
servers in the web farm.
The term load balancer makes it sound like it’s a piece of hardware, but a load balancer can be
hardware or software. A hardware-based load balancer accepts traffic and directs it to servers based
on factors such as processor utilization and the number of current connections to the server. A
software-based load balancer uses software running on each of the servers in the load-balanced
cluster to balance the load.
Load balancing primarily provides scalability, but it also contributes to high availability.
Scalability refers to the ability of a service to serve more clients without any decrease in
performance. Availability ensures that systems are up and operational when needed. By spreading the
load among multiple systems, load balancing ensures that individual systems are not overloaded, which
increases overall availability.
Consider a web server that can serve 100 clients per minute; if more than 100 clients connect
in that time, performance degrades. You need to either scale up or scale out to serve more clients. You
scale a server up by adding resources, such as processors and memory, and you scale out
by adding additional servers behind a load balancer.
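As a rough sketch of the scale-out arithmetic, you can estimate how many identical servers a load balancer needs for a given demand. The function name, the 450-client demand figure, and the default capacity below are illustrative assumptions, not figures from the book:

```python
import math

def servers_needed(clients_per_minute, capacity_per_server=100):
    """Estimate how many identical servers are needed so that no
    single server exceeds its per-minute capacity."""
    return max(1, math.ceil(clients_per_minute / capacity_per_server))

# 450 clients per minute against servers that handle 100 each -> 5 servers
print(servers_needed(450))  # 5
```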
Figure 9.2 shows an example of a load balancer with multiple web servers. Each web server
includes the same web application. Some load balancers simply send new clients to the servers in a
round-robin fashion. The load balancer sends the first client to Server 1, the second client to Server
2, and so on. Other load balancers automatically detect the load on individual servers and send new
clients to the least used server.
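A minimal sketch of the two scheduling approaches just described, round-robin and least-used (often called least connections). The server names and connection counts are hypothetical; real load balancers implement this logic in dedicated hardware or in software:

```python
import itertools

servers = ["server1", "server2", "server3"]

# Round-robin: cycle through the servers in order for each new client.
round_robin = itertools.cycle(servers)

def next_round_robin():
    return next(round_robin)

# Least connections: track active connections and pick the least-used server.
active_connections = {s: 0 for s in servers}

def next_least_connections():
    server = min(active_connections, key=active_connections.get)
    active_connections[server] += 1
    return server

# The first three clients under round-robin go to server1, server2, server3 in turn.
print([next_round_robin() for _ in range(3)])
print(next_least_connections())  # server1 (all servers tied at 0 connections)
```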
Figure 9.2: Load balancing
An added benefit of many load balancers is that they can detect when a server fails. If a server
stops responding, the load-balancing software no longer sends clients to this server. This contributes
to the overall high availability of the load-balanced service.
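As a sketch of that failure-detection behavior, a load balancer might periodically probe each server and drop unresponsive ones from the rotation. The server addresses, the /health path, and the timeout below are illustrative assumptions:

```python
import urllib.request

servers = ["http://10.0.0.11", "http://10.0.0.12", "http://10.0.0.13"]

def healthy_servers(candidates, timeout=2):
    """Return only the servers that answer an HTTP health probe;
    servers that fail the probe are left out of the rotation."""
    alive = []
    for url in candidates:
        try:
            urllib.request.urlopen(url + "/health", timeout=timeout)
            alive.append(url)
        except OSError:
            pass  # server did not respond; stop sending clients to it
    return alive

# New clients are distributed only among the servers that passed the probe.
print(healthy_servers(servers))
```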
When servers are load balanced, it's called a load-balanced cluster, but it is not the same as a
failover cluster. A failover cluster provides high availability by ensuring another node can pick up the
load for a failed node. A load-balanced cluster provides high availability by sharing the load among
multiple servers. When systems must share the same data storage, a failover cluster is appropriate.
However, when the systems don't need to share the same storage, a load-balancing solution is more
appropriate and less expensive. Also, it's relatively easy to add additional servers to a load-balancing
solution.
Remember this
Failover clusters are one method of server redundancy, and they provide high
availability for servers. They can remove a server as a single point of failure.
Load balancing increases the overall processing power of a service by
sharing the load among multiple servers. Load balancers also help ensure
availability when a service receives an increased number of requests.
Source: Darril Gibson, Security+ book