AWS Load Balancer


·       A load balancer accepts incoming traffic from clients and routes requests to EC2 instances (targets).

·       The load balancer also monitors the health of its registered targets and ensures that it routes traffic only to healthy targets. When the load balancer detects an unhealthy target, it stops routing traffic to that target, and resumes routing traffic to it when it detects that the target is healthy again.

LB Types (Application LB, Network LB, Classic LB):

·       An Internet-facing load balancer has a publicly resolvable DNS name.

·       Domain names for content on the EC2 instances served by the ELB are resolved by the Internet DNS server to the ELB DNS name (and hence its IP address). This is how traffic from the internet is directed to the ELB front end.

·       The Classic Load Balancer supports HTTP, HTTPS, TCP, and SSL. It supports IPv4, IPv6, and dual stack.

·       The Application LB distributes incoming application traffic across multiple targets, such as EC2 instances in multiple AZs. This increases the availability of our application.

·       The Network LB can handle volatile workloads and scale to millions of requests per second.
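The basic idea of spreading traffic across registered targets can be sketched as a simple round-robin loop; the target IPs below are made-up placeholders, not real instances, and a real ELB also weighs in health status and connection counts:

```shell
#!/bin/bash
# Round-robin sketch: each request goes to the next target in turn,
# the way a load balancer spreads traffic across registered instances.
targets=("10.0.1.10" "10.0.2.11")   # placeholder private IPs in two AZs
for request in 1 2 3 4; do
  idx=$(( (request - 1) % ${#targets[@]} ))
  echo "request $request -> target ${targets[$idx]}"
done
```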



ELB Listener:

·       An ELB listener is the process that checks for connection requests.

·       We can configure the protocol and port on which our ELB listener listens for connection requests. Frontend listeners are configured with the protocol/port for traffic from the client to the load balancer, and backend listeners are configured with the protocol/port for traffic from the ELB to the EC2 instances.

·       It may take some time for the registration of EC2 instances under the ELB to complete.

·       Registered EC2 instances are those defined under the ELB.

·       The ELB has nothing to do with outbound traffic initiated by the registered EC2 instances, whether destined for the internet or for other instances within the VPC.

·       The ELB handles only inbound traffic destined for the registered EC2 instances, and the corresponding return traffic.

·       We are charged hourly (including partial hours) once our ELB is active. If we no longer need the ELB and do not want to be charged, we can delete it. Before deleting the ELB, it is recommended that we point Route 53 somewhere other than the ELB.

·       Deleting the ELB does not affect or delete the EC2 instances registered with it.

·       The ELB forwards traffic to eth0 of each registered instance; if a registered instance has multiple IP addresses on eth0, the ELB routes traffic to its primary IP address.
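From the CLI, a classic load balancer and its listener (frontend port mapped to backend port) can be created in a single call. This is a hedged sketch: the load balancer name, subnet ID, and security group ID below are placeholders you would replace with your own:

```shell
# Create a classic load balancer whose listener accepts HTTP on port 80
# from clients and forwards to the instances' HTTP port 80.
# my-classic-lb, subnet-0abc1234 and sg-0abc1234 are placeholder values.
aws elb create-load-balancer \
  --load-balancer-name my-classic-lb \
  --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" \
  --subnets subnet-0abc1234 \
  --security-groups sg-0abc1234
```

The command prints the load balancer's DNS name, which is what clients (or Route 53) should point at.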

How the LB Finds Unhealthy Instances:

·       ELB supports only IPv4 addresses in a VPC. To ensure the ELB service can scale its nodes in each AZ, make sure the subnet defined for the LB is at least a /27 in size and has at least 8 available IP addresses for the ELB nodes to use as they scale.

·       For fault tolerance, it is recommended that we distribute our registered EC2 instances across multiple AZs within the VPC's region. If possible, allocate the same number of registered instances in each AZ.

·       The LB also monitors the health of its registered instances and ensures that it routes traffic only to healthy instances. A healthy instance shows as ‘healthy’ under the ELB.

·       When the ELB detects an unhealthy instance, it stops routing traffic to that instance. An unhealthy instance shows as ‘unhealthy’ under the ELB.

·       By default, the AWS console uses an HTTP ping on port 80 for the health check.

·       Registered instances must respond with an “HTTP 200 OK” message within the timeout period, or they are considered unhealthy.

·       The AWS API uses a TCP ping on port 80 for the health check.

·       The response timeout is 5 seconds (range 2-60 seconds).

Health check interval: the period between health checks - default 30 sec (range 5-300 sec).

Unhealthy threshold: the number of consecutive failed health checks that must occur before the instance is declared unhealthy (range 2-10) - default 2.

Healthy threshold: the number of consecutive successful health checks that must occur before the instance is considered healthy again (range 2-10) - default 10.
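The threshold counting above can be sketched as a small script. The probe results are simulated, and the thresholds of 2 match the values used in the lab below rather than the defaults:

```shell
#!/bin/bash
# Simulate an ELB health checker: consecutive failed probes mark the target
# OutOfService; consecutive successful probes bring it back InService.
UNHEALTHY_THRESHOLD=2
HEALTHY_THRESHOLD=2
probes=(200 500 500 200 200)   # simulated HTTP response codes from the target
fails=0; oks=0; state="InService"
for code in "${probes[@]}"; do
  if [ "$code" -eq 200 ]; then
    oks=$((oks + 1)); fails=0
    if [ "$state" = "OutOfService" ] && [ "$oks" -ge "$HEALTHY_THRESHOLD" ]; then
      state="InService"
    fi
  else
    fails=$((fails + 1)); oks=0
    if [ "$fails" -ge "$UNHEALTHY_THRESHOLD" ]; then
      state="OutOfService"
    fi
  fi
  echo "probe=$code consecutive_fails=$fails state=$state"
done
```

After the two failed probes the target goes OutOfService, and it returns InService only after two consecutive 200s, not after the first one.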

Target Group:

·       Target groups are logical groupings of targets behind the load balancer.

·       A TG can exist independently of the load balancer.

·       A TG can be associated with an Auto Scaling group (ASG).

·       A TG can contain up to 200 targets.
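For the newer Application/Network load balancers, a target group is created and populated from the CLI roughly as follows; the VPC ID and instance IDs are placeholders, and the target group ARN comes from the first command's output:

```shell
# Create a target group for HTTP on port 80, then register two instances.
# vpc-0abc1234, i-0aaa1111 and i-0bbb2222 are placeholder IDs.
aws elbv2 create-target-group \
  --name my-targets \
  --protocol HTTP --port 80 \
  --vpc-id vpc-0abc1234

aws elbv2 register-targets \
  --target-group-arn <target-group-arn-from-previous-output> \
  --targets Id=i-0aaa1111 Id=i-0bbb2222
```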

LAB – How to set up a load balancer:

Step 1: Create a Linux machine, launch it, and access it (steps above).

Step 2: Run the following commands to install the web package.

sudo su                          # switch to root

yum update -y                    # apply package updates

yum install httpd -y             # install the Apache web server

cd /var/www/html                 # Apache's document root

echo "MyGoogle-1" > index.html   # create a test page identifying this server

service httpd start              # start Apache

chkconfig httpd on               # start Apache automatically on boot

Step 3: Access the web server by using the public IP.

Step 4: Launch one more Linux machine and install the web package (repeat Step 2 on it).

Step 5: Create the load balancer.

Search for "Load Balancer" - select Classic Load Balancer.

Load Balancer Name <put the name> - Next - select an existing security group - Next - configure the health check:

Response Timeout - 2 seconds

Interval - 5 seconds

Unhealthy threshold - 2

Healthy threshold - 2

Next - attach both the instances - Next - Next - Create

Step 6: Access the load balancer by using its DNS name.
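One way to watch the balancing is to request the DNS name repeatedly; with the two instances serving different pages, the responses should alternate. The DNS name below is a placeholder for the one shown on the load balancer's Description tab:

```shell
# Fetch the page several times through the load balancer; the body should
# alternate between the two instances' index.html contents.
ELB_DNS="my-classic-lb-1234567890.us-east-1.elb.amazonaws.com"   # placeholder
for i in 1 2 3 4; do
  curl -s "http://$ELB_DNS/"
done
```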

Step 7: If one server is down, the load balancer should redirect traffic to the other server. To simulate a failure, we can either:

1) stop/terminate the machine, or

2) remove/rename the file.

Let us remove the index.html file on machine 1.

Go to PuTTY:

ls (to see the list of files)

# rm index.html

Now, access the load balancer; traffic should be redirected to the 2nd server.

How can we know which instance is down?

Go to the load balancer - Instances tab, where we can see the status is Out of Service.
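The same Out of Service status can also be read from the CLI; "my-classic-lb" below is a placeholder for the load balancer name you chose in Step 5:

```shell
# Show the health state (InService / OutOfService) of every instance
# registered with the classic load balancer.
aws elb describe-instance-health --load-balancer-name my-classic-lb
```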

Now let us recreate the index.html file:

# echo "MyGoogle-1" > index.html

Now the load balancer will start sending traffic to the 1st server again.

