Network Traffic Management Techniques in Cloud Computing
Network Traffic Management:
Managing the various resource instances shared among users in a cloud computing environment, according to user requirements, is called network traffic management. Cloud computing generally relies on adaptive traffic management and control techniques.
In a VDC, just as in a CDC, network traffic must be controlled to optimize both the availability and the performance of networked resources. Load balancing is the main goal of controlling network traffic. It is a technique for distributing workload across multiple virtual or physical machines and the corresponding network connections, so that none of these resources is over- or under-utilized and performance is optimized. It is provided by dedicated hardware or software.
Network Traffic Management Techniques:
In cloud computing, load balancing is a process that distributes excess dynamic local workload evenly across all the nodes. It is used to achieve high user satisfaction and a high resource utilization ratio, ensuring that every node in the system performs well. The traffic management techniques are as follows:
Technique-1: Balancing Client Workload – Hardware:
Client load balancing is normally provided by dedicated hardware or software, such as a router or a switch. Hardware-based load balancing uses a device, such as a physical router or switch, to distribute client traffic across multiple servers. The load balancing device sits between the Internet and the server cluster, so all client traffic passes through it. Clients use a single virtual IP address, which abstracts the real IP addresses of all servers in the cluster.
The real IP addresses of the servers are known to the load balancing device, which decides where to send each request. The decision is normally governed by a load balancing policy, such as round-robin across the servers. Weighted round-robin allows an administrator to assign a performance weight to each server; servers with higher weight values receive a larger fraction of the connections during each round-robin pass. The least-connections policy sends each new request to the server with the fewest active connections, keeping the number of connections similar across all servers.
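As a rough illustration, the Python sketch below shows how a weighted round-robin and a least-connections policy might pick a back-end server. The server addresses and weights are hypothetical and the logic is deliberately simplified; it is not the implementation of any particular load balancer.

```python
import itertools

class WeightedRoundRobin:
    """Cycle through servers, repeating each one according to its weight."""
    def __init__(self, weights):
        # weights: dict mapping server address -> integer weight (hypothetical values)
        expanded = [srv for srv, w in weights.items() for _ in range(w)]
        self._cycle = itertools.cycle(expanded)

    def next_server(self):
        return next(self._cycle)

class LeastConnections:
    """Pick the server that currently has the fewest active connections."""
    def __init__(self, servers):
        self.active = {srv: 0 for srv in servers}

    def next_server(self):
        srv = min(self.active, key=self.active.get)
        self.active[srv] += 1
        return srv

    def release(self, srv):
        self.active[srv] -= 1

# Hypothetical server pool: weights express relative capacity.
wrr = WeightedRoundRobin({"10.0.0.1": 3, "10.0.0.2": 1})
print([wrr.next_server() for _ in range(8)])   # 10.0.0.1 appears three times as often
```

In this sketch the weight simply controls how often a server reappears in the rotation; a production device would also track server health and remove failed members from the pool.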
Technique-2: Balancing Client Workload – Software:
Software-based client load balancing is implemented by software running on a virtual or physical machine. DNS server load balancing is a typical example. In a DNS server, multiple IP addresses may be configured for a single domain name. In this way, a cluster of servers can be mapped to one domain name, and the DNS server resolves the name to a different server IP address in round-robin fashion. Clients accessing the same domain name therefore receive different IP addresses and send their requests to different servers.
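A minimal sketch of the DNS round-robin idea follows, assuming a hypothetical record set for a name such as app.example.com; it only models how successive lookups return different addresses, not a real DNS server.

```python
from collections import deque

class RoundRobinDNS:
    """Toy DNS resolver that rotates the addresses it returns for a name."""
    def __init__(self):
        # Hypothetical zone data: one name mapped to a cluster of servers.
        self.records = {"app.example.com": deque(["192.0.2.10",
                                                  "192.0.2.11",
                                                  "192.0.2.12"])}

    def resolve(self, name):
        addrs = self.records[name]
        answer = addrs[0]
        addrs.rotate(-1)   # the next client sees a different first address
        return answer

dns = RoundRobinDNS()
for _ in range(4):
    print(dns.resolve("app.example.com"))
# 192.0.2.10, 192.0.2.11, 192.0.2.12, 192.0.2.10, ...
```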
Technique-3: Storm Control:
A storm arises when frames flood a LAN or VLAN segment, creating unnecessary traffic and degrading network performance. A storm can have many causes, such as a denial-of-service attack from the user side, a fault in a protocol implementation, or an error in the network configuration.
Storm control is a technique that prevents normal network traffic on a LAN or VLAN from being disrupted by a storm, thereby improving network performance. When storm control is enabled on a supported LAN switch, the switch monitors all inbound frames on a switch port and determines whether each frame is broadcast, multicast, or unicast. It then counts the total number of frames of a given type received on the port over a one-second interval and compares this count with a pre-configured storm control threshold. When the threshold is reached, the switch port blocks the traffic and drops subsequent frames during the next interval. The port leaves the blocking state once the traffic rate falls below the threshold. The storm control threshold can also be set as a fraction of the port bandwidth. It can, however, cause an uneven distribution of resources.
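The following Python sketch models the per-port counting and blocking behaviour described above, under simplified assumptions (a single threshold in frames per second, a fixed one-second window, and hypothetical frame types); real switch firmware is considerably more involved.

```python
import time
from collections import defaultdict

class StormControl:
    """Toy per-port storm control: count frames of each type per interval
    and block a traffic type once it exceeds a configured threshold."""
    def __init__(self, threshold_fps, interval=1.0):
        self.threshold = threshold_fps        # frames per second allowed per type
        self.interval = interval
        self.counts = defaultdict(int)        # frame type -> count in this window
        self.blocked = set()
        self.window_start = time.monotonic()

    def accept(self, frame_type):
        now = time.monotonic()
        if now - self.window_start >= self.interval:
            # New measurement window: a type stays blocked only if its rate
            # in the previous window was still above the threshold.
            self.blocked = {t for t, c in self.counts.items() if c > self.threshold}
            self.counts.clear()
            self.window_start = now
        self.counts[frame_type] += 1
        if frame_type in self.blocked or self.counts[frame_type] > self.threshold:
            return False                      # drop the frame
        return True                           # forward the frame

port = StormControl(threshold_fps=1000)
print(port.accept("broadcast"))               # True until the broadcast rate exceeds 1000/s
```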
Technique-4: NIC Teaming:
NIC teaming is a technique that logically groups the physical NICs connected to a virtual switch. It balances the traffic load across some or all of the physical NICs and provides failover in the event of a NIC failure or a network connection outage. NICs within a team may be configured as active or standby. Active NICs are used to forward frames, while standby NICs remain idle. Load balancing distributes all outbound network traffic across the active physical NICs, providing higher throughput than a single NIC could offer.
A standby NIC is not used to send traffic unless one of the active NICs fails. In the event of a NIC or connection failure, traffic from the failed connection fails over to another physical NIC. Failover and load balancing across NIC team members are governed by the policies of the virtual switch.
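A minimal sketch of the active/standby and failover behaviour is shown below. The NIC names and the source-MAC hashing policy are assumptions for illustration; actual virtual switches offer several teaming policies.

```python
class NicTeam:
    """Toy NIC team: hash-balance frames across active NICs and promote a
    standby NIC when an active NIC goes down (hypothetical NIC names)."""
    def __init__(self, active, standby):
        self.active = list(active)      # e.g. ["vmnic0", "vmnic1"]
        self.standby = list(standby)    # e.g. ["vmnic2"]

    def select_nic(self, src_mac):
        # A simple source-MAC hash keeps one VM's traffic on one uplink.
        return self.active[hash(src_mac) % len(self.active)]

    def fail(self, nic):
        # Remove the failed NIC and promote a standby NIC if one is available.
        if nic in self.active:
            self.active.remove(nic)
            if self.standby:
                self.active.append(self.standby.pop(0))

team = NicTeam(active=["vmnic0", "vmnic1"], standby=["vmnic2"])
print(team.select_nic("00:50:56:aa:bb:cc"))
team.fail("vmnic0")          # traffic fails over; vmnic2 becomes active
print(team.active)           # ['vmnic1', 'vmnic2']
```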
Technique-5: Limit and Share:
Shares and limits are controls used to manage different types of outbound network traffic, such as management, IP storage, VM, and VM migration traffic, when these traffic types contend for a physical NIC or NIC team. They provide enhanced levels of service for critical applications and prevent the workloads of I/O-critical applications from being slowed down by less critical applications. A limit, as its name suggests, caps the maximum bandwidth a traffic type may use on the NIC team; the value is normally specified in Mbps.
Shares specify the relative priority used to allocate bandwidth among different traffic types when they contend for a particular physical NIC. Shares ensure that each outbound traffic type receives a portion of the physical NIC's bandwidth in proportion to its priority. Shares are specified as numbers.
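As a rough illustration of how shares and limits interact, the sketch below divides a NIC's bandwidth in proportion to shares and then applies any per-type limit. The traffic types, share values, and limits are hypothetical, and the sketch does not redistribute bandwidth left unused by a capped type, as a real scheduler would.

```python
def allocate_bandwidth(nic_capacity_mbps, traffic):
    """Divide a NIC's bandwidth among contending traffic types in proportion
    to their shares, then apply any per-type limit.
    `traffic` maps type -> (shares, limit_mbps or None)."""
    total_shares = sum(shares for shares, _ in traffic.values())
    allocation = {}
    for kind, (shares, limit) in traffic.items():
        portion = nic_capacity_mbps * shares / total_shares
        allocation[kind] = min(portion, limit) if limit else portion
    return allocation

# Hypothetical configuration on a 10,000 Mbps uplink.
print(allocate_bandwidth(10000, {
    "management":   (10, 1000),   # low priority, capped at 1000 Mbps
    "ip_storage":   (50, None),   # high priority, no limit
    "vm_traffic":   (30, None),
    "vm_migration": (10, 2000),
}))
```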
Technique-6: Traffic Shaping:
Traffic shaping manages network bandwidth so that business-critical applications have the bandwidth they need to meet their quality of service. Three parameters are used in traffic shaping to throttle and shape the flow of network traffic: average bandwidth, peak bandwidth, and burst size.
Average bandwidth sets the permitted data transfer rate across a virtual switch or a port group, averaged over time. Because it is an average over time, the workload on a virtual switch may exceed the average bandwidth for short periods. The value set for peak bandwidth determines the maximum data transfer rate allowed across a virtual switch or a port group without dropping or queuing frames.
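A minimal token-bucket-style sketch of this idea follows, relating average bandwidth and burst size: tokens accrue at the average rate up to the burst size, and a frame is sent only if enough tokens are available. The parameter values are hypothetical, and peak bandwidth (which would additionally cap the short-term rate) is omitted for brevity.

```python
import time

class TrafficShaper:
    """Toy traffic shaper: tokens accrue at the average bandwidth, capped at
    the burst size; a frame is sent only if enough tokens are available."""
    def __init__(self, avg_bps, burst_bytes):
        self.avg_bps = avg_bps           # average bandwidth in bits per second
        self.burst = burst_bytes         # burst size in bytes
        self.tokens = burst_bytes        # start with a full bucket
        self.last = time.monotonic()

    def allow(self, frame_bytes):
        now = time.monotonic()
        # Replenish tokens at the configured average rate, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.avg_bps / 8)
        self.last = now
        if self.tokens >= frame_bytes:
            self.tokens -= frame_bytes
            return True                  # send now: within the shaped rate
        return False                     # queue or drop: would exceed the shaped rate

# Hypothetical shaping policy: 100 Mbps average bandwidth, 64 KB burst size.
shaper = TrafficShaper(avg_bps=100_000_000, burst_bytes=64_000)
print(shaper.allow(1500))                # a standard Ethernet frame fits within the burst
```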