Load Balancing: Design and Implementation
Introduction
Few would deny that computer technologies have penetrated nearly every sphere of human activity. The field is now an independent, constantly developing industry that continually seeks new and reasonable solutions. As a consequence, the stability of network traffic has become one of the central practical concerns of computer technologies, and a technology that regulates it has emerged: load balancing. The present paper proposes a new design and implementation of a load balancer.
More specifically, the paper first defines the term load balancing and then surveys recent findings regarding the design and implementation of load balancers. It goes on to justify the suggested solutions and to present the technical details. Because a hypothetical solution must be verified even when the motivation behind it is reasonable, the paper then conducts an experiment that processes real and synthetic data. Finally, it discusses the results and reaches a decision concerning the suggested solutions. With the thesis and layout of the paper outlined, it is time to proceed to the next section.
Definition
To start with, it is important to define the term load balancing. There is no firm consensus on its definition, so in this study the term denotes a process and technology that distributes traffic across numerous servers by means of network-based devices [2]; this definition is the most applicable to the present study. In its basic mechanism, a load balancer receives the traffic addressed to a particular website and directs it to various servers. The technology is convenient because hundreds of servers can operate under the same URL. Moreover, it presupposes certain related hardware and software, such as a multilayer switch and the Domain Name System (DNS).
In particular, load balancing serves the following functions, illustrated in the sketch below. It receives the network traffic directed to a certain website, divides that traffic into independent requests, and chooses the servers that will receive them. It also monitors whether servers are operating; otherwise, it excludes them from rotation. By the same token, it employs additional units in a fail-over scenario. Finally, a load balancer can read URLs, intercept cookies, and parse XML. As for the significance of this technology, load balancing optimizes the use of resources, maximizes throughput, diminishes response time, and prevents any single unit from being overloaded. In addition, using load balancing together with other components increases reliability [11].
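The dispatch loop described above can be sketched in a few lines of Python. The snippet is a minimal illustration, not a production balancer: the server addresses are hypothetical placeholders, and the health check is a stub standing in for a real probe.

```python
# A minimal sketch of round-robin dispatch with servers taken out of
# rotation when they fail a health check. Addresses are placeholders.
class RoundRobinBalancer:
    def __init__(self, servers):
        self.servers = list(servers)

    def healthy(self, server):
        # Stub health check; a real balancer would probe the server
        # periodically (e.g., a TCP connect or an HTTP GET).
        return True

    def pick(self):
        """Return the next healthy server, rotating through the pool."""
        in_rotation = [s for s in self.servers if self.healthy(s)]
        if not in_rotation:
            raise RuntimeError("no healthy servers in rotation")
        server = in_rotation[0]
        # Move the chosen server to the back so the pool rotates.
        self.servers.remove(server)
        self.servers.append(server)
        return server

balancer = RoundRobinBalancer(["10.0.0.1:80", "10.0.0.2:80", "10.0.0.3:80"])
for request_id in range(5):
    print(f"request {request_id} -> {balancer.pick()}")
```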
Load balancing originated from DNS-based round robin. This origin has been central to the development of the entire technology, which still utilizes the basic principles of DNS administration. Further evolution has brought a considerable increase in flexibility, availability, and scalability, and a wide range of related technologies has emerged: firewall load balancing, global server load balancing, clustering, and crossover technology [2]. With the term defined, it is time to turn to related findings regarding this technology.
Related Trends and Findings
Regarding the latest findings and trends in the design and implementation of load balancing, Xeon Phi based optimization deserves attention. It includes numerous techniques, of which this study has chosen the most significant. First, Xeon Phi provides a timing facility that distinguishes and optimizes real time, user time, and system time. Real time is elapsed wall-clock time, which could be traced with a stopwatch; user time is the time spent executing user-mode code; and system time is the time the system spends servicing that code. Second, a Xeon Phi based application aligns outbound and inbound data: it generates an improved vector for every data set on a 64-byte boundary. As a result, data moves through a clean loop pattern because the compiler does not need to create loop prologues [10]. In this way, Xeon Phi minimizes traffic redundancy.
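The distinction among the three timing categories can be made concrete with a short Python sketch. It is illustrative only and is not Xeon Phi-specific tooling; it simply measures the three quantities for one function call.

```python
import os
import time

def profile(fn, *args):
    """Report real (wall-clock), user, and system time for one call."""
    wall_start = time.perf_counter()
    cpu_start = os.times()            # holds .user and .system CPU times
    result = fn(*args)
    cpu_end = os.times()
    wall_end = time.perf_counter()
    print(f"real {wall_end - wall_start:.3f}s  "
          f"user {cpu_end.user - cpu_start.user:.3f}s  "
          f"sys {cpu_end.system - cpu_start.system:.3f}s")
    return result

# Example: a CPU-bound workload dominated by user time.
profile(lambda: sum(i * i for i in range(10**6)))
```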
In the same vein, Oliveira et al. note the significance of the Load Balancing Advisor, an authorized program that provides sensitive information concerning the availability, health, and performance of a particular server. It is therefore crucial to encrypt these data reliably [9]. TLS/SSL technologies implement such configurations and secure and control access to all communications with the Load Balancing Advisor. In this way, it is possible to create custom client certificates, amend configuration statements, and set different levels of access for different URLs. Generally speaking, these technologies focus on the security of the network as a whole and of every single server independently, and reliable security underpins the stable performance of a network.
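A minimal sketch of such a configuration, using Python's standard ssl module, is shown below. The certificate file paths are hypothetical placeholders; the point is that the server demands a client certificate signed by a trusted authority before any communication with the Advisor is admitted.

```python
import ssl

# A TLS server context that only admits clients presenting a trusted
# certificate, in the spirit of securing access to the Load Balancing
# Advisor. All file paths are illustrative placeholders.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="advisor.crt", keyfile="advisor.key")
context.verify_mode = ssl.CERT_REQUIRED          # demand a client certificate
context.load_verify_locations(cafile="trusted_clients.pem")

# A server socket wrapped with this context rejects, during the TLS
# handshake, any client whose certificate is not signed by the trusted CA:
#   tls_server = context.wrap_socket(plain_server_socket, server_side=True)
```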
As energy consumption increasingly becomes a central concern, the next element of the load balancing design is a technology that reduces power consumption: a power-proportional cluster consisting of a power-aware cluster manager and a set of heterogeneous machines [7]. This design exploits the power efficiency of currently available hardware, while transitioning mechanisms switch a given machine among active, low-power, and sleep states. In other words, the technology schedules periods of dynamic powering so that consumption is considerably lower. In this way, the researchers reduced power consumption by 63% by means of dynamic provisioning algorithms, and forecasts suggest the technology will reduce consumption by 90% with all data-center devices working in their standard regime. So far the researchers have developed a prototype, which successfully runs Wikipedia.
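The core idea of dynamic provisioning can be sketched as follows. This is a hypothetical illustration, not the algorithm of Krioukov et al.: the capacities, headroom factor, and state names are invented for the example. The manager wakes just enough servers to absorb the current request rate and parks the rest in a sleep state.

```python
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    capacity_rps: int          # requests/second one server can absorb
    state: str = "sleep"       # "active", "low-power", or "sleep"

def provision(cluster, demand_rps, headroom=1.2):
    """Wake the minimum number of servers covering demand * headroom."""
    needed = demand_rps * headroom
    active_capacity = 0
    for server in sorted(cluster, key=lambda s: -s.capacity_rps):
        if active_capacity < needed:
            server.state = "active"
            active_capacity += server.capacity_rps
        else:
            server.state = "sleep"     # idle machines stop drawing power
    return [s for s in cluster if s.state == "active"]

cluster = [Server(f"node{i}", capacity_rps=500) for i in range(8)]
# With 1200 req/s of demand, only 3 of the 8 nodes need to stay awake.
print([s.name for s in provision(cluster, demand_rps=1200)])
```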
Finally, it is increasingly difficult to ignore the influence of Cloud computing on the contemporary field of computer technologies, so recent innovations for load balancing within cloud-based environments also contribute to the overall state of the technology. One such innovation is CloudSim, a simulation toolkit that generates, simulates, and extends Cloud computing systems and application provisioning environments. The CloudSim toolkit models the behavior of such load balancing components as data centers, virtual machines, and resource provisioning policies, and implements techniques that generically extend provisioning with minimal effort [1]. In other words, CloudSim simplifies the provisioning mechanism of a Cloud environment. This is a significant finding, because Cloud-based networks are usually complicated and demand considerable effort, skill, and cost. Having outlined the most recent findings regarding the design and implementation of load balancing, it is essential to justify the orientation toward them.
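CloudSim itself is a Java toolkit; the sketch below is a hypothetical Python analogue conveying the idea it supports: simulated virtual machines plus a pluggable provisioning policy that can be swapped without touching the rest of the simulation. The VM capacities and task lengths are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class VirtualMachine:
    vm_id: int
    mips: int                      # processing capacity
    queued: list = field(default_factory=list)

def least_loaded_policy(vms, task_length):
    """A simple provisioning policy: send each task to the least-loaded VM."""
    target = min(vms, key=lambda vm: sum(vm.queued) / vm.mips)
    target.queued.append(task_length)
    return target

# Simulate a small datacenter receiving a burst of tasks (lengths in MI).
datacenter = [VirtualMachine(i, mips=1000 * (i + 1)) for i in range(3)]
for task_length in [400, 900, 250, 1200, 700]:
    vm = least_loaded_policy(datacenter, task_length)
    print(f"task {task_length} MI -> vm{vm.vm_id}")
```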
Motivation
Taking these findings into consideration, the motivation for the proposed solution addresses four considerations: rationality, security, economy of power, and Cloud computing orientation. Rationality implies the optimization of time use and data segmentation; hence the study suggests Xeon Phi technologies, which are particularly focused on time reduction and the alignment of data streams. Reduced time and reasonably arranged traffic routes will diminish the effort a load balancer spends regulating traffic within a network.
With regard to security, the study holds that the best protection is the one a user configures on his or her own. Accordingly, the study suggests the Load Balancing Advisor implemented over TLS/SSL. The Load Balancing Advisor protects data even though they are quite vulnerable; still, the study assumes that personal awareness of potential network dangers is a matter of professionalism, and the professionalism of network operators is taken as a given here. All in all, the Load Balancing Advisor will ensure the security of a particular network on a TLS/SSL platform because it enables operators to configure the protection layer themselves.
In reference to the reduction of energy consumption, the study returns to the original statement of the problem. Krioukov et al. suggest that a lack of power makes load balancing devices work less efficiently than they potentially can, which limits traffic balancing. Considering this point, the researchers assume that diminished energy consumption will drastically boost the performance of the entire network [7]. More broadly, in light of all these considerations, the technology of load balancing should strive for closer integration with cloud-based environments. The study considers Cloud computing to be the dominant information technology of the near future; for this reason, close integration with an optimized load balancing tool will simplify cloud-based technologies drastically, and load balancing will move to a new technological level.
Technical Details
Concerning the technical requirements for the suggested design and implementation, attention should first be paid to the general engine of the prospective technology. This engine should keep pace with the most recent applications. FTP protocols have to serve the engine because FTP handling demands more complex capacities: it relies on a content-switched architecture and is therefore flexible from the outset, and it can combine several functions within one configuration, which is the main technical requirement of the suggested solution. FTP protocol handling also serves both layers, user and system [6]. Taking these points into account, the ability to concentrate several functions in one configuration is the basic requirement for the outlined design and implementation of the load balancer, though one may argue that such a combination is neither reasonable nor feasible enough.
Conversely, with reference to the previous point, this study emphasizes the need for a widened capacity of FTP protocol handling. First, the primary functions of the suggested engine place a relatively high demand on resources. Second, the conjunctions have to remain reasonably flexible: an operator needs to amend parameters independently, without affecting the performance of the entire load balancer (see the sketch below). Third, the suggested load balancer is supposed to be intelligent: it has to rationalize its own time and energy consumption, that is, to be self-regulating. Finally, the overall feasibility of the load balancer needs to be demonstrated. An empirical experiment is clearly the best method of verifying feasibility; since the study is not able to conduct such an experiment directly, the following section works with existing and hypothetical data regarding the main components of the load balancer.
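A hypothetical sketch of the content-switched dispatch the engine would build on follows. Each rule inspects a request (here, only its URL path) and selects a backend pool, and any rule can be amended independently of the others; the pool names and paths are invented for the example.

```python
# Illustrative content-switching rules: first matching prefix wins.
ROUTING_RULES = [
    ("/static/", "static-pool"),      # cacheable assets
    ("/api/",    "api-pool"),         # application servers
    ("/upload/", "ftp-pool"),         # file-transfer backends
]

def switch(path, default_pool="web-pool"):
    """Return the backend pool for a request path."""
    for prefix, pool in ROUTING_RULES:
        if path.startswith(prefix):
            return pool
    return default_pool

for path in ["/api/v1/users", "/static/logo.png", "/index.html"]:
    print(f"{path} -> {switch(path)}")
```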
Experiment Results
Even though the motivation for the suggested design and implementation is sound, the feasibility of creating this engine is the primary concern of this study; this section therefore describes and processes the related data. To begin with, time reduction and data alignment are the key factors in Xeon Phi's thread speed. These metrics are significant because they are self-contained and underpin the performance of Xeon Phi from the start. As the Intel Developer Zone reports, the speed for one hardware thread is 5.42 [3]. Next, a standard security rate with the normal ratio of the Load Balancing Advisor is approximately 50.5. It is pivotal to mention that TLS/SSL technology implements and boosts the functions of the Load Balancing Advisor; as SSL Pulse reports, TLS is the dominant technology, with 42% of users utilizing it [4]. The security rate is therefore increased by 42% to account for TLS involvement. Similarly, the power consumption of the suggested power-proportional web cluster equals 69%. The orientation toward cloud-based technologies is measured by the simplicity rate of the load balancer: Microsoft TechNet suggests that simplicity designates the overall speed of the balancer and considers the 4-workload balance the most up-to-date and thus the quickest [8]. Hence, the single-thread speed rate is multiplied by 4.
Further, the study immerses these data in a hypothetical network environment, taking maximal variables: the bandwidth is 256, and load and delay are each 2200 (a so-called average maximum). According to EIGRP theory of operation, the standard feasibility metric of a network with the highest variables equals 47019776. The formula designed for the metric calculation is: metric = [K1 × bandwidth + (K2 × bandwidth)/(256 + load) + K3 × delay] × [K4/(reliability + K3)], where the K's are the components' rates and reliability is an implied rate of the multicast flow timer, which EIGRP estimates at 211 [4]. Substituting the values gives: [5.42 × 256 + ((50.5 + 50.5/100 × 42) × 256)/(256 + 2200) + 69 × 2200] × [(5.42 × 4)/(211 + 69)] = 88.45151849. This is quite an insufficient metric rate, and the next section discusses the result in order to contextualize the experiment.
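For readers who wish to rework the arithmetic, the formula can be expressed as a short function. The sketch below reproduces the formula and the experimental values exactly as stated in the text; the identifiers k1 through k4 are this sketch's names for the components' rates, not notation from the cited sources.

```python
def feasibility_metric(k1, k2, k3, k4, bandwidth, load, delay, reliability):
    """Evaluate the paper's feasibility formula as stated in the text."""
    first_bracket = k1 * bandwidth + (k2 * bandwidth) / (256 + load) + k3 * delay
    second_bracket = k4 / (reliability + k3)
    return first_bracket * second_bracket

k1 = 5.42          # speed of one hardware thread
k2 = 50.5 * 1.42   # security rate of 50.5 raised by 42% for TLS
k3 = 69            # power consumption of the web cluster (percent)
k4 = 5.42 * 4      # simplicity: single-thread speed times the 4-workload balance

print(feasibility_metric(k1, k2, k3, k4,
                         bandwidth=256, load=2200, delay=2200, reliability=211))
```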
Discussion
Since the feasibility metric proved insufficient, several points regarding the suggested design and implementation have emerged. First, the suggested design and implementation contain an excessive number of approaches, so the engine is nearly impossible to develop. Likewise, there was no balance among the four pillars of the suggested load balancer: the study did not align these requirements against a common standard. Simply put, time reduction, power economy, interactive security, and Cloud computing orientation each need to receive a particular share of the entire project, and the study failed to prioritize them. Generally speaking, the suggested design and implementation need an internal balance. Correspondingly, the study did not take into account the requirement for integration: the suggested engine would have to be developed on some platform, and as long as its basic functions differ from one another, that platform would have to integrate them according to their prioritization and balance. Ideally, the platform would also be able to change the proportions of functioning.
Second, these functions proved hard to combine, so the integrative platform would need a high capacity, which is likewise not feasible. As a consequence, it is necessary to choose the two functions that can potentially merge. They must meet certain standards and, even more importantly, be able to be developed independently, so that if the integration fails again, the project will still yield innovations that load balancing can utilize on their own. It remains to justify the choice of the two components for integration.
The first component is the power-proportional web cluster designed by Krioukov et al. It saves 69% of energy, which will clearly bring a considerable optimization of the overall performance of a load balancer, and its designers expect it to reach 90% energy savings. This cluster is thus an obvious means of sustainable performance. The tendency toward sustainable business is now at its peak, so such a function will be reasonable and popular with many organizations; in this way, the component serves global interests as well. In addition, the component already has a prototype that successfully runs the Wikipedia server and is available for the empirical experimentation this study lacks. Hence, the component requires only minor improvements before being integrated into the suggested load balancer.
The second component is the cloud-oriented CloudSim toolkit. The popularity of Cloud computing is increasingly difficult to ignore, which is why the study assumes it will become the leading trend in computer technologies, and its growing development should be supported. CloudSim can become a bridge between standard load balancing and the cloud-based environment. Moreover, Cloud computing technologies are complex and expensive, so simplifying them matters: CloudSim simplifies a cloud-based environment by extending application provisioning algorithms with minimal effort. In this way, the basic processes gain additional capacity within a cloud-based environment, and traffic transfers faster within a cloud network. A prototype of this component is also available. Since cloud-oriented technologies are the current trend, further development of the CloudSim toolkit may well attract investors and governmental organizations particularly interested in the improvement of this technology.
Conclusion
In conclusion, the study has arrived at the following suggestion: to create a load balancer that includes two major innovations. The first is the power-proportional web cluster, whose prototype has already been designed. The study has chosen this element because it saves 69% of power, and further development is expected to raise this rate to 90%. It is thus a sustainable innovation that will be popular with many organizations and attract corresponding support; moreover, the prototype has proved its credibility by successfully running the Wikipedia server. The second component is the cloud-oriented CloudSim toolkit, whose development the study has justified by the increasing popularity of cloud-based technologies; the project is therefore likely to attract investment from government and from organizations primarily interested in the development of this technology. This toolkit, too, is already designed.
Overall, this paper has carried out the research it set out to do. It has defined the term load balancing, gathered recent and prospective innovations regarding the design and implementation of load balancers, and justified the choice of its suggestions. It has likewise described the technical details and conducted the related experiment. The results of the experiment have been discussed, and the study has reached the final decision described above.