Three-tier data center network topology

Source publication
Article
Full-text available
A data center is a facility for housing computational and storage systems interconnected through a communication network called a data center network (DCN). Due to tremendous growth in computational power, storage capacity, and the number of interconnected servers, the DCN faces challenges concerning efficiency, reliability, and scalability. Alt...

Similar publications

Preprint
Full-text available
Congestion control plays an essential role on the internet in managing overload, which affects data transmission performance. The random early detection (RED) algorithm belongs to active queue management (AQM), which is used to manage internet traffic. RED is used to eliminate weaknesses in the default control of the Transmission Control Protocol (TCP) dr...
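For context, the RED decision this preprint builds on can be sketched in a few lines. This is a minimal textbook version; the thresholds, maximum drop probability, and EWMA weight below are illustrative defaults rather than values from the preprint, and it omits RED's count-based probability correction.

import random

# RED (random early detection) sketch -- illustrative parameters only.
MIN_TH, MAX_TH = 5, 15   # queue-length thresholds (packets)
MAX_P = 0.1              # maximum early-drop probability
WEIGHT = 0.002           # EWMA weight for the average queue size

avg_q = 0.0

def on_packet_arrival(current_q):
    """Return True if the arriving packet should be dropped early."""
    global avg_q
    # Exponentially weighted moving average of the instantaneous queue size.
    avg_q = (1 - WEIGHT) * avg_q + WEIGHT * current_q
    if avg_q < MIN_TH:
        return False            # no congestion: always enqueue
    if avg_q >= MAX_TH:
        return True             # persistent congestion: drop
    # Between the thresholds, drop probability grows linearly with avg_q.
    return random.random() < MAX_P * (avg_q - MIN_TH) / (MAX_TH - MIN_TH)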
Article
Full-text available
The congestion control mechanism is solely responsible for maintaining the performance of streaming data. However, when there is no congestion, a regular delivery-window update follows as a step-by-step process. This process can be improved by an individual window update, carried along with the acknowledgement (ACK) as feedback to the server, even in the absence of...
Article
Full-text available
Abstract: VoIP is a real-time application whose quality depends heavily on delay and jitter, requirements that are difficult to meet with a reliable, congestion-controlled protocol such as TCP. On the other hand, using UDP, which has no congestion control, makes the chance of congestion in the network very large...
Conference Paper
Full-text available
Day by day, the size of data centers is increasing. Because of this large scale, complexity becomes a major factor in their infrastructure. Therefore, the architecture of large-scale data center networks (DCNs) should be stable and optimized so that it can provide many services to applications. Since large DCNs have many flows...
Article
Full-text available
Network performance diagnostics is an important topic that has been studied since the Internet was invented. However, it remains a challenging task as the network evolves and becomes more and more complicated over time. One of the main challenges is that all network components (e.g., senders, receivers, and relay nodes) make decisions based only...

Citations

... The major advantage of switch-centric networks, with their separation of communication and computation, is that they are based on proven traffic forwarding and routing technologies available in commodity switches (e.g., Ethernet switches), such as IP broadcasting, link-state routing, and equal-cost multi-path forwarding [18]. Although a number of server-centric architectures exploiting low-cost switches have been proposed, switch-centric architectures remain the mainstream scheme for DCNs [19]. For instance, multi-tier tree-like architectures continue to be the most widely deployed, and the fat-tree, leaf-spine, and expander-graph topologies are the most promising architectures in terms of robustness, scalability, and cost. ...
Article
Full-text available
Relying on flexible-access interconnects to scalable storage and compute resources, data centers deliver critical communications connectivity among numerous servers to support the housed applications and services. To provide high-speed and long-distance communications, data centers have turned to fiber interconnections. With sharply increasing traffic volumes, data centers are expected to further deploy optical switches into their system infrastructure to implement full optical switching. This paper first summarizes the topologies and traffic characteristics in data centers and analyzes the reasons for, and the importance of, moving to optical switching. Recent techniques related to optical switching and the main challenges limiting the practical deployment of optical switches in data centers are also summarized and reported.
... Therefore, a communication policy is needed for these microservices to interact with each other, whether through interfaces if they are located in the same PM or through the network if they are distributed across multiple zones. [Figure: To handle traffic, a multi-path topology is proposed, in which the internet is used to connect zones within a single data centre.] According to [22], network traffic in the data centre is classified as: ...
Article
Full-text available
Containers have emerged recently as a cloud technology for improving and managing cloud resources. They improve resource sharing by allowing instances to run on top of the host’s operating system. Container-based virtualization runs and manages hosted instances via the host kernel. Resource sharing can cause resource contention. In addition, dependent jobs, which may be deployed across multiple hosts, require frequent communication, resulting in a high volume of network traffic and network contention. The majority of existing research focuses on load balancing, with no consideration for the fact that network contention also plays a significant role in container performance. In this research, we propose a Dependency-aware Scheduling algorithm (DAScheduler) that deploys jobs into containers while accounting for both load balancing and job dependencies. The experimental results show that DAScheduler reduces network traffic by more than half and balances the loads. In comparison to one of the existing state-of-the-art techniques, DAScheduler improves overall cloud performance.
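The scheduling trade-off this abstract describes, co-locating dependent jobs to cut cross-host network traffic while still balancing load, can be illustrated with a toy greedy placer. This is a sketch under assumed inputs, not DAScheduler's actual algorithm.

# Toy dependency-aware placement: prefer the parent's host to save network
# traffic, fall back to the least-loaded host when it no longer fits.
def schedule(jobs, deps, hosts, capacity):
    """jobs: {job: load}; deps: {job: parent job or None};
       hosts: list of host ids; capacity: max load per host."""
    load = {h: 0.0 for h in hosts}
    placement = {}
    for job, l in jobs.items():
        parent = deps.get(job)
        target = placement.get(parent)
        # Co-locate with the parent if possible, else balance the load.
        if target is None or load[target] + l > capacity:
            target = min(hosts, key=lambda h: load[h])
        placement[job] = target
        load[target] += l
    return placement

print(schedule({"web": 2.0, "db": 3.0, "cache": 1.0},
               {"db": "web", "cache": "web"},
               ["h1", "h2"], capacity=4.0))
# {'web': 'h1', 'db': 'h2', 'cache': 'h1'} -- cache rides with web, db spills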
... Architectures such as fat-tree [4], BCube [5], DCell [6], and leaf-spine [7], which are based on electrical switching with its bandwidth limitations, have difficulty accommodating the rapidly increasing traffic. Furthermore, the multihop transmission structure across multiple switching layers in electrical switching networks also causes high transmission latency [8,9]. ...
Article
Full-text available
The explosive growth of data center (DC) traffic imposes unprecedented challenges on current electrical switch-based data center networks (DCNs), with their bottleneck of limited bandwidth and high latency. Benefitting from transparency to data rate and format, optical switching with theoretically infinite bandwidth could overcome the bandwidth bottleneck of electrical switching DCNs. However, DCNs normally deploy multitenant applications with a variety of DC traffic, and it is hard to reconfigure the optical interconnections in real time to provide adaptable bandwidth to traffic with heterogeneous characteristics. Moreover, to improve bandwidth utilization, statistical multiplexing is generally deployed in optical DCNs to forward the traffic flow in a time slot, which requires the network time to be precisely synchronized at all network nodes. For fast optical switches with a nanosecond switching configuration time, all end-node times must be synchronized at subnanosecond magnitude. In this paper, we propose and experimentally investigate a reconfigurable and picosecond-synchronized optical DCN (ReSAW) based on an arrayed waveguide grating router (AWGR) and the White Rabbit (WR) protocol. A scheduler based on a distributed field-programmable gate array is implemented in the proposed ReSAW to realize flexible wavelength configuration by controlling the fast laser array based on semiconductor optical amplifiers (SOAs) according to time slot and traffic priority. Moreover, the WR protocol is implemented in optical DCNs for what we believe is the first time to synchronize the time of the distributed top of racks (ToRs). The experimental demonstration validates that ReSAW achieves an average end-to-end latency of 317.44 ns and precisely synchronized time with an average skew of 386 ps. When the load is 0.4, the packet loss after ReSAW reconfiguration is less than 1.83 × 10⁻⁶, and the network latency is less than 1.73 µs. Based on the experimental parameters and results, an OMNeT++ simulation model is built to further verify the reconfigurability and scalability of the ReSAW network. Results show that the packet loss rate and latency performance increase by 8.24% and 12.47%, respectively, at a load of 0.6 as the ReSAW network scales from 2560 to 40,960 servers, compared to before the reconfiguration.
... Specifically, for an HOE-DCN that uses the k-ary fat-tree as the EPS part, we divide it into k/2 PoDs, and each PoD includes k racks. Regarding the size of the HOE-DCN, we surveyed the commonly used scales for fat-trees and decided to architect the largest HOE-DCN in our simulations based on the 128-ary fat-tree [50]. ...
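The division quoted above is consistent with standard k-ary fat-tree arithmetic: k/2 groups of k racks each account for all k²/2 edge (ToR) switches. A quick check, assuming the textbook fat-tree formulas:

# Sizing a standard k-ary fat-tree (textbook formulas), used here only to
# check the 128-ary division quoted in the snippet.
def fat_tree(k):
    racks = k * k // 2              # edge (ToR) switches overall
    pod_groups = k // 2             # grouping used in the snippet
    racks_per_group = k
    servers = k ** 3 // 4
    assert pod_groups * racks_per_group == racks
    return racks, pod_groups, servers

print(fat_tree(128))   # (8192, 64, 524288)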
Article
Full-text available
Hybrid optical/electrical datacenter network (HOE-DCN) uses the inter-rack networks that consist of both electrical Ethernet switches and optical cross-connects (OXCs), for better cost-efficiency and scalability. Meanwhile, to provision dynamic network services well, the operator of an HOE-DCN needs to deploy virtual networks (VNTs) and remap them adaptively. Therefore, this work studies the problem of VNT remapping in an HOE-DCN from a novel perspective, i.e., the remapping schemes should be optimized for not only the network status after the remapping but also the transition to realize it. Specifically, we model this problem as a bilevel optimization, where the upper-level optimization aims at selecting proper virtual machines (VMs) to migrate such that the estimated latency of VM migration can be minimized, and the lower-level optimization determines the actual scheme of VNT remapping for minimizing the number of resource hot-spots. We first formulate a bilevel mixed integer linear programming (BMILP) model for the bilevel optimization, and then propose a polynomial time algorithm based on enumeration to solve it approximately. Extensive simulations verify the effectiveness of our proposal.
... The Three-Tier topology [15], [23], [26] is widely used. It has a multi-tiered architecture [4] consisting of one core, one aggregation, and one edge layer, as shown in Fig. 2. Its advantages include simplicity of virtual network management [35], [44], shorter downtime, and the path redundancy that is required for medium to large commercial DCNs. ...
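The path redundancy mentioned in this snippet comes from each edge switch uplinking to a pair of aggregation switches that are in turn fully meshed to the core. A toy model with illustrative sizes (not taken from the cited papers) makes the count concrete:

# Toy three-tier DCN (core / aggregation / edge); sizes are illustrative.
cores = [f"core{i}" for i in range(2)]
aggs  = [f"agg{i}"  for i in range(4)]
edges = [f"edge{i}" for i in range(8)]

# Each edge switch uplinks to a pair of aggregation switches ...
uplinks = {e: [aggs[(i // 4) * 2], aggs[(i // 4) * 2 + 1]]
           for i, e in enumerate(edges)}
# ... and every aggregation switch uplinks to every core switch.
agg_uplinks = {a: list(cores) for a in aggs}

# Core-level (agg, core, agg) paths between two edges in different groups:
paths = [(a1, c, a2)
         for a1 in uplinks["edge0"]
         for c in agg_uplinks[a1]
         for a2 in uplinks["edge7"]
         if c in agg_uplinks[a2]]
print(len(paths))   # 8 redundant paths survive any single device failure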
... end-to-end queuing delay as congestion feedback to proactively reduce the sending rate before packet losses appear. Data Center TCP (DCTCP) [7] leverages explicit congestion notification (ECN) to adjust senders' congestion windows (CWND) and slow the sending rate before the queue becomes full. By counting the number of ECN-marked ACK packets, DCTCP achieves fine-grained congestion control. ...
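The DCTCP adjustment summarized above follows the well-known update from the DCTCP paper: the sender keeps a running estimate α of the fraction of ECN-marked packets and backs its window off in proportion to it. A minimal sketch (variable names are illustrative):

# DCTCP window adjustment: alpha <- (1-g)*alpha + g*F, cwnd <- cwnd*(1-alpha/2)
G = 1 / 16          # EWMA gain g

alpha = 0.0         # running estimate of the fraction of marked packets

def on_window_end(acked, ecn_marked, cwnd):
    """Update alpha from the ECN-marked ACK count, then scale cwnd."""
    global alpha
    frac = ecn_marked / max(acked, 1)      # F: fraction marked this window
    alpha = (1 - G) * alpha + G * frac     # smooth the congestion estimate
    if ecn_marked > 0:
        cwnd = cwnd * (1 - alpha / 2)      # back off in proportion to alpha
    return cwnd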
... In this section, we select multiple metrics to evaluate STCC through simulation experiments and compare it with the solutions in Refs. [6, 7, 17]. To simulate a many-to-one scenario, we choose the fat-tree topology as the experimental topology. ...
Article
Full-text available
Due to their high bandwidth and low latency, datacenter networks ensure tremendous amounts of data can be transmitted efficiently. However, in many-to-one transmission scenarios, high concurrency of TCP flows aggravates network congestion and causes overflows in switches, seriously impairing network performance. To solve this problem, a TCP congestion control mechanism based on software-defined networking (STCC) is proposed. Without any modification to the TCP stack, STCC monitors network performance through the centralized control and global network view of SDN, employs a routing algorithm based on the minimum path bandwidth utilization rate to forward packets, and uses different methods to adjust the congestion windows of senders so that network congestion can be greatly mitigated. An experimental platform is built to carry out simulation tests evaluating STCC, and the results show that, under the same network conditions, STCC effectively reduces the number of retransmission timeouts at senders and noticeably raises network throughput compared with other congestion control algorithms.
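One plausible reading of the "minimum path bandwidth utilization rate" routing in this abstract is bottleneck-aware path selection. The sketch below assumes that interpretation and uses hypothetical data structures; it is not STCC's actual implementation.

# Pick the candidate path whose most-utilized (bottleneck) link is least
# utilized -- illustrative only; link names below are hypothetical.
def pick_path(candidate_paths, link_utilization):
    """candidate_paths: list of lists of link ids;
       link_utilization: dict link id -> utilization in [0, 1]."""
    return min(candidate_paths,
               key=lambda path: max(link_utilization[l] for l in path))

paths = [["s1-a1", "a1-c1", "c1-a2"], ["s1-a1", "a1-c2", "c2-a2"]]
util  = {"s1-a1": 0.4, "a1-c1": 0.9, "c1-a2": 0.2,
         "a1-c2": 0.3, "c2-a2": 0.5}
print(pick_path(paths, util))   # second path wins: bottleneck 0.5 < 0.9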
... Youku). They may deal with a variety of services that demand heavy infrastructure, which requires the service providers to procure, build, and maintain large DC networks [8]. Over the last decade, many new design architectures have been proposed for data center networks to improve the computational and storage power of data centers, such as [9][10][11][12]. ...
Article
Full-text available
Today, data center networks (DCNs) are built using multi-tier architectures. These large-scale networks face many challenges, such as security, delay, low throughput, loops, link oversubscription, TCP Incast and Outcast, etc. In this paper, a TCAM (ternary content-addressable memory) based routing technique is proposed, augmenting the routing capabilities of multi-tier architectures in large-scale networks. The routing complexities in these architectures are rectified and improved by implementing an additional TCAM-based routing table in Leaf/ToR switches for a specific number of compute nodes in particular Pods, and the approach is scalable to all datacenter nodes. To test the model, we implemented two prototypes, one depicting our proposed TCAM-based switch and the other a typical top-of-rack (ToR) switch, and compared the performance of the proposed model and whether it introduces any overhead. The preliminary results show that our TCAM-based routing-table technique is fast, forwards network packets at line rate, does not introduce considerable latency, keeps on-chip resource power consumption below 3%, and helps to solve or mitigate the critical problems present in the current large-DC three-tier architecture, especially in top-of-rack and aggregation-layer switches.
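The line-rate claim rests on how a TCAM matches: every stored entry is a ternary pattern (0, 1, or don't-care) compared against the lookup key in parallel, and the highest-priority hit wins. A software sketch of that matching semantics follows; the entries are illustrative, and a real TCAM performs this in hardware in a single cycle.

# Ternary (TCAM-style) lookup: 'x' means "don't care"; the first matching
# entry in priority order wins.
def tcam_match(key_bits, entries):
    """entries: ordered list of (pattern, action); pattern uses '0','1','x'."""
    for pattern, action in entries:
        if all(p in ("x", k) for p, k in zip(pattern, key_bits)):
            return action
    return "default"

table = [("1010xxxx", "port-3"),    # more specific rule, higher priority
         ("10xxxxxx", "port-1")]
print(tcam_match("10101100", table))   # -> port-3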
... Nevertheless, recent studies have proposed a variety of network topology designs, each approach featuring its own network architecture, fault avoidance and recovery, and routing algorithms. We adopt the DCN architecture classification presented in [3] to categorize DCNs into three main classes: (i) switch-centric architectures, for instance Three-tier [4], Fat-Tree [5], PortLand [6], and F2Tree [7]; (ii) server-centric architectures (also known as recursive topologies [8]), e.g., DCell [9], Ficonn [10], and MCube [11]; and (iii) hybrid/enhanced architectures, e.g., Helios [12]. In practice, four main network topologies are widely used to construct server networks in DCs: two switch-centric topologies (three-tier and fat-tree) and two server-centric topologies (BCube and DCell). ...
Article
Full-text available
Modeling a cloud computing center is crucial to evaluate and predict its inner connectivity reliability and availability. Many previous studies on the system availability/reliability assessment of virtualized systems consisting of singular servers in cloud data centers have been reported. In this paper, we propose a hierarchical modeling framework for the reliability and availability evaluation of tree-based data center networks. The hierarchical model consists of three layers: (i) reliability graphs in the top layer to model the system network topology, (ii) a fault tree to model the architecture of the subsystems, and (iii) stochastic reward nets to capture the behaviors and dependencies of the components in the subsystems in detail. Two representative data center networks, based on three-tier and fat-tree topologies, are modeled and analyzed in a comprehensive manner. We specifically consider a number of case studies to investigate the impact of networking and management on cloud computing centers. Furthermore, we perform various detailed analyses with regard to reliability and availability measures for the system models. The analysis results show that appropriate networking to optimize the distribution of nodes within the data center networks can enhance reliability and availability. The conclusions of this study can be used for the practical management and construction of cloud computing centers.
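The reliability graphs in the top layer of such a framework reduce, at their simplest, to series/parallel composition of component reliabilities. A worked example with illustrative values (not taken from the paper):

# Series/parallel reliability composition, the basic operation behind
# reliability graphs. Component reliabilities below are illustrative.
def series(*r):        # all components must work
    p = 1.0
    for x in r:
        p *= x
    return p

def parallel(*r):      # at least one component must work
    q = 1.0
    for x in r:
        q *= (1 - x)
    return 1 - q

r_core = parallel(0.99, 0.99)             # two redundant core switches
r_path = series(0.995, r_core, 0.995)     # edge -> core tier -> edge
print(round(r_path, 6))                   # 0.989926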
... A data center network topology can be switch-centric, server-centric, or hybrid (dual-centric), each with its specific energy consumption characteristics [4]. However, studies have shown that energy used to process workloads in a switch-centric topology is spent more profitably, as switches by default are equipped with intelligent routing algorithms and are connected to servers through a single port [9], making such networks very responsive. A very responsive variant of the switch-centric DCN architecture would be useful as a potential solution to the increasing demands of cloud computing DCs and would help eradicate the challenges faced by legacy DCN architectures. ...
... However, end-of-row (EOR) aggregation-level switches with idle module racks can be powered down. This layer is utilized as much as the core; hence, packet losses are higher at the aggregation layer than at any other layer [9]. Most DCs run at around 30% of their computational capacity [18]; shutting down inactive aggregation servers, with prior consideration for load fluctuations that could be managed by the less idle servers, has always been an energy-aware decision. ...
... The distribution of energy usage among the 64 servers for the FT is similar to that of the 3T, as shown in Fig. 11b. However, the commodity switches that replace the energy-hungry enterprise switches in the upper layers of the 3T are larger in quantity and are actively involved in the end-to-end aggregation of bandwidth to host servers [9, 10, 26, 27, 39], resulting in increased energy consumption by the network module in the FT (see Fig. 11c, d). ...
Article
Full-text available
The data center network (DCN) is the core of cloud computing and accounts for 40% of the energy spend of the whole data center (DC) facility when compared with the cooling system and power distribution and conversion. It is essential to reduce the energy consumption of the DCN to ensure that an energy-efficient (green) data center can be achieved. An analysis of DC performance and efficiency is presented, emphasizing the effect of bandwidth provisioning and throughput on the energy proportionality of the two most common switch-centric DCN topologies, three-tier (3T) and fat tree (FT), based on the amount of actual energy that is turned into computing power. The energy consumption of switch-centric DCNs is analyzed through realistic simulations using the GreenCloud simulator. Power-related metrics were derived and adapted for the information technology equipment processes within the DCN. These metrics are acknowledged as a subset of the major metrics of power usage effectiveness and data center infrastructure efficiency known to DCs. This study suggests that although the FT consumes more energy overall, it spends less energy on the transmission of a single bit of information, outperforming the 3T.
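The abstract's conclusion, that FT draws more total power yet spends less per transmitted bit, is an energy-per-bit comparison. A worked example with illustrative numbers, not figures from the paper:

# Energy per bit = power / throughput. A topology can draw more total power
# yet be cheaper per bit if its throughput is proportionally higher.
def energy_per_bit(power_w, throughput_gbps):
    return power_w / (throughput_gbps * 1e9)   # joules per bit

j3t = energy_per_bit(power_w=40_000.0, throughput_gbps=800.0)    # 5.0e-08 J
jft = energy_per_bit(power_w=48_000.0, throughput_gbps=1_600.0)  # 3.0e-08 J
print(j3t, jft)   # FT: higher power, lower energy per bit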
... The reason is that the leaf/spine tree topology has no redundant path, while the fat-tree has a redundant data path [37]. Nonetheless, the network and computing performance aspects are similar between the two scenarios. ...
Article
Software-defined networking (SDN) has been widely researched and used to manage large-scale networks such as data center networks (DCNs). Early SDN controllers suffered from low responsiveness, low scalability, and low reliability. To solve these problems, distributed SDN controllers have been proposed; the concept distributes control messages among multiple SDN controllers. However, distributed SDN controllers must assign a master controller for each networking device, and most previous studies did not consider the characteristics of DCNs, so they are not suitable for operation in DCNs. In this paper, we propose HeS-CoP, a heuristic switch-controller placement scheme for distributed SDN controllers in DCNs. Using the control traffic load and CPU load, HeS-CoP decides when the scheme should be performed in DCNs. To show the feasibility of HeS-CoP, we designed and implemented an orchestrator that contains our proposed scheme and then evaluated the scheme. As a result, our proposed scheme distributes the control traffic load well, decreases the average CPU load, and reduces the packet delay.
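The abstract only says that HeS-CoP combines control traffic load and CPU load, so the following is a loose sketch of that idea: a greedy, load-driven assignment of switches to master controllers with assumed weights. It is not the paper's actual heuristic.

# Greedy switch-to-master assignment driven by control traffic and CPU load.
# Weights and inputs are assumptions for illustration only.
def assign_masters(switch_traffic, controllers, cpu_load,
                   w_traffic=0.7, w_cpu=0.3):
    """Give each switch (heaviest control traffic first) to the controller
       with the lowest combined load so far."""
    load = {c: w_cpu * cpu_load[c] for c in controllers}
    master = {}
    for sw, t in sorted(switch_traffic.items(), key=lambda kv: -kv[1]):
        c = min(controllers, key=lambda c: load[c])
        master[sw] = c
        load[c] += w_traffic * t
    return master

print(assign_masters({"tor1": 5.0, "tor2": 3.0, "agg1": 8.0},
                     ["ctrl1", "ctrl2"], {"ctrl1": 0.2, "ctrl2": 0.6}))
# {'agg1': 'ctrl1', 'tor1': 'ctrl2', 'tor2': 'ctrl2'}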