Fig 1 - uploaded by Shilpa Sonawani
Loosely coupled architecture of OpenStack  

Source publication
Conference Paper
OpenStack is a cloud computing platform that provides Infrastructure as a Service (IaaS). It manages compute, storage and network resources. Resource allocation in a cloud environment deals with assigning the available resources in a cost-effective manner. Compute resources are allocated in the form of virtual machines...

Similar publications

Preprint
Maximizing resource utilization by performing an efficient resource provisioning is a key factor for any cloud provider: commercial actors can maximize their revenues, whereas scientific and non-commercial providers can maximize their infrastructure utilization. Traditionally, batch systems have allowed data centers to fill their resources as much...
Conference Paper
In an Infrastructure As A Service (IaaS) cloud, the scheduler deploys VMs to servers according to service level objectives (SLOs). Clients and service providers must both trust the infrastructure. In particular they must be sure that the VM scheduler takes decisions that are consistent with its advertised behaviour. The difficulties to master every...
Article
Executing time critical applications within cloud environments while satisfying execution deadlines and response time requirements is challenging due to the difficulty of securing guaranteed performance from the underlying virtual infrastructure. Cost-effective solutions for hosting such applications in the Cloud require careful selection of cloud...

Citations

... Furthermore, the filtering step currently does not consider any networking related metrics, which might be a shortcoming in a MEC infrastructure. Various extensions to OpenStack were proposed (Scharf et al., 2015;Sahasrabudhe and Sonawani, 2015) for the support of network-aware placement of instances. These solutions take into account bandwidth constraints to and from nodes by keeping track of host-local network resource allocation. ...
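The bandwidth-constrained placement described in this snippet can be illustrated with a short sketch: a host filter that keeps host-local bookkeeping of allocated bandwidth and admits only hosts with enough residual capacity. All names (`Host`, `BandwidthFilter`, `place`) are hypothetical illustrations, not OpenStack's actual classes.

```python
# Sketch of a bandwidth-aware host filter in the spirit of the cited
# extensions. Hypothetical data model, not OpenStack's real filter API.

class Host:
    def __init__(self, name, link_capacity_mbps):
        self.name = name
        self.link_capacity_mbps = link_capacity_mbps
        self.allocated_mbps = 0  # host-local allocation bookkeeping

class BandwidthFilter:
    """Keep only hosts whose residual link bandwidth covers the request."""
    def passes(self, host, requested_mbps):
        return host.link_capacity_mbps - host.allocated_mbps >= requested_mbps

    def filter_hosts(self, hosts, requested_mbps):
        return [h for h in hosts if self.passes(h, requested_mbps)]

def place(hosts, requested_mbps):
    """Pick the feasible host with the most residual bandwidth and record
    the allocation, so that later requests see the updated state."""
    candidates = BandwidthFilter().filter_hosts(hosts, requested_mbps)
    if not candidates:
        return None
    best = max(candidates,
               key=lambda h: h.link_capacity_mbps - h.allocated_mbps)
    best.allocated_mbps += requested_mbps
    return best
```

The key point of the cited approaches is the bookkeeping step: because each placement updates the host-local allocation, subsequent filtering decisions see the bandwidth already committed to earlier instances.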
Article
Nowadays, online applications are moving to the cloud, and for delay-sensitive ones, the cloud is being extended with edge/fog domains. Emerging cloud platforms that tightly integrate compute and network resources enable novel services, such as versatile IoT (Internet of Things), augmented reality or Tactile Internet applications. Virtual infrastructure managers (VIMs), network controllers and upper-level orchestrators are in charge of managing these distributed resources. A key and challenging task of these orchestrators is to find the proper placement for software components of the services. As the basic variant of the related theoretical problem (Virtual Network Embedding) is known to be NP-hard, heuristic solutions and approximations can be addressed. In this paper, we propose two architecture options together with proof-of-concept prototypes and corresponding embedding algorithms, which enable the provisioning of delay-sensitive IoT applications. On the one hand, we extend the VIM itself with network-awareness, typically not available in today's VIMs. On the other hand, we propose a multi-layer orchestration system where an orchestrator is added on top of VIMs and network controllers to integrate different resource domains. We argue that the large-scale performance and feasibility of the proposals can only be evaluated with complete prototypes, including all relevant components. Therefore, we implemented fully-fledged solutions and conducted large-scale experiments to reveal the scalability characteristics of both approaches. We found that our VIM extension can be a valid option for single-provider setups encompassing even 100 edge domains (Points of Presence equipped with multiple servers) and serving a few hundreds of customers. 
In contrast, our multi-layer orchestration system showed better scaling characteristics in a wider range of scenarios, at the cost of a more complex control plane that includes additional entities and novel APIs (Application Programming Interfaces).
... First, the filtering algorithm selects the nodes with sufficient resources according to the user's resource request specification; then, the weighing algorithm is used to pick the optimal node. There have been many studies, such as [32] and [42], on improving the scheduling model; however, Nova by default manages virtual machines instead of containers. Because the two differ in resource isolation, and because Nova-docker lacks support for precisely limiting CPU and disk resources, effective scheduling of containers cannot be achieved. ...
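The two-phase filter-then-weigh scheme described in this snippet can be sketched in a few lines. The data structures here are simplified illustrations and not Nova's real API; the RAM-based weigher mirrors Nova's default behaviour of preferring hosts with more free memory.

```python
# Minimal sketch of two-phase scheduling: filter out hosts that cannot
# satisfy the request, then weigh the survivors and pick the best one.
# Simplified, hypothetical data structures, not Nova's actual classes.

def filter_hosts(hosts, request):
    """Phase 1: keep only hosts with sufficient free resources."""
    return [
        h for h in hosts
        if h["free_vcpus"] >= request["vcpus"]
        and h["free_ram_mb"] >= request["ram_mb"]
    ]

def weigh_hosts(hosts, ram_weight=1.0):
    """Phase 2: rank survivors; more free RAM scores higher."""
    return sorted(hosts,
                  key=lambda h: ram_weight * h["free_ram_mb"],
                  reverse=True)

def schedule(hosts, request):
    candidates = filter_hosts(hosts, request)
    ranked = weigh_hosts(candidates)
    return ranked[0] if ranked else None
```

Extensions such as those cited above typically plug into one of the two phases: either an extra filter (e.g. on network metrics) or an extra weigher with its own weight factor.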
Article
As an emerging technology in cloud computing, Docker is becoming increasingly popular due to its high speed, high efficiency and portability. The integration of Docker with OpenStack has been a hot topic in research and industry, e.g. as an emulation platform for evaluating cyberspace security technologies. This paper introduces a high-performance Docker integration scheme based on OpenStack that implements a container management service called Yun. Yun interacts with OpenStack's services and manages the lifecycle of containers through the Docker Engine to integrate OpenStack and Docker. Yun improves container deployment, throughput and overall system performance by optimizing the message transmission architecture between internal components, the underlying network data transmission architecture between containers, and the scheduling methods. Based on the Docker Engine API, Yun provides users with interfaces for CPU, memory and disk resource limits to satisfy precise resource limits. Regarding scheduling, Yun introduces a new NUMA-aware and resource-utilization-aware scheduling model to improve the performance of containers under resource competition and to balance the load of computing resources. Simultaneously, Yun decouples from OpenStack versions by isolating its own running environment from that of OpenStack, achieving better compatibility. Experiments show that, compared to traditional methods, Yun not only achieves the integration of OpenStack and Docker but also exhibits high performance in terms of deployment efficiency, container throughput and the container's system, while also achieving load balancing.
... Litvinski [26] explored the OpenStack compute scheduler using the principles of design of experiments. Sahasrabudhe [27] analyzed the filter scheduler and the metrics-weight scheduler in OpenStack and evaluated their performance. ...
Article
Virtual machine placement has great potential to significantly improve the efficiency of resource utilization in a cloud center. Focusing on CPU and memory resources, this paper presents SOWO, a discrete particle swarm optimization-based workload optimization approach that minimizes the number of active physical machines in virtual machine placement. The experimental results show the usability and superiority of SOWO: compared with the OpenStack native scheduler, SOWO decreases physical machine consumption by at least 50% and more than doubles the memory utilization of physical machines.
... In [11] an extension is presented that enables a network-aware placement of instances by taking into account bandwidth constraints to and from nodes by keeping track of host-local network resource allocation. In [12] a new filtering step is proposed, that takes into account the actual load (CPU, network I/O, RAM) of the physical node. Authors of [13] discuss the extensions required to introduce a network-aware scheduler: the solution aims to optimize the VM placement from a networking perspective, which is essential for the efficient deployment of VNF service graphs. ...
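The load-based filtering step attributed to [12] above can be illustrated as follows. The thresholds and field names are hypothetical; the point is that the filter rejects hosts based on *measured* utilization rather than nominal capacity alone.

```python
# Hypothetical sketch of a load-aware filter in the spirit of [12]:
# reject hosts whose measured CPU, RAM or network-I/O load exceeds a
# per-metric threshold. Threshold values are illustrative assumptions.

LOAD_THRESHOLDS = {"cpu": 0.80, "ram": 0.90, "net_io": 0.70}

def is_overloaded(host_load):
    """host_load: dict of measured utilization ratios in [0, 1]."""
    return any(host_load[metric] > limit
               for metric, limit in LOAD_THRESHOLDS.items())

def filter_by_load(hosts):
    """Keep only hosts whose live metrics stay under every threshold."""
    return [name for name, load in hosts.items()
            if not is_overloaded(load)]
```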
Conference Paper
We see two important trends in ICT nowadays: the backends of online applications and services are moving to the cloud, and for delay-sensitive ones the cloud is being extended with fogs. The reason for these phenomena is mostly economic, but there are other benefits too: fast service creation, flexible reconfigurability, and portability. The management and orchestration of these services is currently separated into at least two layers: virtual infrastructure managers (VIMs) and network controllers operate their own domains, consisting of compute or network resources, while services with cross-domain deployment are handled by an upper-level orchestrator. In this paper we show a slight modification of OpenStack, the mainstream VIM today, which enables it to manage a distributed cloud-fog infrastructure. While our solution alleviates the need for running OpenStack controllers in the lightweight edge, it takes into account network aspects that are extremely important in a resource setup with remote fogs. We propose and analyze an online resource orchestration algorithm, describe the OpenStack-based implementation aspects, and show large-scale simulation results on the performance of our algorithm.
Chapter
Docker is a mature containerization technique used to perform operating-system-level virtualization. One open issue in the cloud environment is how to properly choose a virtual machine (VM) on which to initialize a container instance, a problem similar to the conventional one of placing VMs onto physical machines (PMs). Current studies mainly focus on container placement and VM placement independently, but rarely consider the systematic collaboration of the two placements. We view this as a main reason for the scattered distribution of containers in a data center, which ultimately results in worse physical resource utilization. In this paper, we propose a "Container-VM-PM" architecture and a novel container placement strategy that simultaneously takes into account the three involved entities. Furthermore, we model a fitness function for the selection of the VM and PM. Simulation experiments show that our method is superior to the existing strategy with regard to physical resource utilization.
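The abstract does not reproduce the paper's fitness function, but the underlying idea of scoring (VM, PM) pairs jointly so that containers consolidate onto already-loaded machines can be sketched as follows. All structures and the scoring formula here are hypothetical illustrations, not the paper's actual model.

```python
# Hypothetical sketch of a joint "Container-VM-PM" fitness score:
# prefer feasible (VM, PM) pairs that are already well utilized, so new
# containers consolidate rather than scatter across the data center.
# Illustration only; the paper's actual fitness function differs.

def fitness(vm, pm, container):
    """Higher is better; -1.0 marks an infeasible pair."""
    if vm["free_cpu"] < container["cpu"] or vm["free_mem"] < container["mem"]:
        return -1.0
    vm_util = 1.0 - vm["free_cpu"] / vm["cpu"]   # fuller VMs score higher
    pm_util = 1.0 - pm["free_cpu"] / pm["cpu"]   # fuller PMs score higher
    return vm_util + pm_util

def choose_placement(pairs, container):
    """pairs: list of (vm, pm) tuples, each VM hosted on that PM.
    Returns the (vm, pm) pair with the best fitness, or None."""
    scored = [(fitness(vm, pm, container), vm, pm) for vm, pm in pairs]
    feasible = [t for t in scored if t[0] >= 0]
    if not feasible:
        return None
    best = max(feasible, key=lambda t: t[0])
    return best[1], best[2]
```

Scoring the VM and PM together is what distinguishes this from independent container placement and VM placement: a lightly loaded VM on a heavily loaded PM can still win over a busier VM on an idle PM, or vice versa, depending on the combined score.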