Conference Paper

How to orchestrate a distributed OpenStack


Abstract

We see two important trends in ICT today: the backends of online applications and services are moving to the cloud, and for delay-sensitive ones the cloud is being extended with fogs. The reasons for these phenomena are chiefly economic, but there are other benefits too: fast service creation, flexible reconfigurability, and portability. The management and orchestration of these services are currently separated into at least two layers: virtual infrastructure managers (VIMs) and network controllers operate their own domains, each consisting of compute or network resources, while services with cross-domain deployments are handled by an upper-level orchestrator. In this paper we present a slight modification of OpenStack, the mainstream VIM today, which enables it to manage a distributed cloud-fog infrastructure. Our solution alleviates the need for running OpenStack controllers at the lightweight edge, while taking into account network aspects that are extremely important in a resource setup with remote fogs. We propose and analyze an online resource orchestration algorithm, describe the OpenStack-based implementation aspects, and present large-scale simulation results on the performance of our algorithm.


... One can find papers in which one of these two goals is set out, e.g., energy in [84], [85], [102], processing time in [94], [104], [115], [148], [152], [157], [173], and there are also related works in which the goals are targeted jointly, e.g., in [81], [103], [112], [113], [117], [183]. While the former goal aims at preserving the limited battery capacity of terminals, e.g., IoT sensors and mobile phones, the latter strives to reach a desired QoS level in terms of service response latency by leveraging the compute capabilities of edge nodes in the proximity of the terminals. A few papers formulate general optimization goals as well, e.g., cost in [101], [148], and revenue in [113], [158], where the authors propose to prioritize users with maximum utility to maximize the provider's revenue. ...
... Following our Platform components dimension, we distinguish the cloud-edge and the multi-edge scenarios depending on the placement options. The authors of [58], [61], [62], [65]- [67], [73], [76], [125], [126], [128], [162]- [164], [167], [175], [179] consider the cloud-edge scenario where the service components can be run either in the available edge domains or in the central cloud. Other research papers [83], [88], [89], [91], [92], [96], [97], [134], [138] investigate the multi-edge option where the central cloud cannot be used as a runtime environment. ...
... These extensions typically enable taking network-related aspects into consideration, which is crucial in edge/fog computing systems. For example, the authors of [162]- [164], [167] (first architecture option) extend the widely used open source cloud management system, namely OpenStack, with network-awareness. More specifically, a novel online service placement solution is proposed that merges all the necessary functionalities for a geographically distributed cloud-edge computing system under one common OpenStack domain. ...
Article
Full-text available
Edge computing is a (r)evolutionary extension of traditional cloud computing. It expands central cloud infrastructure with execution environments close to the users in terms of latency in order to enable a new generation of cloud applications. This paradigm shift has opened the door for telecommunications operators, mobile and fixed network vendors: they have joined the cloud ecosystem as essential stakeholders considerably influencing the future success of the technology. A key problem in edge computing is the optimal placement of computational units (virtual machines, containers, tasks or functions) of novel distributed applications. These components are deployed to a geographically distributed virtualized infrastructure and heterogeneous networking technologies are invoked to connect them while respecting quality requirements. The optimal hosting environment should be selected based on multiple criteria by novel scheduler algorithms which can cope with the new challenges of distributed cloud architecture where networking aspects cannot be ignored. The research community has dedicated significant efforts to this topic during recent years and a vast number of theoretical results have been published addressing different variants of the related mathematical problems. However, a comprehensive survey focusing on the technical and analytical aspects of the placement problem in various edge architectures is still missing. This survey provides a comprehensive summary and a structured taxonomy of the vast research on placement of computational entities in emerging edge infrastructures. Following the given taxonomy, the research papers are analyzed and categorized according to several dimensions, such as the capabilities of the underlying platforms, the structure of the supported services, the problem formulation, the applied mathematical methods, the objectives and constraints incorporated in the optimization problems, and the complexity of the proposed methods. We summarize the gained insights and important lessons learned, and finally, we reveal some important research gaps in the current literature.
... It provides a general extension to traditional VIMs by adding "network-awareness" to the resource orchestration process. The basic version of the algorithm was described in Haja et al. (2018). Here, we introduce the resource model including network topologies and the service model, and summarize the main steps of our heuristics. ...
... This iteration number can be controlled by defining the max_try and max_vnf environment values. The computational complexity of the algorithm is polynomial (details in Haja et al. (2018)). ...
... Authors of Lucrezia et al. (2015) introduced a network-aware scheduler that aimed at optimizing the VM placement from a networking perspective: they used OpenDayLight to collect network topology information and to configure traffic steering with the goal of minimizing the bandwidth utilization of physical links. Haja et al. (2018) proposed a solution that alleviated the need for running OpenStack controllers in the lightweight edge, plus it took into account network aspects that are extremely important in a resource setup with remote fogs. In contrast to these solutions, DARK can take both the delay and bandwidth characteristics into consideration and in addition, it is able to migrate VNFs to achieve better utilization, which is typically not supported by available systems. ...
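The bounded search described above (capping placement attempts via max_try and limiting how many already-placed components may be reconsidered via max_vnf) can be sketched as follows. The function names, data structures and exact retry policy are illustrative assumptions, not the authors' actual code:

```python
def try_place(vnf, hosts, fits, placement, max_try):
    """Probe at most max_try candidate hosts for one VNF (illustrative)."""
    for host in hosts[:max_try]:
        if fits(vnf, host, placement):
            placement[vnf] = host
            return True
    return False

def place_service(vnfs, hosts, fits, max_try=5, max_vnf=3):
    """Greedy placement with bounded backtracking: when a VNF cannot be
    placed directly, at most max_vnf earlier decisions are torn up and
    retried once before the whole request is rejected."""
    placement = {}
    for vnf in vnfs:
        if try_place(vnf, hosts, fits, placement, max_try):
            continue
        victims = list(placement)[:max_vnf]   # bounded re-placement set
        for v in victims:
            del placement[v]
        for v in victims + [vnf]:
            if not try_place(v, hosts, fits, placement, max_try):
                return None                   # reject the service request
    return placement
```

Because every VNF triggers at most max_try probes plus one bounded re-placement round, the run time stays polynomial in the number of VNFs and hosts, consistent with the complexity claim quoted above.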
Article
Full-text available
Nowadays, online applications are moving to the cloud, and for delay-sensitive ones, the cloud is being extended with edge/fog domains. Emerging cloud platforms that tightly integrate compute and network resources enable novel services, such as versatile IoT (Internet of Things), augmented reality or Tactile Internet applications. Virtual infrastructure managers (VIMs), network controllers and upper-level orchestrators are in charge of managing these distributed resources. A key and challenging task of these orchestrators is to find the proper placement for software components of the services. As the basic variant of the related theoretical problem (Virtual Network Embedding) is known to be NP-hard, heuristic solutions and approximations can be addressed. In this paper, we propose two architecture options together with proof-of-concept prototypes and corresponding embedding algorithms, which enable the provisioning of delay-sensitive IoT applications. On the one hand, we extend the VIM itself with network-awareness, typically not available in today's VIMs. On the other hand, we propose a multi-layer orchestration system where an orchestrator is added on top of VIMs and network controllers to integrate different resource domains. We argue that the large-scale performance and feasibility of the proposals can only be evaluated with complete prototypes, including all relevant components. Therefore, we implemented fully-fledged solutions and conducted large-scale experiments to reveal the scalability characteristics of both approaches. We found that our VIM extension can be a valid option for single-provider setups encompassing even 100 edge domains (Points of Presence equipped with multiple servers) and serving a few hundred customers, whereas our multi-layer orchestration system showed better scaling characteristics in a wider range of scenarios, at the cost of a more complex control plane including additional entities and novel APIs (Application Programming Interfaces).
... An online resource orchestration algorithm which takes network aspects into account is proposed in [7]. The algorithm enables the orchestrator of OpenStack to manage a distributed cloud-fog infrastructure. ...
... A key difference between previous works and our solution is that none of [3], [7], [20]-[22] deal with reliability, while our proposed solution achieves high reliability while minimizing the resources provisioned for this cause. ...
Article
Full-text available
Novel applications will require extending traditional cloud computing infrastructure with compute resources deployed close to the end user. Edge and fog computing tightly integrated with carrier networks can fulfill this demand. The emphasis is on integration: the rigorous delay constraints, ensuring reliability on the distributed, remote compute nodes, and the sheer scale of the system altogether call for a powerful resource provisioning platform that offers the applications the best of the underlying infrastructure. We therefore propose Kubernetes-edge-scheduler that provides high reliability for applications in the edge, while provisioning less than 10% of resources for this purpose, and at the same time, it guarantees compliance with the latency requirements that end users expect. We present a novel topology clustering method that considers application latency requirements, and enables scheduling applications even on a worldwide scale of edge clusters. We demonstrate that in a potential use case, a distributed stream analytics application, our orchestration system can reduce the job completion time to 40% of the baseline provided by the default Kubernetes scheduler.
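A minimal sketch of the latency-threshold idea behind the topology clustering method mentioned above: edge clusters are grouped so that any two members of a group satisfy the latency bound. The single-pass grouping rule and the RTT matrix are assumptions for illustration, not the actual Kubernetes-edge-scheduler algorithm:

```python
def cluster_by_latency(nodes, latency, threshold_ms):
    """Greedily group edge clusters so that any two members of a group
    are within threshold_ms of each other (complete-linkage style)."""
    groups = []
    for n in nodes:
        for g in groups:
            if all(latency[n][m] <= threshold_ms for m in g):
                g.append(n)        # n is close enough to every member
                break
        else:
            groups.append([n])     # start a new group for n
    return groups
```

A scheduler can then restrict candidate placements for a latency-sensitive application to a single group, instead of scoring every cluster worldwide.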
... However, this myopic approach may fail to provide a feasible system-level solution when multiple RT users compete for the scarce resources on the edge. Several recent works [39], [40] refer to "distributed placement" but in practice focus on clustering, namely, improving the scalability by partitioning resources into small sets of datacenters on which each concrete service can be deployed. The work [31] uses dynamic clustering of datacenters to handle multiple simultaneous independent placement requests. ...
Preprint
Full-text available
In an edge-cloud multi-tier network, datacenters provide services to mobile users, with each service having specific latency constraints and computational requirements. Deploying such a variety of services while matching their requirements with the available computing resources is challenging. In addition, time-critical services may have to be migrated as the users move, to keep fulfilling their latency constraints. Unlike previous work relying on an orchestrator with an always-updated global view of the available resources and the users' locations, this work envisions a distributed solution to the above problems. In particular, we propose a distributed asynchronous framework for service deployment in the edge-cloud that increases the system resilience by avoiding a single point of failure, as in the case of a central orchestrator. Our solution ensures cost-efficient feasible placement of services, while using negligible bandwidth. Our results, obtained through trace-driven, large-scale simulations, show that the proposed solution provides performance very close to those obtained by state-of-the-art centralized solutions, and at the cost of a small communication overhead.
... There are many published papers and documents around the topic of OpenStack and the various methods to deploy it. In addition to the papers discussed above, a couple more examples include the work completed in [37] regarding deploying OpenStack on a university campus and orchestrating a distributed OpenStack in [42]. In [37], Sheela and Choudhary explore the idea of providing a test bed for students to deploy applications. ...
Article
Full-text available
Purpose: Major public cloud providers, such as AWS, Azure or Google, offer seamless experiences for infrastructure as a service (IaaS), platform as a service (PaaS) and software as a service (SaaS). With the emergence of the public cloud's vast usage, administrators must be able to have a reliable method to provide the seamless experience that a public cloud offers on a smaller scale, such as a private cloud. When a smaller deployment or a private cloud is needed, OpenStack can meet the goals without increasing cost or sacrificing data control.
Design/methodology/approach: To demonstrate these enablement goals of resiliency and elasticity in IaaS and PaaS, the authors design a private distributed system cloud platform using OpenStack and its core services of Nova, Swift, Cinder, Neutron, Keystone, Horizon and Glance on a five-node deployment.
Findings: Through the demonstration of dynamically adding an IaaS node, pushing the deployment to its physical and logical limits, and eventually crashing the deployment, this paper shows how the PackStack utility facilitates the provisioning of an elastic and resilient OpenStack-based IaaS platform that can be used in production if the deployment is kept within designated boundaries.
Originality/value: The authors adopt the multinode-capable PackStack utility in favor of an all-in-one OpenStack build for a true demonstration of resiliency, elasticity and scalability in a small-scale IaaS. An all-in-one deployment is generally used for proof-of-concept deployments and is not easily scaled in production across multiple nodes. The authors demonstrate that combining PackStack with the multi-node design is suitable for smaller-scale production IaaS and PaaS deployments.
... To manage vulnerable servers and to generate legitimate or malicious traffic, virtual machines that are orchestrated using OpenStack [29] were used. OpenStack is an open-source set of tools that can be used to manage a cloud environment. ...
Article
Full-text available
Cybersecurity is an arms race, with both the security community and the adversaries attempting to outsmart one another, coming up with new attacks, new ways to defend against those attacks, and again with new ways to circumvent those defences. This situation creates a constant need for novel, realistic cybersecurity datasets. This paper examines the effects of using machine-learning-based intrusion detection methods on network traffic coming from a real-life architecture. The main contribution of this work is a dataset coming from a real-world, academic network. Real-life traffic was collected and, after performing a series of attacks, a dataset was assembled. The dataset contains 44 network features and an unbalanced distribution of classes. In this work, the capability of the dataset for formulating machine-learning-based models was experimentally evaluated. To investigate the stability of the obtained models, cross-validation was performed, and an array of detection metrics were reported. The gathered dataset is part of an effort to bring security against novel cyberthreats and was completed in the SIMARGL project.
... Peterson et al. [52] see edge and the democratization it offers as a cure for Internet ossification. Some argue for widespread in-network computation [57], blurring the borders of cloud and edge [32,75]. Our focus, in this paper, is on a general-purpose edge deployed by telcos/ISPs for a wide range of applications [47]. ...
Conference Paper
Full-text available
Edge computing has gained attention from both academia and industry by pursuing two significant challenges: 1) moving latency critical services closer to the users, 2) saving network bandwidth by aggregating large flows before sending them to the cloud. While the rationale appeared sound at its inception almost a decade ago, several current trends are impacting it. Clouds have spread geographically reducing end-user latency, mobile phones’ computing capabilities are improving, and network bandwidth at the core keeps increasing. In this paper, we scrutinize edge computing, examining its outlook and future in the context of these trends. We perform extensive client-to-cloud measurements using RIPE Atlas, and show that latency reduction as motivation for edge is not as persuasive as once believed; for most applications, the cloud is already “close enough” for the majority of the world’s population. This implies that edge computing may only be applicable for certain application niches, as opposed to being a general-purpose solution.
... In [11], the authors extended OpenStack to enable network-aware placement of virtual network functions (VNFs) in a multi-tier cloud deployment. Using VMTP [12], an open-source tool developed by Cisco for OpenStack, they collect information regarding the available network resources (bandwidth and latency). ...
Conference Paper
Full-text available
Many modern-day cloud services are composites of multiple smaller services working correctly together. This design has become increasingly prevalent due to the rise of the microservices application architecture, as well as service chaining in Network Function Virtualization (NFV). Future composite applications and services will be deployed on multi-tier clouds where their constituent microservices may be geographically spread over different regions. To optimize the delivery of such composites, the constituent microservices must be placed in locations where their clients, which may be other microservices, are able to meet certain QoS constraints. We propose an architecture and present a prototype system for incorporating network metrics into the auto-scaling and scheduling decisions of cloud management systems. Given a service with QoS constraints, our system monitors the network metrics (e.g. latency and bandwidth) of their clients. If a particular client is unable to receive the required latency or bandwidth of the service, our system auto-scales the service and strategically places the new instance(s) in a location capable of meeting the service quality, and re-directs traffic to the new instance.
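The monitor-then-scale loop described in this abstract can be illustrated with a small decision function: when a client's measured RTT to the current instance violates its QoS bound, pick the region that best serves that client. The client/region RTT matrix and the selection rule are assumptions for illustration, not the prototype's real API:

```python
def autoscale_decision(clients_rtt, current_region, regions, max_rtt_ms):
    """clients_rtt[client][region] holds measured RTT in ms.
    Return (client, region): the first client whose RTT to the current
    instance violates the bound, paired with the best region for a new
    instance. Return None when every client already meets the QoS target."""
    for client, rtts in clients_rtt.items():
        if rtts[current_region] > max_rtt_ms:
            best = min(regions, key=lambda r: rtts[r])
            if rtts[best] <= max_rtt_ms:
                return client, best   # scale out here, redirect this client
    return None
```

In a real deployment the RTT matrix would be fed by continuous measurements rather than a static table, and bandwidth would be checked the same way as latency.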
... On the other hand, the VIM itself can be extended with network awareness and with the detailed view on network resources. With such an upgrade, the additional NFVO becomes unnecessary for single-provider setups where resources belong to the same operator and by these means, the orchestration and deployment time can be reduced significantly [18]. ...
Article
Full-text available
Industrial IoT has special communication requirements, including high reliability, low latency, flexibility, and security. These are instinctively provided by the 5G mobile technology, making it a successful candidate for supporting Industrial IoT (IIoT) scenarios. The aim of this paper is to identify current research challenges and solutions in relation to 5G-enabled Industrial IoT, based on the initial requirements and promises of both domains. The methodology of the paper follows the steps of surveying state-of-the art, comparing results to identify further challenges, and drawing conclusions as lessons learned for each research domain. These areas include IIoT applications and their requirements; mobile edge cloud; back-end performance tuning; network function virtualization; and security, blockchains for IIoT, Artificial Intelligence support for 5G, and private campus networks. Beside surveying the current challenges and solutions, the paper aims to provide meaningful comparisons for each of these areas (in relation to 5G-enabled IIoT) to draw conclusions on current research gaps.
... One is an effort by big cloud providers to establish computational facilities near clients/at the "edge", e.g., Amazon CloudFront [31] or Microsoft Azure Stack [53]. The alternative is building an Open Infrastructure for Edge (OIE) as suggested by [37,42,46,58,70]. Such an infrastructure will comprise common practices, technologies, and a set of open standards that will enable any interested parties to offer their computational capacity for purposes of EC. ...
Conference Paper
Full-text available
High demand for low latency services and local data processing has given rise to edge computing. As opposed to cloud computing, in this new paradigm computational facilities are located close to the end-users and data producers, on the edge of the network, hence the name. The critical issue for the proliferation of edge computing is the availability of local computational resources. Major cloud providers are already addressing the problem by establishing facilities in the proximity of end-users. However, there is an alternative trend, namely, developing open infrastructure as a set of standards, technologies, and practices to enable any motivated parties to offer their computational capacity for the needs of edge computing. Open infrastructure can give an additional boost to this new promising paradigm and, moreover, help to avoid problems for which cloud computing has long been criticized, such as vendor lock-in or privacy. In this paper, we discuss the challenges related to creating such an open infrastructure, in particular focusing on the applicability of distributed ledgers for contractual agreement and payment. Solving the challenge of contracting is central to realizing an open infrastructure for edge computing, and in this paper, we highlight the potential and shortcomings of distributed ledger technologies in the context of our use case.
... In [15] and [20], the authors emphasize the importance of building an open infrastructure for EC, concentrating on the OpenStack platform as a resource manager. In ExEC, IEPs can use OpenStack internally for the management of edge servers. ...
Conference Paper
Full-text available
Edge computing (EC) extends the centralized cloud computing paradigm by bringing computation into close proximity to the end-users, to the edge of the network, and is a key enabler for applications requiring low latency such as augmented reality or content delivery. To make EC pervasive, the following challenges must be tackled: how to satisfy the growing demand for edge computing facilities, how to discover the nearby edge servers, and how to securely access them? In this paper, we present ExEC, an open framework where edge providers can offer their capacity and be discovered by application providers and end-users. ExEC aims at the unification of interaction between edge and cloud providers so that cloud providers can utilize services of third-party edge providers, and any willing entity can easily become an edge provider. In ExEC, the unfolding of an initially cloud-deployed application towards the edge happens without administrative intervention, since ExEC discovers available edge providers on the fly and monitors incoming end-user traffic, determining the near-optimal placement of edge services. ExEC is a set of loosely coupled components and common practices, allowing for custom implementations needed to embrace the diverse needs of specific EC scenarios. ExEC leverages only existing protocols and requires no modifications to the deployed infrastructure. Using real-world topology data and experiments on cloud platforms, we demonstrate the feasibility of ExEC and present results on its expected performance.
Article
Mobile-edge computing provisions computing and storage resources by deploying edge servers (ESs) at the edge of the network to support ultralow delay and high bandwidth services. To ensure QoS of latency-sensitive services in vehicular networks, service migration is required to migrate data of the ongoing services to the closest ES seamlessly when users move across different ESs. To achieve seamless service migration, path selection is proposed to obtain one or more paths (consisting of several switches and ESs) to transfer service data. We focus on the following problems about path selection: 1) where to implement path selection? 2) how to coordinate interests of mobile users (i.e., vehicles) and network providers since they have conflicting interests during path selection? and 3) how to ensure seamless service migration during the migration of vehicles? To address the above problems, this article investigates path selection for seamless service migration. We propose a path-selection algorithm to jointly optimize both interests of the network plane (i.e., the cost for network providers) and service plane (i.e., QoE of users). We first formulate it as a multiobjective optimization problem and further prove theoretically that the proposed algorithm can give a weakly Pareto-optimal solution. Moreover, to improve the scalability of the proposed algorithm, a distance-based filter strategy is designed to eliminate undesired switches in advance. We conduct experiments on two synthesized data sets and the results validate the effectiveness of the proposed algorithm.
Conference Paper
The amount of data collected in various IT systems has grown exponentially in recent years, raising the challenge of how to process these huge datasets while meeting the strict time criteria and effective resource consumption usually demanded by service consumers. The appearance of edge computing does not by itself resolve this problem, as wide-area networking and all its well-known issues come into play and affect the performance of applications scheduled on a hybrid edge-cloud infrastructure. In this paper, we present the steps we have made towards network-aware big data task scheduling over such distributed systems. We propose different resource orchestration algorithms for two potential challenges we identify related to the network resources of a geographically distributed topology: decreasing end-to-end latency and effectively allocating network bandwidth. The heuristic algorithms we propose provide better big data application performance compared to the default methods. We implement our solutions in our simulation environment and show the improved quality of big data applications.
Conference Paper
Edge and fog computing are emerging concepts extending traditional cloud computing by deploying compute resources closer to the users. This approach, closely integrated with carrier-networks, enables several future services, such as tactile internet, 5G and beyond telco services, and extended reality applications. The emphasis is on integration: the rigorous delay constraints, ensuring reliability on the distributed remote nodes, and the sheer scale altogether call for a powerful provisioning platform that offers the applications the best out of the underlying infrastructure. In this paper we investigate the resource provisioning problem in the edge infrastructure with the consideration of probable failures. Our goal is to support high reliability of services with the minimum amount of edge resources reserved to provide the necessary redundancy in the system. We design a resource provisioning algorithm, which takes into account network latency when pinpointing backup placeholders for virtual functions of edge applications. We implement the proposed solution in a simulation environment and show the efficient resource utilization results achieved by our fast heuristic algorithm.
Conference Paper
5G networks are expected to enable revolutionary services to be established, such as tactile Internet and online augmented reality applications. These services require a dynamically programmable back-haul network topology in order to serve large amounts of network traffic and to guarantee near real-time response times. For high flexibility of service capabilities, service functions will be deployed in virtualized environments instead of the currently used special-purpose hardware. The most widely used Virtual Infrastructure Manager (VIM) is OpenStack, which is responsible for managing compute, storage and virtual network resources. As the current scheduler of OpenStack does not take into account the underlying physical network characteristics, deploying network services (NS) in a geographically distributed infrastructure requires multiple VIMs, and an orchestrator on top of them, for resource management. In contrast to today's setups we show a novel solution that merges these functionalities under one common OpenStack domain: our solution is capable of i) measuring the bandwidth and delay characteristics of the underlying physical network among compute nodes, ii) creating a topology model that contains both compute- and network-related features, iii) mapping the incoming service requests, and re-mapping already deployed services, to the underlying resources with our novel orchestration algorithm, and iv) deploying and migrating services via OpenStack API calls.
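Step iii) above, mapping a service request onto the combined compute and network model, can be sketched for the simplest case of two connected VNFs: filter host pairs by free CPU, delay and bandwidth, then pick the feasible pair with the most spare bandwidth. The data structures and the tie-breaking rule are illustrative assumptions, not the paper's actual algorithm:

```python
def map_pair(req, hosts, link):
    """Map two connected VNFs onto a host pair.
    req: dict with cpu_a, cpu_b, max_delay_ms, min_bw_mbps.
    hosts: {name: free_cpu}; link[(u, v)]: (delay_ms, free_bw_mbps)."""
    best = None
    for u in hosts:
        for v in hosts:
            if u == v:
                continue
            if hosts[u] < req["cpu_a"] or hosts[v] < req["cpu_b"]:
                continue              # not enough free CPU on this pair
            delay, bw = link[(u, v)]
            if delay > req["max_delay_ms"] or bw < req["min_bw_mbps"]:
                continue              # network constraints violated
            if best is None or bw > best[2]:
                best = (u, v, bw)     # keep the pair with most spare bandwidth
    return None if best is None else (best[0], best[1])
```

The delay and bandwidth values would come from step i), the measurements among compute nodes; re-mapping a running service amounts to re-running the same search with the service's current resources released.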
Conference Paper
Full-text available
The sharing economy has made great inroads with services like Uber or Airbnb enabling people to share their unused resources with those needing them. The computing world, however, despite its abundance of excess computational resources, has remained largely unaffected by this trend, save for a few examples like SETI@home. We present DeCloud, a decentralized market framework bringing the sharing economy to on-demand computing, where the offering of pay-as-you-go services will not be limited to large companies, but ad hoc clouds can be spontaneously formed on the edge of the network. We design an incentive-compatible double auction mechanism targeted specifically at the distributed ledger trust model instead of relying on a third-party auctioneer. DeCloud incorporates an innovative matching heuristic capable of coping with the level of heterogeneity inherent in large-scale open systems. Evaluating DeCloud on Google cluster-usage data, we demonstrate that the system has near-optimal performance from an economic point of view, additionally enhanced by the flexibility of matching.
Conference Paper
Full-text available
The goal of the 5G Exchange project is to enable cross-domain orchestration of services over multiple administrations. The system we build allows the end-to-end integration of heterogeneous resource and service elements of a multi-vendor technology environment from multiple operators by sharing their network and compute infrastructures via NFV orchestration. We will run an industry control 5G use-case, where one of the VNFs is offered by a 3rd party solution provider as a VNFaaS. We will show i) full automation for end-to-end network service orchestration over multi-provider NFV and VNFaaS offerings with latency and high availability constraints; ii) actor-role models and business interactions and iii) how - with a feedback loop to lifecycle management - the system can adapt to changes.
Article
Full-text available
Cloud computing infrastructure is a complex system, as a huge number of resources of various types must be shared. Managing and allocating resources in the cloud is a major issue in cloud computing, and as the demand for resources grows, so does the number of open issues. Addressing these resource management and allocation issues is a significant challenge in today's cloud computing environment. Earlier research has discussed the issues and challenges of resource provisioning, job scheduling, load balancing, scalability, pricing, and energy efficiency. A solution is needed that provides resource management and resource allocation with better performance. In this paper, a user-friendly framework is proposed that enhances scalability, resource management, and resource allocation. To this end, the framework provides request-aware resource allocation and ranking-based resource arrangement with scheduling.
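The abstract does not detail the ranking scheme, but a ranking-based resource arrangement could, in its simplest form, score each host by a weighted sum of its free resources and serve requests from the best-ranked host first. Everything below (hosts, fields, weights) is an assumption for illustration only.

```python
# Assumed host inventory and scoring weights; not from the paper.
hosts = {
    "h1": {"free_cpu": 8, "free_ram_gb": 16},
    "h2": {"free_cpu": 2, "free_ram_gb": 64},
}
WEIGHTS = {"free_cpu": 1.0, "free_ram_gb": 0.25}

def rank(hosts):
    """Order hosts by weighted free capacity, best first."""
    score = lambda h: sum(WEIGHTS[k] * v for k, v in hosts[h].items())
    return sorted(hosts, key=score, reverse=True)

print(rank(hosts))  # -> ['h2', 'h1']  (h2 scores 18.0, h1 scores 12.0)
```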
Conference Paper
Full-text available
This paper motivates and describes the introduction of network-aware scheduling capabilities in OpenStack, the open-source reference framework for creating public and private clouds. This feature is key to properly supporting the Network Function Virtualization paradigm, particularly when the physical infrastructure features servers distributed across a geographical region. The paper also describes the modifications required to the compute and network components, Nova and Neutron, and the integration into the cloud infrastructure of a network controller, which is in charge of feeding the network-aware scheduler with the actual network topology.
Conference Paper
Full-text available
OpenStack is a cloud computing platform that provides Infrastructure as a Service (IaaS). OpenStack manages resources such as compute, storage and network resources. Resource allocation in a cloud environment deals with assigning the available resources in a cost-effective manner. Compute resources are allocated in the form of virtual machines (aka instances), storage resources in the form of virtual disks (aka volumes), and network resources in the form of virtual switches, routers and subnets for instances. Resource allocation in OpenStack is carried out by nova-scheduler. However, it is unable to support provider objectives such as allocation of resources based on user privileges, preference for the underlying physical infrastructure, or actual resource utilization, e.g., CPU, memory, storage and network bandwidth. An improved nova-scheduler algorithm considers not only RAM and CPU but also vCPU utilization and network bandwidth; this improved nova-scheduler is referred to as the metrics-weight scheduler in this paper. This paper gives a performance evaluation and analysis of the Filter scheduler and the Metrics-weight scheduler.
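A "metrics-weight" style weigher could combine these signals roughly as sketched below: reward free RAM, free CPUs and available bandwidth, and penalize measured vCPU utilization. The field names and multipliers are illustrative assumptions; the paper's scheduler plugs into nova-scheduler rather than standing alone like this.

```python
def weigh(host):
    """Hypothetical composite weight; larger is better."""
    return (0.5 * host["free_ram_mb"] / 1024   # free RAM, in GB
            + 1.0 * host["free_cpus"]          # free physical CPUs
            - 2.0 * host["vcpu_util"]          # measured vCPU utilization, 0..1
            + 0.01 * host["free_bw_mbps"])     # available network bandwidth

hosts = [
    {"name": "a", "free_ram_mb": 4096, "free_cpus": 4, "vcpu_util": 0.9, "free_bw_mbps": 100},
    {"name": "b", "free_ram_mb": 4096, "free_cpus": 4, "vcpu_util": 0.1, "free_bw_mbps": 500},
]
best = max(hosts, key=weigh)
print(best["name"])  # -> b  (equal RAM/CPU, but lower load and more bandwidth)
```

Host "b" wins precisely because of the two metrics the default RAM/CPU-only weigher ignores, which is the point of the comparison in the paper.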
Article
Fog computing is an architecture that extends the traditionally centralized functions of cloud computing to the edge and into close proximity to the things in an Internet of Things network. Fog computing brings many advantages, including enhanced performance, better efficiency, network bandwidth savings, improved security, and resiliency. This article discusses some of the more important architectural requirements for critical Internet of Things networks in the context of exemplary use cases, and how fog computing techniques can help fulfill them.
Conference Paper
Future network services and applications, such as coordinated remote driving or remote surgery, pose serious challenges to the underlying networks. In order to fulfill the extremely low latency requirement in combination with ultra-high availability and reliability, we need novel approaches, for example dynamically moving network "capabilities" close to the users. This requires more flexibility, automation and adaptability to be added to the networks at different levels and operation planes. The key enabler of these novel features is network softwarization provided by NFV and SDN techniques. In this paper, we focus on a central component of the orchestration plane, which is responsible for mapping the building blocks of services to available resources. Our main contribution is twofold. First, we propose a novel service graph embedding algorithm which is able to jointly control and optimize the usage of compute and network resources efficiently based on greedy heuristics. Moreover, the algorithm can be configured extensively to pursue different optimization goals and to trade off running time against search-space size. Second, we report on our implementation and integration with our proof-of-concept orchestration framework ESCAPE. Several experiments confirm its practical applicability.
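The greedy flavor of such an embedding can be illustrated on the simplest case, a service chain: place VNFs one by one, each time choosing a host with enough free capacity that minimizes the delay from the previously placed VNF. Hosts, delays and demands below are made up, and the real algorithm handles full service graphs, bandwidth, backtracking and configurable objectives.

```python
# Assumed substrate: symmetric pairwise delays (ms) and free CPU per host.
delay = {
    ("h1", "h1"): 0, ("h1", "h2"): 5,
    ("h2", "h1"): 5, ("h2", "h2"): 0,
}
free_cpu = {"h1": 2, "h2": 4}

def embed_chain(vnf_cpu_demands, start="h1"):
    """Greedily place each VNF on the lowest-delay host that still fits it."""
    placement, prev = [], start
    for demand in vnf_cpu_demands:
        candidates = [h for h in free_cpu if free_cpu[h] >= demand]
        if not candidates:
            return None  # greedy dead end; a real algorithm widens the search
        host = min(candidates, key=lambda h: delay[(prev, h)])
        free_cpu[host] -= demand
        placement.append(host)
        prev = host
    return placement

placement = embed_chain([2, 3])
print(placement)  # -> ['h1', 'h2']: VNF1 fills h1, forcing VNF2 onto h2
```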
Article
Technological evolution of mobile user equipments (UEs), such as smartphones or laptops, goes hand in hand with the evolution of new mobile applications. However, running computationally demanding applications at the UEs is constrained by their limited battery capacity and energy consumption. A suitable solution for extending the battery lifetime of the UEs is to offload applications that demand heavy processing to a conventional centralized cloud (CC). Nevertheless, this option introduces significant execution delay, consisting of the delivery of the offloaded application to the cloud and back plus the computation time at the cloud. Such delay is inconvenient and makes offloading unsuitable for real-time applications. To cope with the delay problem, a new emerging concept, known as mobile edge computing (MEC), has been introduced. MEC brings computation and storage resources to the edge of the mobile network, enabling highly demanding applications to run at the UE while meeting strict delay requirements. The MEC computing resources can also be exploited by operators and third parties for specific purposes. In this paper, we first describe major use cases and reference scenarios where MEC is applicable. After that, we survey existing concepts integrating MEC functionalities into mobile networks and discuss the current advancement in standardization of MEC. The core of this survey then focuses on the user-oriented use case in MEC, i.e., computation offloading. In this regard, we divide the research on computation offloading into three key areas: i) the decision on computation offloading, ii) the allocation of computing resources within the MEC, and iii) mobility management. Finally, we highlight lessons learned in the area of MEC and discuss open research challenges yet to be addressed in order to fully enjoy the potential offered by MEC.
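The offloading decision in area i), in its simplest latency-only form, compares local execution time against offloading time (upload plus remote compute). This standard textbook comparison is shown below with illustrative numbers; the surveyed schemes additionally model energy, queuing and mobility.

```python
def offload_beneficial(cycles, f_local_hz, f_edge_hz, data_bits, rate_bps):
    """True if upload + edge execution beats local execution in latency."""
    t_local = cycles / f_local_hz                      # run on the UE
    t_offload = data_bits / rate_bps + cycles / f_edge_hz  # upload, then run at edge
    return t_offload < t_local

# 1 Gcycle task, 1 GHz UE vs 10 GHz edge server, 1 MB input over 100 Mbit/s:
# t_local = 1.0 s, t_offload = 0.08 s + 0.1 s = 0.18 s, so offloading wins.
print(offload_beneficial(1e9, 1e9, 10e9, 8e6, 100e6))  # -> True
```

The same comparison flips for data-heavy, compute-light tasks, which is why the decision must be made per application and per channel state.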
Article
Next generation mobile networks not only envision enhancing the traditional MBB use case but also aim to meet the requirements of new use cases, such as the IoT. This article focuses on latency critical IoT applications and analyzes their requirements. We discuss the design challenges and propose solutions for the radio interface and network architecture to fulfill these requirements, which mainly benefit from flexibility and service-centric approaches. The article also discusses new business opportunities through IoT connectivity enabled by future networks.
Conference Paper
Cloud computing systems require a placement logic that decides where to allocate resources. In state-of-the-art platforms such as OpenStack, this scheduler takes into account multiple constraints when starting a new instance, including in particular the required computational and memory resources. However, this scheduling mechanism typically neither considers network requirements of Virtual Machines nor the networking resources that are actually available. In this paper we present an extension of the OpenStack scheduler that enables a network-aware placement of instances by taking into account bandwidth constraints to and from nodes. Our solution keeps track of host-local network resource allocation, and it can be combined with bandwidth enforcement mechanisms such as rate limiting. We present a prototype that requires only very few changes in the OpenStack open source software. Testbed measurement results demonstrate the benefit of our solution compared to the OpenStack default approach.
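The core idea, host-local bandwidth accounting plus a filtering step, can be sketched as follows. The class, field names and the enforcement hook are assumptions for illustration, not the paper's OpenStack code.

```python
class BandwidthTracker:
    """Tracks per-host NIC bandwidth allocations for network-aware filtering."""

    def __init__(self, nic_capacity_mbps):
        self.capacity = dict(nic_capacity_mbps)        # host -> NIC capacity
        self.allocated = {h: 0 for h in self.capacity}  # host -> reserved Mbps

    def filter_hosts(self, requested_mbps):
        """Keep only hosts with enough remaining bandwidth headroom."""
        return [h for h in self.capacity
                if self.capacity[h] - self.allocated[h] >= requested_mbps]

    def allocate(self, host, requested_mbps):
        # After placement, record the reservation; a rate limiter would enforce it.
        self.allocated[host] += requested_mbps

bt = BandwidthTracker({"hostA": 1000, "hostB": 1000})
bt.allocate("hostA", 900)
print(bt.filter_hosts(200))  # -> ['hostB']: hostA has only 100 Mbps left
```

Combined with the default compute/RAM filters, such a tracker is what lets the scheduler avoid hosts whose NICs are already promised away even if their CPUs are idle.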
Article
Given a graph representing a substrate (or physical) network with node and edge capacities and a set of virtual networks with node capacity demands and node-to-node traffic demands, the Virtual Network Embedding problem (VNE) calls for an embedding of (a subset of) the virtual networks onto the substrate network which maximizes the total profit while respecting the physical node and edge capacities. In this work, we investigate the computational complexity of VNE. In particular, we present a polynomial-time reduction from the maximum stable set problem which implies strong NP-hardness for VNE even for very special subclasses of graphs and yields a strong inapproximability result for general graphs. We also consider the special cases obtained when fixing one of the dimensions of the problem to one. We show that VNE is still strongly NP-hard when a single virtual network request is present or when each virtual network request consists of a single virtual node, and that it is weakly NP-hard for the case with a single physical node.
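The profit-maximizing embedding described here is commonly written as an integer program of the following generic shape (shown for intuition; the paper's exact model may differ, and the flow-conservation constraints routing virtual edges over substrate paths are omitted). Here $x_v$ accepts virtual network $v$ with profit $p_v$, $y^v_{iu}$ maps virtual node $i$ (demand $d_i$) onto substrate node $u$ (capacity $c_u$), and substrate edges $e$ have capacities $b_e$:

```latex
\begin{align*}
\max\ & \textstyle\sum_{v} p_v\, x_v
  && \text{(total profit of accepted virtual networks)} \\
\text{s.t.}\ & \textstyle\sum_{v}\sum_{i \in N_v} d_i\, y^v_{iu} \le c_u
  && \forall u \in N_s \quad \text{(substrate node capacities)} \\
& \textstyle\sum_{v}\sum_{(i,j) \in E_v} t_{ij}\, z^v_{ij,e} \le b_e
  && \forall e \in E_s \quad \text{(substrate edge capacities)} \\
& \textstyle\sum_{u \in N_s} y^v_{iu} = x_v
  && \forall v,\ i \in N_v \quad \text{(map every node iff accepted)} \\
& x_v,\ y^v_{iu},\ z^v_{ij,e} \in \{0,1\}
\end{align*}
```

The hardness results then say that even heavily restricted special cases of this program (one request, one virtual node per request, or one physical node) remain NP-hard.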
Article
Network virtualization is recognized as an enabling technology for the future Internet. It aims to overcome the resistance of the current Internet to architectural change. Application of this technology relies on algorithms that can instantiate virtualized networks on a substrate infrastructure, optimizing the layout for service-relevant metrics. This class of algorithms is commonly known as "Virtual Network Embedding (VNE)" algorithms. This paper presents a survey of current research in the VNE area. Based upon a novel classification scheme for VNE algorithms a taxonomy of current approaches to the VNE problem is provided and opportunities for further research are discussed.