Figure 1 - available via license: Creative Commons Attribution 2.0 Generic
OpenStack architecture.


Source publication
Article
Full-text available
In this paper, we describe the development of template management technology to build virtual resource environments on OpenStack. In recent years, cloud computing has progressed and open-source cloud software has become widespread. The authors are developing cloud services using OpenStack. There are technologies which deploy a set of virtual...

Similar publications

Conference Paper
Edge computing has become a recent approach to bring computing resources closer to the end-user. While offline processing and aggregate data reside in the cloud, edge computing is promoted for latency-critical and bandwidth-hungry tasks. In this direction, it is crucial to quantify the expected latency reduction when edge servers are preferred over...
Article
The cloud was originally designed to provide general-purpose computing using commodity hardware and its focus was on increasing resource consolidation as a means to lower cost. Hence, it was not particularly adapted to the requirements of multimedia applications that are highly latency sensitive and require specialized hardware, such as graphical p...
Article
Cloud-based content delivery networks (CCDNs) have been developed as the next generation of content delivery networks (CDNs). In CCDNs, the cloud contributes the cost-effective, pay-as-you-go model and virtualization, and the traditional CDNs contribute content replication. Delivering infrastructure as a service in a networked cloud computin...
Conference Paper
This paper investigates the feasibility of offloading resource-intensive computational tasks from a vehicle to a group of neighboring vehicles over vehicle-to-vehicle (V2V) networks to complement traditional cloud / edge computing infrastructure. Although the recent research work has investigated the potential of such a virtual edge server consisti...

Citations

... Other types of open-source IaaS software in addition to OpenStack [3] are OpenNebula [18], Eucalyptus [19], and CloudStack [5]. OpenNebula is a virtual infrastructure [14] for template extraction. ...
Preprint
We propose a technique for automatic verification of software patches for user virtual environments on Infrastructure as a Service (IaaS) clouds to reduce the cost of verifying patches. IaaS services have been spreading rapidly, and many users can customize virtual machines on an IaaS cloud like their own private servers. However, users must install and verify software patches for the OS or middleware installed on their virtual machines by themselves, which increases their operation costs. Our proposed method replicates user virtual environments, extracts verification test cases for those environments from a test case database (DB), distributes patches to virtual machines in the replicated environments, and executes the test cases automatically on them. To reduce test-case creation effort, we propose a two-tier abstraction that groups software into software groups and function groups and selects the test cases belonging to each group. We applied the proposed method on OpenStack using Jenkins and confirmed its feasibility. We evaluated the reduction in test-case creation effort and the automatic verification performance of environment replication, test-case extraction, and test-case execution. Y. Yamato, "Automatic Verification Technology of Software Patches for User Virtual Environments on IaaS Cloud," Journal of Cloud Computing, Springer, Vol.4, No.4, DOI: 10.1186/s13677-015-0028-6, Feb. 2015.
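The two-tier abstraction described in this abstract can be sketched roughly as follows: concrete software is mapped to a software group, each software group to a function group, and test cases attached at either tier are collected for a given environment. All names and data below are illustrative assumptions, not the paper's actual database.

```python
# Two-tier test-case selection sketch: tier 1 maps software to software
# groups, tier 2 maps software groups to function groups; test cases can
# be registered at either tier. (Illustrative data, not the paper's DB.)

SOFTWARE_TO_GROUP = {
    "mysql-5.7": "MySQL", "mariadb-10.3": "MySQL",
    "apache-2.4": "Apache",
}
GROUP_TO_FUNCTION = {
    "MySQL": "RDB", "Apache": "WebServer",
}
TEST_CASE_DB = {
    # tier 1: software-group-specific test cases
    "MySQL": ["mysql_login_test", "sql_crud_test"],
    "Apache": ["http_get_test"],
    # tier 2: function-group-wide test cases shared by all members
    "RDB": ["transaction_commit_test"],
    "WebServer": ["port_80_listen_test"],
}

def select_test_cases(installed_software):
    """Collect the test cases for every installed item via both tiers."""
    cases = []
    for sw in installed_software:
        group = SOFTWARE_TO_GROUP.get(sw)
        if group is None:
            continue  # unknown software: no test cases registered
        cases.extend(TEST_CASE_DB.get(group, []))
        cases.extend(TEST_CASE_DB.get(GROUP_TO_FUNCTION.get(group), []))
    return sorted(set(cases))

print(select_test_cases(["mysql-5.7", "apache-2.4"]))
```

Because test cases attach to groups rather than to individual software versions, adding a new MySQL-compatible package only requires one new entry in the tier-1 map.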
... These platforms use online global labor markets to outsource their activities; since this outsourcing does not include their key competencies, it would not be considered a new development for the businesses. Amazon was one of the pioneers in this process, providing a template for coordinating purchasing while streamlining related activities (Yamato et al., 2014), such as translation, the wording of smaller documents, and customer service (Pavlick et al., 2014). Encouraged by the model's popularity, the same service was made available to other businesses, making Amazon Mechanical Turk one of the largest online labor markets in the world. ...
... OpenStack [16][17][18] is a cloud-based computing software platform. Users deploy it mainly as an IaaS solution. ...
Article
In recent years, virtualization has become one of the key technologies of next-generation data centers. However, the problem with virtualization technology is that each instance needs to run a guest operating system and many applications, which can generate a heavy load and affect system efficiency and performance. In this work, the performance of three environments (bare-metal, Docker containers, and virtual machines) is evaluated to understand the differences between the characteristics of each environment. We also address whether container-based virtualization can solve the problems of traditional virtualization. In addition, we combined Docker with OpenStack to implement a container management platform. Finally, we took Hadoop deployment as an example to verify whether Docker can solve the deployment problem and save time.
... A lot of technologies and frameworks exist that can be used to establish public or private clouds; the key cloud platforms, ranked by market share and popularity, are listed below [37] [44]. • Eucalyptus is open-source software for building AWS-compatible private and hybrid clouds; it started as a research project at UCSB. ...
... Thus [11] it presents a challenge for cryptologists to design and provide a general-purpose encryption algorithm that satisfies public-key encryption standards. After Diffie-Hellman [12] came the RSA public-key cryptosystem. After RSA, ElGamal built on the Diffie-Hellman key exchange algorithm by introducing a random exponent k. ...
Article
Full-text available
Even though OpenStack boosts business agility, availability, and efficiency by providing a platform with on-demand, resource-pooling, self-service, highly elastic, and measured-service capabilities, improvement is needed in components such as Neutron, which provides the networking capability for OpenStack, and Cinder, which acts as the block-storage component, due to centralized administration in the OpenStack cloud environment. The problem with these components is that they are more susceptible to external attacks, respond unpredictably depending on the network load, and lack integrity guarantees, since Cinder shares simultaneous access to the same data. To meet these requirements, this work proposes AutoSec SDN-XTR (Automated end-to-end Security in Software Defined Networks - Efficient and Compact Subgroup Trace Representation). To address the security challenges, an efficient security algorithm, XTR, is additionally proposed for encrypting file content; it also involves a trace operation to incorporate integrity checking. It provides security by using Diffie-Hellman for key agreement (both the public and private keys) and the ElGamal approach for encryption. After the networking step, the file content is stored in the Cinder block-store environment. In the Cinder store, an erasure-code algorithm is used for data recovery; less storage is needed because replicas are not used and file content is not duplicated - instead, only parity data is created, as in RAID (Redundant Array of Independent Disks). The unique data recovered in Cinder are already secured by XTR encryption and can be distributed effectively.
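The Diffie-Hellman key agreement and ElGamal encryption mentioned in this abstract can be illustrated with a toy example. The parameters below are tiny and deliberately insecure, and this is a textbook ElGamal sketch, not the paper's XTR algorithm.

```python
# Toy ElGamal cipher built on the Diffie-Hellman construction, showing how
# a (private, public) pair arises from g^x and how a fresh random exponent
# k is used per encryption. Parameters are illustrative and NOT secure.
import random

P = 467   # small prime modulus (far too small for real use)
G = 2     # generator of the multiplicative group mod P

def keygen():
    x = random.randrange(2, P - 1)   # private key x
    return x, pow(G, x, P)           # public key y = g^x mod p

def encrypt(pub, m):
    k = random.randrange(2, P - 1)   # fresh random exponent k per message
    return pow(G, k, P), (m * pow(pub, k, P)) % P   # (g^k, m * y^k)

def decrypt(priv, c1, c2):
    s = pow(c1, priv, P)                 # shared secret (g^k)^x = y^k
    return (c2 * pow(s, P - 2, P)) % P   # divide by s (Fermat inverse; P prime)

priv, pub = keygen()
c1, c2 = encrypt(pub, 123)
assert decrypt(priv, c1, c2) == 123
```

Because k is chosen fresh for every message, encrypting the same plaintext twice yields different ciphertexts, which is the property the excerpt above attributes to ElGamal's random exponent.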
... Performance requirements are server throughput or latency requirements. Note that if a user would like to replicate an existing virtual environment, the technology of [19] can be used to extract a template of the existing environment. ...
Article
In this paper, we propose a server structure proposal and automatic performance verification technology that proposes and verifies an appropriate server structure on Infrastructure as a Service (IaaS) clouds with bare metal servers, container-based virtual servers, and virtual machines. Recently, cloud services have progressed, and providers offer not only virtual machines but also bare metal servers and container-based virtual servers. However, users need to design an appropriate server structure for their requirements based on the quantitative performance of these three types, which requires much technical knowledge to optimize system performance. Therefore, we study a technology that satisfies users' performance requirements on these three types of IaaS cloud. Firstly, we measure the performance of a bare metal server, Docker containers, and KVM (Kernel-based Virtual Machine) virtual machines on OpenStack while changing the number of virtual servers. Secondly, we propose a server structure proposal technology based on the measured quantitative data: it receives an abstract template of OpenStack Heat and function/performance requirements, and then creates a concrete template with server specification information. Thirdly, we propose an automatic performance verification technology that executes the necessary performance tests automatically on provisioned user environments according to the template.
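The abstract-to-concrete template step described in this abstract can be sketched as follows. The throughput numbers, cost ordering, and template fields are illustrative assumptions, not the paper's measured data.

```python
# Sketch of a "server structure proposal": given an abstract template and a
# performance requirement, pick a server type from measured quantitative
# data and emit a concrete template. (Illustrative numbers and fields only.)

# Measured relative throughput per server type (baremetal normalized to 1.0).
MEASURED_THROUGHPUT = {"baremetal": 1.00, "container": 0.95, "vm": 0.75}

def propose(abstract_template, min_relative_throughput):
    """Return a concrete template using the cheapest adequate server type,
    assuming the cost order vm < container < baremetal."""
    for server_type in ("vm", "container", "baremetal"):
        if MEASURED_THROUGHPUT[server_type] >= min_relative_throughput:
            concrete = dict(abstract_template)
            concrete["server_type"] = server_type  # fill in the server spec
            return concrete
    raise ValueError("no server type meets the performance requirement")

print(propose({"name": "web-tier", "image": "ubuntu-20.04"}, 0.9))
```

With a requirement of 0.9 relative throughput, the VM type is rejected and the container type is proposed; a requirement above 0.95 would force baremetal.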
... Especially during the application deployment phase, automation is essential to guarantee that different software components, such as the cloud's software stack, the virtual machines (VMs), external services, etc., cooperate in a synchronized manner so as to successfully deploy a given application to the cloud. This challenging task has been both an active research field [4][5][6][7][8][9][10][11] and the objective of many production systems operated by modern cloud providers [12][13][14][15]. These approaches differ in various aspects: some specialize in specific applications (e.g., Openstack Sahara [13] focuses on deploying data-processing systems to Openstack), whereas others [7,16] support an application description language and allow the user to define the application structure. ...
... Note that, although this is an interesting formulation, the hypothesis that each script is accompanied by another script that executes undo actions is rather strong and, in many cases, impossible. Yamato et al. in [10] diagnosed some insufficiencies of the state-of-the-art Heat and CloudFormation deployment tools and proposed a methodology through which Heat templates can be shared among users, extracted from existing deployments, and used to trigger resource updates. The authors of this work also diagnosed the problem of partial deployments due to transient errors and described a rollback mechanism to delete resources generated by failed deployments. ...
Article
Application deployment is a crucial operation for modern cloud providers. The ability to dynamically allocate resources and deploy a new application instance based on a user-provided description in a fully automated manner is of great importance for the cloud users as it facilitates the generation of fully reproducible application environments with minimum effort. However, most modern deployment solutions do not consider the error-prone nature of the cloud: Network glitches, bad synchronization between different services and other software or infrastructure related failures with transient characteristics are frequently encountered. Even if these failures may be tolerable during an application’s lifetime, during the deployment phase they can cause severe errors and lead it to failure. In order to tackle this challenge, in this work we propose AURA, an open source system that enables cloud application deployment with transient failure recovery capabilities. AURA formulates the application deployment as a Directed Acyclic Graph. Whenever a transient failure occurs, it traverses the graph, identifies the parts of it that failed and re-executes the respective scripts, based on the fact that when the transient failure disappears the script execution will succeed. Moreover, in order to guarantee that each script execution is idempotent, AURA adopts a lightweight filesystem snapshot mechanism that aims at canceling the side effects of the failed scripts. Our thorough evaluation indicated that AURA is capable of deploying diverse real-world applications to environments exhibiting high error probabilities, introducing a minimal time overhead, proportional to the failure probability of the deployment scripts.
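The retry idea described in this abstract can be sketched in a few lines: deployment scripts form a DAG, are executed in topological order, and a script hit by a transient failure is simply re-executed until it succeeds. The snapshot mechanism that cancels side effects is omitted, and the names below are illustrative, not AURA's actual API.

```python
# DAG-based deployment with transient-failure retry (minimal sketch).
from graphlib import TopologicalSorter

def deploy(dag, run_script, max_retries=5):
    """dag maps node -> set of prerequisite nodes; run_script(node) -> bool."""
    order = list(TopologicalSorter(dag).static_order())
    for node in order:
        for _ in range(max_retries):
            if run_script(node):
                break  # script succeeded, move on to the next node
        else:
            raise RuntimeError(f"{node} failed {max_retries} times")
    return order

# Simulate a transient failure: "db" fails twice, then succeeds.
failures = {"db": 2}
def run_script(node):
    if failures.get(node, 0) > 0:
        failures[node] -= 1
        return False
    return True

print(deploy({"app": {"db"}, "db": set()}, run_script))  # ['db', 'app']
```

The retry loop relies on the assumption stated in the abstract: once the transient failure disappears, re-executing the same (idempotent) script succeeds.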
... Then, the IaaS controller creates compute resources. Note that if users would like to create not just one compute server but several resources, such as virtual routers, the server selection function sends templates that describe the user environment structures in JavaScript Object Notation (JSON) and provisions them with OpenStack Heat [20] or another orchestration technology [41]. Fig. 4 Processing steps of server configuration and reconfiguration. ...
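As an illustration of the kind of JSON template the excerpt refers to, the sketch below builds a minimal Heat-style description of one server attached to one network. The resource types follow Heat's `OS::...` naming convention, while the resource names, image, and flavor values are illustrative assumptions.

```python
# Build a minimal Heat-style environment template as JSON: one Neutron
# network plus one Nova server attached to it. (Illustrative values only.)
import json

template = {
    "heat_template_version": "2016-10-14",
    "resources": {
        "user_net": {
            "type": "OS::Neutron::Net",
        },
        "user_server": {
            "type": "OS::Nova::Server",
            "properties": {
                "image": "ubuntu-20.04",   # illustrative image name
                "flavor": "m1.small",      # illustrative flavor name
                "networks": [{"network": {"get_resource": "user_net"}}],
            },
        },
    },
}
print(json.dumps(template, indent=2))
```

Handing such a document to an orchestrator lets it create the network and the server together, resolving the `get_resource` reference so the server lands on the freshly created network.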
Article
We propose a server selection, configuration, reconfiguration, and automatic performance verification technology to meet users' functional and performance requirements on various types of cloud compute servers. "Various servers" means not only virtual machines on normal CPU servers but also container or baremetal servers on powerful graphics processing unit (GPU) servers or field-programmable gate arrays (FPGAs) with configurations that accelerate specified computations. Early cloud systems were composed of many PC-like servers, and virtual machines on these servers used distributed processing technology to achieve high computational performance. However, recent cloud systems have changed to make the best use of advances in hardware power. It is well known that baremetal and container performance is better than virtual machine performance, and dedicated processing servers, such as powerful GPU servers for graphics processing and FPGA servers for specified computation, have increased. Our objective in this study was to enable cloud providers to provision compute resources on appropriate hardware based on user requirements, so that users can easily benefit from high application performance. Our proposed technology selects appropriate servers for user compute resources from various types of hardware, such as GPUs and FPGAs, or sets appropriate configurations or reconfigurations of FPGAs to use hardware power. Furthermore, it automatically verifies the performance of provisioned systems. We measured provisioning and automatic performance verification times to show the effectiveness of our technology.
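The server selection idea in this abstract can be sketched as a mapping from a user's functional requirement and performance target to one of the hardware types mentioned (VM, container, baremetal, GPU server, FPGA server). The requirement vocabulary and the selection rules below are illustrative assumptions, not the paper's actual logic.

```python
# Toy server selection: choose a hardware type from functional and
# performance requirements. (Illustrative categories and rules only.)

def select_server(function_req, performance_req):
    """function_req: kind of workload; performance_req: 'high', 'medium',
    or 'normal'."""
    if function_req == "graphics":
        return "gpu_server"
    if function_req == "specified_computation":
        # an FPGA server would additionally need a circuit (re)configuration
        return "fpga_server"
    # general-purpose workloads: trade density for raw performance
    if performance_req == "high":
        return "baremetal"
    return "container" if performance_req == "medium" else "vm"

print(select_server("graphics", "high"))   # gpu_server
```

A real implementation would back these rules with measured data, as the abstract describes, rather than a fixed lookup.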
... Cloud computing providers such as Microsoft Azure, Google Cloud Platform, Amazon Web Services, and others have developed templating systems that allow users to describe a set of cloud infrastructure components in a declarative manner. These templates can be used to create a virtualized compute system in the cloud using a language such as JSON or YAML, both of which are human-readable data formats [11]. Templates allow developers to manage infrastructure such as web servers, data storage, and fully configured networks and firewalls as code. ...
Article
Cloud computing has revolutionized the development and operations of hardware and software across diverse technological arenas, yet academic biomedical research has lagged behind despite the numerous and weighty advantages that cloud computing offers. Biomedical researchers who embrace cloud computing can reap rewards in cost reduction, decreased development and maintenance workload, increased reproducibility, ease of sharing data and software, enhanced security, horizontal and vertical scalability, high availability, a thriving technology partner ecosystem, and much more. Despite these advantages that cloud-based workflows offer, the majority of scientific software developed in academia does not utilize cloud computing and must be migrated to the cloud by the user. In this article, we present 11 quick tips for architecting biomedical informatics workflows on compute clouds, distilling knowledge gained from experience developing, operating, maintaining, and distributing software and virtualized appliances on the world’s largest cloud. Researchers who follow these tips stand to benefit immediately by migrating their workflows to cloud computing and embracing the paradigm of abstraction.