Effect of Latency on Network and End User
Domains in Cloud Computing
Malvinder Singh Bali (Research Scholar)
Dept of Computer Science and engineering
CT College of Engineering and Technology, Shahpur
Jalandhar, India
mbali4964@gmail.com
Shivani Khurana (Assistant Professor)
Dept of Computer Science and engineering
CT College of Engineering and Technology, Shahpur
Jalandhar, India
shivani.khurana27@gmail.com
Abstract— Cloud computing provides on-demand access to a shared pool of resources. Cloud vendors supply the applications and the enabling technology, infrastructure, hardware, software, and integration for the client. The flip side of cloud computing is its dependence on the availability and performance of the client's internet connection [8]. If excessive network latency causes an application to spend a large amount of time waiting for responses from a distant data center, bandwidth is not used effectively and performance suffers. In this paper we present a practical approach to estimating the network latency that hampers cloud performance, and in a further section we show, with simulated results, the effect of a DDoS attack on cloud computing.
Keywords— Cloud Computing, Latency, Botnet, Performance, Availability
I. CLOUD COMPUTING
Cloud computing is the outsourcing of data-center functionality and the delivery of desktop applications online over a network connection [10]. Companies are moving to the cloud to cut IT costs while retaining security with a smaller IT staff; expenses drop even as network traffic roughly doubles. The main concerns for organizations are the availability of data, the quality of the network, and its performance.
The primary motives behind organizations moving to the cloud are cost reduction and dynamic resource allocation. Since the infrastructure is hosted by the cloud provider, software enterprises do not need to worry about its maintenance. Characteristics such as scalability, elasticity, multi-tenancy, and pay-per-use also make cloud computing one of the most sought-after technologies today.
A. How to Use the Cloud
Simply log on to a site that offers cloud facilities, sign up for the service, and pay online if it is not free. DropBox.com, Zoho.com, and Google Docs are examples of cloud sites.
B. Services Offered
Email - Whether a user uses Gmail, Yahoo Mail, or Outlook.com, the email is stored in the cloud. The front-end website you visit connects you to the cloud back end, where the data is stored; your email is sent to the nearest server.
File storage - We once stored files on floppy disks and later on CDs. Now storage is easy and cheap because it is on the web. Box.net or Dropbox stores your files and lets you switch easily between laptop storage and cloud storage.
Sound/Video - When you watch a video on YouTube, cloud servers at the back end stream your favorite video. Sites such as songspk.com and Mp3.com let us listen to music for free in a similar manner.
Social media - Social media sites such as Facebook, Twitter, or Google+ would have remained small, localized affairs without cloud computing. With strong networks, many routers, and high-speed computers behind the scenes moving data at the speed of light, social media networks have become a great example of cloud computing and storage.
C. Advantages
1. As long as you are connected to the internet, you can access the cloud service and store data at very low cost; various sites even offer basic facilities free of charge. Basic facilities on paid sites cost around $10 (roughly 500 rupees) a month.
2. Storing information in the cloud gives you almost unlimited storage space, so you no longer need to worry about running out of space or expanding your current storage.
3. Cloud computing lets you deploy applications quickly, making a user's entire system fully functional in a matter of seconds.
4. Cloud service providers are usually competent at recovering information, which makes backup and recovery much simpler than with traditional methods of data storage.
D. Types of Cloud
1. Public Cloud - A public cloud follows the standard cloud computing model, in which services such as infrastructure, applications, and storage are available to the general public over the internet. Public cloud services can be free or offered on a pay-per-use basis [11]. Examples: Amazon Elastic Compute Cloud (EC2), Sun Cloud, Google App Engine, and Windows Azure services.
2. Private Cloud - Also called an internal or corporate cloud, this is a cloud set up within an organization that provides hosted services to a limited number of people behind a firewall [12]. Examples: Amazon Web Services, VMware, and Salesforce.com.
3. Hybrid Cloud - A cloud computing environment in which an organization provides and manages some resources in house and has others provided externally [13]. Examples: IBM, Hewlett-Packard, and EMC.
II. RELATED WORK/LITERATURE SURVEY
Adam Wolfe Gordon and Paul Lu [3] proposed Nahanni Memcached, a memcached variant that reduces the communication overhead between virtual machines (VMs) located on the same server; used with VDE networking, it improves total read latency by up to 45% on a read-heavy workload compared to standard memcached.
Ajith Singh and Hemalatha [1] surveyed how latency varies across geographical locations and analyzed how different browsers exhibit different latencies. A bandwidth test showed that opening cloud-based Google Docs over a cybercafé or GPRS connection took 20 seconds, whereas on a university campus with a 5.4 Mbps connection it opened in 2 seconds. They expect the problem of latency in the cloud network to ease with the faster adoption of 3G and 4G in the coming years.
Mohammad Heidari [2] highlighted modelling and simulation of different kinds of computer network attacks and their impact on computers and networks, explained applications of modelling and simulation in computer network security, and presented a comprehensive proposal for the problems of modelling and simulation in the field of information security. He simulated a cloud network and implemented a botnet attack on one of the cloud applications (FTP) to analyze the effect of the attack on the FTP server.
Ankush Veer Reddy [4] proposed a security model for cloud-based applications that implements a firewall, using two applications (a web-based application and a database application) to simulate and test the efficiency of the model.
Pardeep Sharma, Sandeep Sood and Sumeet Kaur [8] presented the benefits of cloud computing along with its flip side. Their paper also introduces various issues in cloud computing, suggests possible measures to overcome them, and proposes an algorithm to calculate and compare net revenue when using the cloud versus a data center.
Sonia and Satinderpal Singh [9] reviewed academic research in the field of energy-efficient cloud environments, providing an overview of energy consumption in different types of networks at various download/upload speeds and of how network performance is computed.
Raihana Abdullah, Mohd Faizal Abdullah, Zul Azri Muhamad, Mohd Zakri Mas Ud, Siti Rahayu Selamat and Robiah Yusuf [6] addressed current trends in botnet detection techniques and identified the significant criteria of each technique. Several existing techniques from various researchers are analyzed, along with the capability criteria of botnet detection techniques, and the techniques are mapped onto the selected detection criteria.
Ashraf Zia and Muhammad Naeem Ahmad Khan [7] discussed performance issues in cloud computing. A number of schemes pertaining to QoS issues are critically analyzed to point out their strengths and weaknesses, and performance parameters at the three basic layers of the cloud (IaaS, PaaS and SaaS) are also discussed. The paper further examines the key challenges of how resources are allocated to clients and the role of cloud providers, and investigates how performance can be increased by improving various components in a scalable way at low cost with better performance and QoS. Some technical and functional issues that affect cloud performance are also pointed out.
Nagaraju Kilari and Dr. R. Sridaran [5] proposed a classified model of various security threats and illustrated how cloud and virtualization vulnerabilities affect the different cloud models. The classification of security threats presented in their paper helps cloud users make proper choices and helps cloud service providers handle such threats efficiently. As more cloud-based applications evolve, the associated security threats also grow, and much existing research on cloud security exists only in partial form, covering either cloud-specific issues or virtualization-related security issues.
III. DOMAINS OF LATENCY FROM CLOUD TO END USER
Latency can occur within the cloud, in the networks connecting the cloud to the end user, and at the user end. The subsections below describe each domain and how its latency can be estimated.
A. Intra-Cloud Latency
Within the cloud, latency can arise when two VMs co-located on the same server communicate with each other. This problem is mitigated by Nahanni memcached, a port of the well-known memcached that uses inter-VM shared memory instead of a virtual network for cache reads [3]. Facebook, for example, employs memcached as one of several caching layers.
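Regardless of which memcached variant sits behind it, the application-side pattern such caching layers serve is a cache-aside read: check the fast cache first and fall back to the slower store only on a miss. A minimal illustrative sketch, with a plain dictionary standing in for memcached and a sleep standing in for network latency (all names here are ours, not from the paper):

```python
import time
from typing import Optional

backing_store = {"user:42": "Alice"}   # stands in for a distant database
cache: dict = {}                       # stands in for (Nahanni) memcached

def cached_get(key: str) -> Optional[str]:
    """Cache-aside read: serve from the cache if present, else fetch and populate."""
    if key in cache:
        return cache[key]              # fast path: local / shared-memory read
    time.sleep(0.01)                   # simulate network latency to the distant store
    value = backing_store.get(key)
    if value is not None:
        cache[key] = value             # populate the cache for subsequent reads
    return value

cached_get("user:42")                  # first read takes the slow path and fills the cache
assert "user:42" in cache              # later reads for this key now hit the cache
```

The point of an inter-VM shared-memory cache such as Nahanni is precisely to make the fast path above cheaper than a virtual-network round trip.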
B. Network Latency
Network latency causes applications to spend time waiting for responses from a distant data center, so the bandwidth may not be fully utilized and performance will suffer [3]. Network latency comprises propagation delay, node delay, and congestion delay; good network design can minimize node delay and congestion delay, but not propagation delay [9].
Network delay is the time it takes for a bit of data to move across the network from one node to another.
Propagation delay - The time taken for the head of a signal to travel from the sender to the receiver; it is the ratio of the link length to the propagation speed over the specific medium.
Congestion delay - Network congestion occurs when a link or node carries so much data that its quality of service deteriorates. Typical effects of congestion include packet loss, blocking of new connections, and queuing delay.
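The propagation-delay definition above (link length divided by propagation speed) can be evaluated directly. A small sketch under assumed values not stated in the paper: signals in optical fiber travel at roughly two-thirds of the speed of light.

```python
# Propagation delay = link length / propagation speed (one way).
# Assumption (ours): signal speed in fiber is about 2/3 of the speed of light.

SPEED_OF_LIGHT_MPS = 299_792_458            # metres per second, in vacuum
FIBER_SPEED_MPS = SPEED_OF_LIGHT_MPS * 2 / 3
METRES_PER_MILE = 1609.344

def propagation_delay_ms(link_miles: float, speed_mps: float = FIBER_SPEED_MPS) -> float:
    """One-way propagation delay over the link, in milliseconds."""
    return link_miles * METRES_PER_MILE / speed_mps * 1000

print(round(propagation_delay_ms(200), 2))  # → 1.61 (ms, one way over 200 miles)
```

This physical lower bound (about 3.2 ms round trip for 200 miles of fiber) is close to the coarser rule of thumb of roughly 1 ms per 100 miles of round-trip distance used in the worked example later in this section.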
C. Processing Delay
The processing delay is the time routers take to process the data; it is an important component of network delay.
Fig 1. Diagrammatic representation for estimating total network latency
In the network of Fig 1, let the distance between the two local area networks be 200 miles, and assume each router adds 2 ms. The current average network-link utilization without the storage application is 15%, so the bandwidth available for the new storage application is 85% (i.e. 100% - 15%).
The distance between the end points of the network link is 200 miles, so the round-trip distance is 200*2 = 400 miles, giving a round-trip propagation delay of about 4 ms. There are two routers in the path taken by the data, so the estimated round-trip node delay is 2 nodes * 2 ms = 4 ms. Adjusting for congestion and processing, the round-trip node delay rises to 4 ms / 0.85, approximately 5 ms. Hence the total network latency (propagation delay plus congestion-adjusted node delay) is about 9 ms (4 ms + 5 ms).
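The arithmetic of this estimate can be reproduced in a few lines. The function name and parameters below are ours; the 1 ms per 100 miles round-trip rule of thumb and the congestion adjustment mirror the worked example above.

```python
# Back-of-the-envelope reproduction of the Fig. 1 estimate (values from the text).

def total_latency_ms(distance_miles: float, routers: int, per_router_ms: float,
                     link_utilization: float) -> float:
    """Round-trip latency estimate: propagation + congestion-adjusted node delay.

    Uses a rule of thumb of 1 ms per 100 miles of round-trip distance, and scales
    the router (node) delay up by the fraction of bandwidth still available.
    """
    propagation = (distance_miles * 2) / 100           # round trip, 1 ms per 100 miles
    node = routers * per_router_ms                     # 2 routers x 2 ms = 4 ms
    congested_node = node / (1 - link_utilization)     # 4 ms / 0.85 ≈ 4.7 ms
    return propagation + congested_node

# 200-mile link, 2 routers at 2 ms each, 15% existing link utilization:
print(round(total_latency_ms(200, 2, 2.0, 0.15)))      # → 9 (ms), matching the text
```

Note that the estimate is a sum of two terms, not three: the 5 ms congestion-adjusted node delay already contains the 4 ms raw node delay.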
IV. PERFORMANCE MONITORING AREAS IN CLOUD
In cloud computing, cloud service providers deliver services to cloud service consumers. Service Level Agreements are very important in a cloud environment, since the customer pays for the services and infrastructure he uses [7]. Performance monitoring of the cloud should therefore track how well each component of the cloud delivers the expected services.
A. Infrastructure Performance
Cloud service providers offer virtual machines, storage, networks, etc. as infrastructure services, and monitoring the performance of these components is of paramount importance. A metric called Infrastructure Response Time (IRT) is used to gauge the performance of the virtual cloud environment. IRT is defined as the time it takes for an application to place a request for work on the virtual environment and for the virtual environment to complete the request; the request could be a simple data exchange.
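Because IRT is defined as wall-clock time from placing a request to its completion, it can be measured by timing the request at the caller. A hedged sketch; the `do_request` placeholder below stands in for a real call into the virtual environment and is our own invention:

```python
import time

def measure_irt_ms(request) -> float:
    """Infrastructure Response Time: elapsed wall-clock time for one request, in ms."""
    start = time.perf_counter()
    request()                           # e.g. a simple data exchange with a VM
    return (time.perf_counter() - start) * 1000

# Placeholder request standing in for work placed on the virtual environment:
def do_request():
    time.sleep(0.005)                   # simulate 5 ms of work

irt = measure_irt_ms(do_request)
print(f"IRT ≈ {irt:.1f} ms")
```

In practice such a probe would be run periodically against each monitored component, with the samples aggregated into percentiles rather than single readings.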
B. Application Performance
Application performance refers to the performance of applications hosted in the cloud. The application response time, the key metric in application performance monitoring, measures the time taken for an application to respond to user requests.
C. Virtualization Performance
As with physical machines, performance monitoring of virtualization depends on the number of virtual machines used, and virtualization threats also hamper cloud performance [5]. Other virtual-machine parameters used to measure performance include:
- the number of VMs used by the application;
- the time taken to create a new VM;
- the time taken to move an application from one VM to another;
- the time in which additional resources are allocated to a VM.
V. LATENCY AT THE USER END
This section examines how service-disruption attacks at the user end hamper the performance of the cloud. A DDoS-style botnet attack [6] is applied to an existing cloud-based model, and performance is evaluated by analyzing the calculated results to see the overall effect of the attack on the cloud.
A. Proposed Architecture
We first created a normal cloud-based scenario [4] with two applications: a database application and a web-based application. A 10Base_T LAN workstation object acts as the home office supporting 150 workstations. Two PPP server objects act as the database server and the web server, and an IP32_cloud object acts as the internet cloud. The application configuration object defines the applications, and the profile configuration object defines the application profiles. Secondly, we implemented a firewall in the cloud-based scenario, with one router acting as a firewall to perform filtering. In the third scenario we implemented a botnet attack on the firewall-based cloud scenario [2] by increasing the number of workstations from 150 to 250, with 150 users accessing the database.
Fig 2.Normal Cloud Scenario
Fig 3.Firewall Security model
Fig 4.Botnet attack on Secure Cloud model
Table 1: Application description

    Attribute    Load
    Database     High Load
    Http         Heavy Browsing

Table 2: Simulated parameters

    Application   Parameter              Unit
    Database      Traffic sent           Bytes/sec
                  Traffic received       Bytes/sec
                  Server DB query load   Requests/sec
                  Throughput             Packets/sec
                  Utilization            -
B. Methodology
OPNET IT Guru is used to build the network topology of the cloud described in Fig 2. The cloud scenario is simulated to evaluate the performance of the database cloud application through a comparative analysis of the three scenarios, including the botnet attack. To do this, the cloud network topology is created, statistics are chosen to measure performance, the simulation is run, and the results are analyzed.
VI. EVALUATION AND RESULT ANALYSIS
Fig 5. Database server Traffic Received (Bytes/sec)
In the result above, the database server receives more bytes of data per second during the botnet attack than in the other two scenarios, causing congestion at the database server.
Fig 6. Database Server Traffic Received (packets/sec)
Similarly, in the graph above, more packets per second arrive at the database server during the botnet attack, causing heavy traffic compared with the normal and firewall scenarios, where fewer packets per second are received.
Fig 7. Point-to-Point Utilization (IP Cloud to Firewall)
In the graph above, point-to-point traffic from the IP cloud to the firewall was 70 packets per second in the firewall scenario, increasing to 80 packets per second under the botnet attack.
Fig 8. Point-to-Point Utilization (IP Cloud to Router)
Point-to-point utilization from the IP cloud to the router was 4 packets per second in the botnet scenario, compared with an average of 2 packets per second in the other two scenarios.
Fig 9. Point-to-Point Throughput (Database Server
to Router)
Fig 10. Point-to-Point Throughput (Router to Database Server)
VII. CONCLUSION
This paper highlighted the effect of latency on the domains of a cloud network and presented, with simulated results, the service disruption caused by a DDoS attack on the cloud network. As future work, we intend to develop an intrusion detection system to limit the effect of such attacks on the cloud network.
ACKNOWLEDGMENTS
Preparing this paper required the co-operation and guidance of all members of the department, and I feel privileged to thank all those who helped make it successful. It is my immense pleasure to express my gratitude to Shivani Khurana (Assistant Professor, Department of Computer Science), who as my guide provided constructive and positive feedback during the preparation of the paper.
REFERENCES
[1] Ajith Singh and Hemalatha, "Comparative analysis of low latency on different bandwidth and geographical locations while using cloud based applications," Department of Software Systems, Karpagam University, Coimbatore: IJAET, ISSN 2231-1963, Jan 2012.
[2] Mohammad Heidari, "The Role of Modeling and Simulation in Information Security: The Lost Ring," Springer, 1989, vol. 61.
[3] Adam Wolfe Gordon and Paul Lu, "Low-Latency Caching for Cloud-based Web Applications," Department of Computer Science, University of Alberta, Edmonton, Alberta, Canada: Awalfe.Paul@cs.ualberta., Sept. 16, 2011.
[4] Ankush Veer Reddy, "Usage of OPNET IT tool to simulate and test the security of cloud" (Project ID 395), www.sci.tamucc.edu.
[5] Nagaraju Kilari and Dr. R. Sridaran, "A Survey on Security Threats for Cloud Computing," International Journal of Engineering Research and Technology (IJERT), vol. 1, issue 7, September 2012.
[6] Raihana, Faizal, Zul Azri, Zaki, Siti Rahayu and Robiah, "Revealing the Criterion on Botnet Detection Technique," International Journal of Computer Science Issues, vol. 10, issue 2, no. 3, March 2013.
[7] Ashraf Zia and Muhammad Naeem Ahmad Khan, "Identifying Key Challenges in Performance Issues in Cloud Computing," IJMECS, 10, 59-68, September 2012.
[8] Pardeep Sharma, Sandeep Sood and Sumeet Kaur, "Cloud Computing Issues and What to Compute on Cloud," International Conference on Advanced Computing, Communications and Networks.
[9] Sonia and Satinder Pal Singh, "Analysis of Energy Consumption in Different Types of Networks for Cloud Environment," IJARCSSE, vol. 2, issue 2, Feb 2012, ISSN 2277-128X.
[10] F. Chong and G. Carraro, "Architecture strategies for catching the long tail," MSDN Library, Microsoft Corporation, 2006.
[11] Cloud Computing Target.com/definition/public cloud.
[12] Cloud Computing Target.com/definition/private cloud.
[13] Cloud Computing Target.com/definition/hybrid cloud.
... In cloud-based applications, latency may lead slow response, performance degradation, and power consumption [26][27][28][29]. Managing edge-cloud latency is to minimize the delay by shifting the processing task to numerous smaller clusters located nearer to the end-user devices [28]. ...
... In cloud-based applications, latency may lead slow response, performance degradation, and power consumption [26][27][28][29]. Managing edge-cloud latency is to minimize the delay by shifting the processing task to numerous smaller clusters located nearer to the end-user devices [28]. Despite significant attempts to enhance network communication and mitigate the effects of network conditions on Machine Learning (ML) applications, there is a need to assess the influence of network latency on their performance, particularly in the context of the irregularities of network conditions in cloud environments [26]. ...
... So in this experiment, the efficiency was 96.4% using Architecture 1 and 96.67% using Architecture 2. This is consistent with the computer network theory that media is a data speed constraint [30]. Especially when utilizing the Internet, where it is not known exactly what media and devices are utilized [26][27][28][29]. ...
Article
Full-text available
The main problem with supervised learning is data labeling, an activity that seems trivial when the data is small, but not if the data is very large, such as LC-MS (Liquid Chromatography-Mass Spectrometry) data. This task requires high concentration and accuracy if done by humans and impacts processing time. This paper discusses a method to automate labeling of LC-MS data to speed up processing time. In this case, webscraping technique is utilized to retrieve the labels because they are stored in an online database. It has been done in previous studies, but the results are not satisfactory because it still takes a long time to get the required label which is the name of the chemical compound. This is due to frequent disconnections. To solve this problem, a local mirror database is built so that it can be accessed locally. We built two system architectures. The first utilizes two separate computers as a server and client. They are connected to the access point. The second is to utilize a single computer, acting as both server and client at the same time. Theoretically, this will reduce the distance and save labeling time. The system architecture has succeeded in labeling the required data and has a time efficiency of 96.4% and 96.67%, respectively, compared to previous studies. This is a massive time saver.
... By measuring application performance under varying network latency scenarios, the research aims to furnish insights into the nuanced relationship between latency and the efficiency of cloud applications. The contributions of the paper include a detailed description of the experimental methodology and an extensive measurement study, offering valuable insights into the dynamic interplay of cloud computing and latency [46]. ...
Article
Full-text available
Cloud computing is a contemporary endeavour to provide computing resources, such as hardware or software, as a service across a network. Cloud computing is a current IT trend that involves shifting computation and data storage from desktop and portable PCs to enormous data centre's that can store massive amounts of information, measured in peta-bytes. Cloud Computing encompasses multiple facets, including availability, scalability, virtualization, interoperability, quality of service, and the delivery types of the cloud, which are private, public, and hybrid. Cloud databases are mostly utilized for data-intensive applications, such as data warehousing, data mining, and business intelligence. A cloud database is necessary to efficiently accelerate the process of reducing the burdens associated with routing configuration. This article conducts a thorough analysis of challenges in cloud computing data management, focusing on dimensions such as consistency, scalability, security, interoperability, migration, and latency. Scholarly investigations address distributed databases, consensus algorithms, encryption, access control, auditing, and the development of harmonious ecosystems for diverse cloud environments. Emphasis is placed on automated migration tools, best practices, and methodologies for smooth transitions, as well as innovative solutions for minimizing latency in real-time applications. The overarching goal is to advance data confidentiality, integrity, system security, and long-term advancements in cloud computing.
... Resource overload occurs when the demand for resources exceeds the available capacity, resulting in performance degradation and potential service disruptions [25]. Network latency, exacerbated by the distributed nature of cloud systems, can result in reduced application responsiveness, impacting the overall user experience [26]. ...
Preprint
Full-text available
Performance issues permeate large-scale cloud service systems, which can lead to huge revenue losses. To ensure reliable performance, it's essential to accurately identify and localize these issues using service monitoring metrics. Given the complexity and scale of modern cloud systems, this task can be challenging and may require extensive expertise and resources beyond the capacity of individual humans. Some existing methods tackle this problem by analyzing each metric independently to detect anomalies. However, this could incur overwhelming alert storms that are difficult for engineers to diagnose manually. To pursue better performance, not only the temporal patterns of metrics but also the correlation between metrics (i.e., relational patterns) should be considered, which can be formulated as a multivariate metrics anomaly detection problem. However, most of the studies fall short of extracting these two types of features explicitly. Moreover, there exist some unlabeled anomalies mixed in the training data, which may hinder the detection performance. To address these limitations, we propose the Relational- Temporal Anomaly Detection Model (RTAnomaly) that combines the relational and temporal information of metrics. RTAnomaly employs a graph attention layer to learn the dependencies among metrics, which will further help pinpoint the anomalous metrics that may cause the anomaly effectively. In addition, we exploit the concept of positive unlabeled learning to address the issue of potential anomalies in the training data. To evaluate our method, we conduct experiments on a public dataset and two industrial datasets. RTAnomaly outperforms all the baseline models by achieving an average F1 score of 0.929 and Hit@3 of 0.920, demonstrating its superiority.
... Even with security mechanisms [8] applied, it is never as secure [9] [10] as compartmentalizing the network, causing issues with data privacy or integrity. Second, reliability is a concern, because network bandwidth or latency can be a major issue in OT [11] causing data to be delayed or even lost. Third, regulatory requirements and compliance issues may also prevent the use of cloud solutions in certain OT industries. ...
Preprint
Full-text available
Industry 4.0 factories are complex and data-driven. Data is yielded from many sources, including sensors, PLCs, and other devices, but also from IT, like ERP or CRM systems. We ask how to collect and process this data in a way, such that it includes metadata and can be used for industrial analytics or to derive intelligent support systems. This paper describes a new, query model based approach, which uses a big data architecture to capture data from various sources using OPC UA as a foundation. It buffers and preprocesses the information for the purpose of harmonizing and providing a holistic state space of a factory, as well as mappings to the current state of a production site. That information can be made available to multiple processing sinks, decoupled from the data sources, which enables them to work with the information without interfering with devices of the production, disturbing the network devices they are working in, or influencing the production process negatively. Metadata and connected semantic information is kept throughout the process, allowing to feed algorithms with meaningful data, so that it can be accessed in its entirety to perform time series analysis, machine learning or similar evaluations as well as replaying the data from the buffer for repeatable simulations.
... The downside of overall cloud computing is the availability and performance of the network connection [10]. We have also considered the latency factor while selecting the AWS region for our JUX deployment, along with other factors. ...
Chapter
The Internet of Things (IoT) refers to billions of smart objects that now are hooked to the internet, capturing and transmitting information throughout the world, and as a result, have to compute trillions of data in mere seconds. IoT devices have low storage and computing power of their own due to which the concept of the Cloud has emerged exponentially. Cloud Computing proves to be a prime alternative for this issue since it can provide vast amounts of data storage and processing capacity to exercise complex computation. With the adoption of this new technology comes certain outcomes among which a pronounced one is latency delays while transferring the IoT data over to the cloud, which has a huge impact on the business world as well. In this paper, we propose a gap optimization algorithm for the ongoing data delay problems faced by both cloud service consumers and providers. The algorithm minimizes the data put off through factoring latency and the use of the dependencies to shape the IoT statistics supply with a corresponding cloud server, with minimum latency put off. The paper also outlines the importance of fog and edge computing in significantly reducing latency, therefore, decreasing operational costs with it.KeywordsIoTLatencyCloudBandwidthFog computingEdge computingTraffic shaping
Article
Full-text available
Emerging technologies like IoT (Internet of Things) and wearable devices like Smart Glass, Smart watch, Smart Bracelet and Smart Plaster produce delay sensitive traffic. Cloud computing services are emerging as supportive technologies by providing resources. Most services like IoT require minimum delay which is still an area of research. This paper is an effort towards the minimization of delay in delivering cloud traffic, by geographically localizing the cloud traffic through establishment of Cloud mini data centers. The anticipated architecture suggests a software defined network supported mini data centers connected together. The paper also suggests the use of segment routing for stitching the transport paths between data centers through Software defined Network Controllers.
Article
Low latency and high availability in cloud services give users satisfactory response time and guarantee stability to request they make to services that are hosted in the cloud, thus increasing the usability and reliability of cloud services. On the other hand, high latencies and poor availability will cost businesses their customers due customers dissatisfaction, thus losing customers to competitors. This situation noticeable in e-commerce businesses where real-time response and decision making within seconds are critical for business service delivery. Therefore, latency and availability are important parameters in the Service Level Agreement for cloud users when choosing Cloud Services Providers (CSPs). But the challenge for businesses is that they do not have their in-house mechanism that can accurately predict the required latency and availability for their requirements. Companies only rely on the CSPs tools for estimating resource requirements, which is biased towards the CSPs business model. In this paper, we developed a deep learning algorithm that predicts the latency and availability of cloud services using real-time live data from three CSPs. We designed and implemented experiments on Amazon Web Services, Alibaba Cloud and Tencent Cloud in Beijing University of Posts and Telecommunications to run compute instances across the United States, Europe and Asia Pacific regions. In each cloud platform, five servers were used that resulted in 30,815,100 invocations of http and ping operations for 6 weeks. The algorithm used the data on hourly, daily and weekly basis as historical network data to predict latency and availability. We used MATLABs deep learning toolbox for the implementation of our algorithm and the results showed that the prediction is usually above 90% accurate as compared with the data obtained. The results also revealed that latency performance depends on the locations of users and the availability depends on number of availability zones used.
Cloud Computing provides an efficient and flexible way for services to meet escalating business needs. Cloud-shared infrastructure and its associated services make it a cost-effective alternative to traditional approaches. However, they may also introduce security breaches and privacy issues. As more cloud-based applications evolve, the associated security threats grow with them. Much existing research on cloud security exists in partial form, focusing either on cloud-specific issues or on virtualization-related security issues. In this paper, an attempt has been made to consolidate the various security threats in a classified manner and to illustrate how cloud and virtualization vulnerabilities affect the different cloud service models.
Cloud computing is the harbinger of a new era in the field of computing, in which distributed and centralized services are used in a unique way. In cloud computing, the computational resources of different vendors and IT service providers are managed to provide an enormous, scalable computing services platform that offers efficient data processing coupled with better QoS at a lower cost. On-demand, dynamic and scalable resource allocation is the main motive behind the development and deployment of cloud computing. The potential growth in this area and the presence of dominant organizations with abundant resources (such as Google, Amazon, Salesforce, Rackspace, Azure and GoGrid) make the field of cloud computing all the more fascinating. All cloud computing processes need to work in unison to deliver better QoS, i.e., to provide better software functionality, meet the tenants' requirements for their desired processing power, and exploit elevated bandwidth. However, several technical and functional issues, e.g., pervasive access to resources, dynamic discovery, and on-the-fly access and composition of resources, pose serious challenges for cloud computing. In this study, the performance issues in cloud computing are discussed. A number of schemes pertaining to QoS issues are critically analyzed to point out their strengths and weaknesses. Some of the performance parameters at the three basic layers of the cloud, Infrastructure as a Service, Platform as a Service and Software as a Service, are also discussed in this paper.
Many Web applications are now hosted in elastic cloud environments where the unit of resource allocation is a virtual machine (VM) instance; entire VMs are added or removed to scale up or scale down. A variety of techniques can reduce the latency of communication between VMs co-located on the same server in, say, a private cloud. For example, paravirtualized network mechanisms (e.g., vhost and virtio in Linux KVM) can optimize the number of protection boundary crossings. Inter-VM shared memory can further reduce boundary crossings after setting up a shared region. We present the design, implementation, and an evaluation of Nahanni memcached, a port of the well-known memcached that uses inter-VM shared memory instead of a virtual network for cache reads. As a widely deployed cache for back-end datastores and databases, memcached's latency is important to the performance of many well-known web sites (e.g., Facebook, Twitter) and cloud platforms (e.g., Google's App Engine). Although using shared-memory IPC is a well-known strategy, the recent introduction of the ivshmem inter-VM shared memory mechanism (also known as Nahanni) to Linux KVM makes the strategy practical for virtual machines. Using the Yahoo Cloud Serving Benchmark, we confirm the intuition that Nahanni memcached can reduce the latency of cache read operations by up to 86%, and that given reasonable hit rates, this can reduce the total latency of read-related operations for a workload by up to 45% compared to standard memcached. When using the experimental paravirtualized vhost networking mechanism in Linux KVM, Nahanni memcached offers a smaller, but still significant, advantage of 29%.
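The gap between the 86% cache-read speedup and the smaller 45% total-workload improvement follows from the usual expected-latency arithmetic: misses still pay the full backend cost. The sketch below illustrates this with made-up latency numbers (100 ms cache, 1000 ms backend, 90% hit rate), not figures from the cited evaluation.

```python
def effective_read_latency(hit_rate, cache_ms, backend_ms):
    # Expected per-read latency: hits are served by the cache,
    # misses fall through to the backend datastore.
    return hit_rate * cache_ms + (1.0 - hit_rate) * backend_ms

# Illustrative (hypothetical) numbers: a cache read that is 86% faster
# (100 ms -> 14 ms) shrinks total read latency by a smaller fraction
# once misses are accounted for.
baseline = effective_read_latency(0.9, 100.0, 1000.0)  # standard cache
shared = effective_read_latency(0.9, 14.0, 1000.0)     # shared-memory cache
reduction = 1.0 - shared / baseline
```

With these assumed numbers the overall reduction comes out near 41%, which is why a large per-operation cache speedup translates into a more modest, hit-rate-dependent workload speedup.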
Mohammad Heidari, "The Role of Modeling and Simulation in Information Security: The Lost Ring", Springer, 1989, vol. 61.
Raihana, Faizal, Zul Azri, Zaki, Siti Rahayu and Robiah, "Revealing the Criterion on Botnet Detection Technique", International Journal of Computer Science Issues, vol. 10, issue 2, no. 3, March 2013.
Ankush Veer Reddy, "Usage of OPNET IT Tool to Simulate and Test the Security of Cloud" (Project ID 395), www.sci.tamucc.edu.
Ajith Singh, "Comparative Analysis of Low Latency on Different Bandwidth and Geographical Locations While Using Cloud Based Applications", Head, Department of Software Systems, Karpagam University, Coimbatore: IJAET.
Nagaraju Kilari and Dr. R. Sridaran, "A Survey on Security Threats for Cloud Computing", International Journal of Engineering Research and Technology (IJERT), vol. 1, issue 7.