Disaster Recovery in Single-Cloud and Multi-Cloud
Environments: Issues and Challenges
Mohammad M. Alshammari, Ali A. Alwan, Azlin Nordin, Imad Fakhri Al-Shaikhli
Department of Computer Science, International Islamic University Malaysia,
Kuala Lumpur, Malaysia
mdshammari@gmail.com, {aliamer, azlinnordin & imadf}@iium.edu.my
Abstract - Information Technology (IT) data services provided by
cloud providers (CPs) face significant challenges in maintaining
services and their continuity during a disaster. The primary
concern for disaster recovery (DR) in the cloud is finding ways to
ensure that the process of data backup and recovery is effective in
providing high data availability, flexibility, and reliability at a
reasonable cost. Numerous data backup solutions have been
designed for a single-cloud architecture; however, making a single
copy of data may not be sufficient because damage to data may
cause irrecoverable loss during a disaster. Other solutions have
involved multiple replications on more than one remote cloud
provider (Multi-Cloud). Most suggested solutions have proposed
obtaining a high level of reliability by producing at least three
replicas of the data and either storing all replicas at a single
location or distributing them over numerous remote locations. The
drawbacks to this approach are high costs, large storage space
consumption and (especially in the case of data-intensive cloud-
based applications) increased network traffic. In this paper, we
discuss the issues raised by DR for both Single-Cloud and Multi-
Cloud environments. We also examine previous studies
concerning cloud-based DR to highlight issues that researchers of
cloud-based DR have considered to be most important.
Index Terms - Disaster recovery, data backup, cloud computing,
single-cloud, multi-cloud
I. INTRODUCTION
In recent decades, a significant increase in the use of cloud computing has occurred in many medium- and large-sized
organizations. Cloud computing delivers numerous benefits,
including reduced cost and data accessibility. Small- and
medium-sized companies use cloud computing services for a
variety of tasks because these services provide fast access to
applications and a reduction in infrastructure costs [1].
Protecting the integrity and privacy of cloud-based data
services is an important concern for cloud computing because
the information stored with cloud storage providers may be
highly sensitive and the providers may not be trustworthy [1].
The main reasons to use cloud computing in most modern firms
and organizations are to reduce the total cost of ownership of
infrastructure and to enjoy IT benefits. A distinguishing
characteristic of the cloud is that it can store data while ensuring
their availability, which is an important feature when storing
sensitive information. Current approaches designed for data
backup and recovery for Single-Cloud environments employ
vast amounts of storage space due to the creation of multiple
replicas in numerous data centers [2].
The use of a Single-Cloud paradigm can generate risks, including hardware faults, software errors, natural disasters, and human-caused damage. These risks can lead to service disruption or a total loss of data and a system collapse [3, 5].
The development of a cloud-based system is not
recommended without considering the risks, which may be
particularly pronounced when only one data center is involved.
Some cloud service providers address risks via practical
measures, including the geographical dispersion of data;
however, data centers in different locations are still operated by a single cloud service provider. These centers usually use the same infrastructure and software stacks and have similar or identical operational processes and management teams [3].
Disaster Recovery (DR) is the process that an organization
undergoes after a service disruption to resume normal services.
Providers of disaster recovery services can utilize multiple clouds. Using two or more clouds reduces the risk of failures in service availability, data loss, and compromised privacy, and mitigates the risks of running applications and data on a public cloud.
of the cloud are cost, security, reliability, and loss of control.
Use of a Multi-Cloud environment enables an organization to
enjoy greater flexibility and control and decide which
workloads are going to be run and where they should be run [4].
In this paper, we present and discuss vital issues in Single-
Cloud and Multi-Cloud environments and examine previous
studies related to cloud DR to highlight problems that
researchers of cloud DR have considered to be most important.
The remainder of the paper is organized as follows: An
overview of disaster recovery is presented in Section II. This
overview addresses the disaster recovery process and issues
related to disaster recovery. Section III provides an overview of
the cloud computing paradigm, including Single-Cloud and
Multi-Cloud architectures. Section IV discusses the process of
disaster recovery in the cloud. Some notable related studies are
reported and examined. Section V discusses critical issues and challenges relevant to disaster recovery in the cloud paradigm. Conclusions and future research directions are presented in Section VI.
II. DISASTER RECOVERY OVERVIEW
A disaster is an event with incapacitating or destructive
effects that compromise a system’s operational availability for
an unacceptable time period. Disasters are more severe than a general system failure and have a greater impact: an ordinary failure may degrade part of a system without incapacitating it as a whole. Disasters
that cannot be addressed by systems for the prevention of
failure are usually produced by a catastrophic event, such as
fire, flood, severe weather conditions, or (usually malign)
human intervention [2, 10]. In an IT company, when business
operations are interrupted by a disaster, one aspect of disaster
recovery will usually comprise the use of additional
infrastructure.
Disaster Recovery (DR) has two aspects: the first aspect is
to reduce data loss to a minimum, and the second aspect is to
recover when data loss occurs despite all precautions [2]. Loss of data can damage an organization's reputation and carry negative financial and legal implications. A data recovery plan is therefore very important to prevent a business from incurring reputational damage and/or additional costs. Most businesses depend on data to give them a competitive advantage, which allows them to prosper [2-3]. Disaster recovery is usually
handled by an IT department because the main concern of DR
is to recover data and systems post-breach, which is true
regardless of whether the breach was caused by a natural
disaster (flood, weather, or fire) or data theft, malware, power
outage, or other human-originated problem. A Disaster
Recovery Plan (DRP) is necessary to ensure that everyone
knows in advance what actions are necessary after a disaster. A
DRP is a document that specifies the steps to minimize impact
and enables services to resume as quickly as possible after a
disaster occurs [2-3].
Any functioning enterprise requires a DRP. Whether a disaster is natural or man-made, it can cause an expensive interruption to service. Both traditional and cloud-based DR models can be employed to guard against such breakdowns. As shown in Fig. 1, the cloud can provide DR in both shared and dedicated configurations at high speed and low cost [7].
Advance preparation is essential because a disaster can
occur anywhere at any time with either no warning or limited
warning [7]. For this reason, organizations will generally seek
to improve their IT infrastructure [7]. The results of a disaster
and the process of disaster recovery can be improved via cloud
computing [9].
For an IT department, recovery usually provides the means for operations to continue while maintaining suitable channels of communication throughout an organization and assisting in the operation of subsidiary systems. For the business as a whole, recovery enables the organization to resume the business functions and subsidiary functions for which it exists [7].
A. Disaster Recovery
Disaster recovery may be regarded as a set of predefined
procedures and policies that are designed to enable the
restoration of critical business processes and systems after a
disaster [10]. A DRP also enables organizations to rebuild their
systems after hardware or software failure. Disaster recovery
and fault tolerance are not equivalent; fault tolerance ensures
that operations will continue after the failure of one or more of
a system’s components, whereas disaster recovery is concerned
with a significant large-scale failure [10].
DR is concerned with an event’s immediate impact. DR can
include recovery from a breach of security, a tornado, or a
server outage. A disaster recovery plan typically includes
numerous predefined steps to ensure rapid implementation [10].
The plan must also consider that the events of a disaster almost
always slightly differ from expectations.
When designing a data recovery plan, numerous parameters
should be considered. A well-defined data recovery plan is
constructed within the context of several parameters, including
Critical Business Function (CBF); Maximum Acceptable
Outage (MAO); Recovery Time Objectives (RTOs); and
Business Impact Analysis (BIA) [11]. CBF includes functions
that are critical to an organization; if they fail, the organization
is unable to perform critical operations. CBFs are closely linked to the MAO, which is defined as the maximum time that a function can be unavailable without impacting an organization's mission; recovery must complete within this window if the organization is to continue. The RTO, the time period within which recovery is to be completed, must therefore be less than or equal to the MAO. A BIA is a form of risk analysis since it examines CBFs and MAOs to identify the impact of the failure of an IT function on a business. A BIA also defines the priority of recovery attempts.
Fig. 1 Comparison between traditional DR models and cloud DR models
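To make the relationship among these parameters concrete, the following short Python sketch (our own illustration; the type and field names are hypothetical, not drawn from [11]) checks each CBF against the rule that its RTO must not exceed its MAO and orders recovery by BIA-assigned priority:

    # Illustrative DR-plan check (hypothetical names): each CBF's RTO
    # must not exceed its MAO, and BIA priority orders the recovery.
    from dataclasses import dataclass

    @dataclass
    class CriticalBusinessFunction:
        name: str
        mao_hours: float   # Maximum Acceptable Outage
        rto_hours: float   # Recovery Time Objective
        bia_priority: int  # priority assigned by the BIA (1 = recover first)

    def validate_plan(cbfs):
        """Return violations: any CBF whose RTO exceeds its MAO."""
        return [f"{c.name}: RTO {c.rto_hours}h exceeds MAO {c.mao_hours}h"
                for c in cbfs if c.rto_hours > c.mao_hours]

    cbfs = [CriticalBusinessFunction("order processing", 4, 2, 1),
            CriticalBusinessFunction("payroll", 72, 96, 3)]
    print(validate_plan(cbfs))                    # payroll violates RTO <= MAO
    recovery_order = sorted(cbfs, key=lambda c: c.bia_priority)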
B. Issues in Disaster Recovery
Disaster recovery has numerous challenges even where a
DRP exists and personnel have been trained in its
implementation. DR may not proceed as expected and may
prove to be insufficient after the occurrence of a disaster. The
magnitude of a natural disaster cannot be forecast. An
organization may encounter more than one disaster at a time;
planning for this eventuality is almost impossible. As a result,
some loss of data will occur [2-3, 15]. DR requires
collaboration among numerous departments; simply sharing
correct information among groups may prove difficult.
Maintaining communication during and immediately after a
disaster is challenging. In some cases, the information quickly
becomes obsolete; in other cases, the routes of communication
may collapse [12]. If a DRP is to be successfully implemented,
knowledge and authority, which can prove difficult to maintain,
are required. For these reasons, DRPs generally have test plans
in which disasters of various levels are simulated; the analysis
of the results can help improve the plan [3, 5, 12, 15].
More significant recovery problems occur when backups
are corrupted or the backup process fails. Some organizations
that experience large-scale data corruption and/or disruption
discover that recovery does not offer the expected solutions
because the media are unusable or the data cannot be recovered
[10]. The best approach to handle this risk is a process that
regularly calls data from the backup medium and restores and
examines the data. This process should investigate the reasons
for any test failure to identify and eliminate shortcomings in the
process of conducting backups. An eight-stage process exists
for correcting faults in the DR process: recognition, reaction,
recovery, restoration, return to normal, rest and relax, re-
evaluation and re-documentation. First, the purpose of a
recovery process and its boundaries should be examined. This
step should include staff safety and Critical Business Functions
(CBFs). When a disaster is announced, the necessary personnel
should be informed, and implementation of the DR steps should
begin. Responses to the disaster should originate with
management and the personnel charged with disaster recovery.
They decide the course of action, after which critical systems
should be recovered. If necessary, relocation to another facility
may be activated at this time. Systems that were identified in
advance are then restored. After the support systems and
utilities are restored, a controlled return to normal operations
should occur. When the exercise has been completed, an
evaluation of the entire process should be performed to
determine the strengths and weaknesses of an organization in
response to a disaster [2-5, 7, 9, 10].
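The restore-and-examine test described above can be automated in a few lines. The following sketch is illustrative only (the paths and function names are our own): it records a cryptographic digest at backup time, restores the file to a scratch location, and compares digests, so a mismatch flags a fault in the backup process before a real disaster does.

    # Illustrative backup verification: hash at backup time,
    # restore to a scratch location, re-hash and compare.
    import hashlib, shutil
    from pathlib import Path

    def sha256_of(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def backup(src, backup_dir):
        Path(backup_dir).mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, Path(backup_dir) / Path(src).name)
        return sha256_of(src)  # digest recorded alongside the backup

    def verify_restore(backup_dir, name, expected_digest, scratch):
        Path(scratch).mkdir(parents=True, exist_ok=True)
        restored = shutil.copy2(Path(backup_dir) / name, Path(scratch) / name)
        # False => investigate and correct the backup process
        return sha256_of(restored) == expected_digest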
III. CLOUD COMPUTING OVERVIEW
Advancements in cloud computing have significantly changed how information is stored and secured. Data are stored and applications are run on a number of
different computers and servers, and customers can access data
from a variety of locations. Service providers separate users
from the infrastructure that underlies the entire process via
flexible service delivery. The flexibility, scalability, reliability
and cost-effectiveness of cloud computing add to its relevance
in data recovery. However, the Internet is an open network on
which information is shared, which increases the risks with
regard to privacy and security. Numerous approaches address
this problem: clustering servers, distributed computing, and
wide-area networking [10].
Although cloud computing is one of the most discussed
aspects of international business, it is also one of the least
understood aspects. One approach to the cloud is to consider
cloud computing as a massively distributed parallel system that
contains numerous virtualized servers and computers that are
connected. Dynamic provisioning means that these scattered devices usually appear to the user as a single computing resource or as multiple but unified computing resources. We now consider the development of cloud computing [8].
The main reason for moving from a Single-Cloud to Multi-
Cloud architecture is to solve problems of security and protect
sensitive data. In this section, we compare Single-Cloud and
Multi-Cloud architectures [1, 4].
A. Single-Cloud Architecture
In a Single-Cloud architecture, a client obtains services from a single provider, through either one data center or multiple data centers, as illustrated in Fig. 2.
Fig. 2 Single-Cloud architecture
B. Multi-Cloud Architecture
In a Multi-Cloud architecture, a client obtains services from multiple providers, as illustrated in Fig. 3.
Fig. 3 Multi-Cloud architecture
IV. DISASTER RECOVERY IN THE CLOUD
A DRP, together with a set of measures appropriate to the organization, is necessary for an organization's long-term success. Because the investment required can be significant and a DRP yields no immediately observable benefits, such investment may meet objections; cloud-based backup and recovery, which costs less than other approaches, has therefore become very common. Virtualization decouples systems from the hardware on which they run, which enables organizations to migrate data, operating systems and software tools to the cloud and obtain improved financial performance. Rapid increases in bandwidth and the scalability of services enable recovery processes to start quickly. The bulk of operations can be restarted within hours after a disaster, provided the IT infrastructure is compatible with cloud-based DR. Most backup and recovery processes are automated, so only a minimum of human intervention is required [22, 25].
The most important reason for implementing DR via the
cloud is increased resilience [12]. The majority of providers of
cloud services enable their customers to rapidly recover from
disasters with a minimum of disruption via the use of a
geographically distributed data backup and redundancy model.
As an example, mission-critical applications are stored on the
Amazon cloud at numerous data centers in diverse geographical
locations. Note that Amazon employs the “fail gracefully”
design. Short-term outages at one location cause immediate
notification to a customer and an automatic switch of an
application to another location. Failure of any processes and
interfaces that rely on this application is prevented by
downstream circuit breakers.
DR services must ensure service continuity so that applications can be quickly restored after a disaster [13]. DR services form part of the Business Continuity (BC) regime that governs how IT systems and procedures are recovered. To be effective, DR requires regular backup of information that is essential to a business and its secure retention in more than one location.
A. Disaster Recovery Challenges in the Cloud
Cloud computing is a central part of operations for
numerous organizations due to its ability to provide resources
that are cost-effective, efficient and reliable. The user buys
resources from the service provider and pays for them either as
used or as needed. Benefits to the user include reduced cost, fast
implementation, flexibility, and dynamic scalability. By offering scalable, pay-per-use services and resources that may be utilized on demand in a self-service environment, cloud computing reduces capital and operational outlay for both software and hardware. Despite these benefits, cloud computing has not been embraced as expected, chiefly because of security and privacy concerns. Services
provided by a cloud provider include Infrastructure as a Service
(IaaS), Software as a Service (SaaS), and Platform as a Service
(PaaS), as well as a variety of other services, including storage.
Some organizations have concerns about giving user data to
organizations that provide these services. Numerous challenges
prevent an enterprise from moving to the cloud; the most
significant challenges are as follows [1-3, 10, 22, 25]:
Dependency: Cloud service users are not in control of their
data or the system. Backup of data is executed by and at the
service provider. Customers may be concerned about
dependency on cloud service providers and the risk of data loss.
The selection of trustworthy service providers is a central
concern for organizations that consider a move to the cloud
[13].
Cost: Operating costs decrease after a switch to a cloud data recovery service; this low cost is a key attraction for users. A cloud service provider will always seek a cheaper method for delivering effective recovery processes. The cost of a DR system comprises the following three components [15], which combine as sketched below:
Start-up and implementation costs, which are amortized over a period of years;
Ongoing operating costs of data transfer, storage, and processing;
Potential disaster cost, since both recovered and unrecoverable disasters carry significant costs.
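As a rough illustration of how these three components combine, consider the following back-of-the-envelope annualized cost model (our own simplification; the parameter names, example figures, and the linear form are assumptions, not taken from [15]):

    # Back-of-the-envelope annualized DR cost (all parameters are assumptions).
    def annual_dr_cost(startup, amortize_years, annual_ops,
                       p_disaster, cost_recovered,
                       p_unrecoverable, cost_unrecoverable):
        amortized = startup / amortize_years
        expected_disaster = p_disaster * ((1 - p_unrecoverable) * cost_recovered
                                          + p_unrecoverable * cost_unrecoverable)
        return amortized + annual_ops + expected_disaster

    # Example: $50k setup over 5 years, $12k/yr operations, 2% yearly
    # disaster probability, 10% of disasters unrecoverable.
    print(annual_dr_cost(50_000, 5, 12_000, 0.02, 100_000, 0.1, 2_000_000))

Comparing such a figure across candidate configurations (on-site, colocation, cloud) gives a first-order basis for the kind of assessment made in [15].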
Failure Detection: The length of time that is required to detect
a failure will have a strong effect on the length of time that the
system is inoperable. Immediate detection and reporting of
failures is essential. However, where multiple backup sites are
involved, immediately distinguishing between a disruption of
service and a network failure may prove difficult.
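One common mitigation, sketched below in Python purely for illustration (the hosts, ports, and timeout are hypothetical), is to probe an independent reference host alongside the primary site: if both are unreachable, the monitor's own network, rather than the service, is the likely culprit.

    # Illustrative probe: check an independent reference host to
    # separate a service failure from a network failure.
    import socket

    def reachable(host, port, timeout=2.0):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    def diagnose(primary, reference):
        if reachable(*primary):
            return "primary healthy"
        if reachable(*reference):
            return "primary down, reference up: likely service failure"
        return "both unreachable: likely network failure at the monitor"

    # Hypothetical endpoints.
    print(diagnose(("primary.example.com", 443), ("reference.example.net", 443)))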
Security: Disasters come in two varieties: natural disasters and human-made disasters. Examples of natural disasters include earthquakes, floods, and hurricanes [2-3, 7, 10]. An example of the latter category is cyber-terrorism; cyber-terrorist attacks are initiated for a variety of reasons. The protection of
important data and rapid data recovery are key elements in any
decision to adopt a DR service.
Replication Latency: DR backups are created by replication. Two main types of replication exist, synchronous and asynchronous, each with advantages and drawbacks. Synchronous replication supports stringent Recovery Point Objective (RPO) and RTO targets; however, it costs more than asynchronous replication, and its large overhead can impact system performance. The more tiers a web application has, the more serious this becomes, because of marked increases in the Round Trip Time (RTT) between the primary site and the backup site. Although it costs less, asynchronous replication does not deliver the same high level of DR service quality. Users need to balance cost against performance in light of the requirements of their particular situation; in any case, replication latency is a hindrance for anyone considering a move to the cloud [2-3, 23].
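The trade-off can be made concrete with a toy replicator, shown below as an illustrative single-process Python sketch in which the network round trip is simulated by a delay (all names and timings are our own). A synchronous write acknowledges only after the replica has applied it, paying the RTT on every write; an asynchronous write acknowledges immediately and drains a queue in the background, so the queued records are exactly the data at risk, i.e., the RPO exposure.

    # Toy replicator: the "network" to the backup site is a simulated delay.
    import queue, threading, time

    RTT = 0.05                        # simulated round trip to the backup site (s)
    replica, pending = [], queue.Queue()

    def apply_to_replica(record):
        time.sleep(RTT)               # network transit + remote apply
        replica.append(record)

    def sync_write(record):
        apply_to_replica(record)      # ack only after the replica has it: RPO ~ 0

    def async_write(record):
        pending.put(record)           # ack at once; queued records = RPO exposure

    def drain():
        while True:
            apply_to_replica(pending.get())
            pending.task_done()

    threading.Thread(target=drain, daemon=True).start()

    t0 = time.time()
    for i in range(10):
        sync_write(f"s{i}")
    print(f"10 sync writes:  {time.time() - t0:.2f}s")   # ~10 x RTT

    t0 = time.time()
    for i in range(10):
        async_write(f"a{i}")
    print(f"10 async writes: {time.time() - t0:.2f}s")   # near-instant acks
    pending.join()                                       # replica catches up later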
Data Storage: Cloud services offer an adequate solution to the
perennial enterprise problem of data storage. As cloud usage
increases, the amount of data required for storage also
increases. Cloud storage services can reduce an organization’s
costs by eliminating the need for investment in conventional
data storage devices. Cloud storage services also offer greater
flexibility. A cloud storage system’s architecture has four
layers: physical storage, infrastructure management,
application interface, and access layer. Successful running of applications requires that computing be distributed, whereas data security demands that storage be centralized. The result is that data stored with cloud service providers is exposed to a single point of failure in storage [16].
Lack of Redundancy: In the event of a disaster, a typical
procedure is the activation of a secondary site because the
primary site is no longer available. When this activation occurs
in the cloud, neither synchronous nor asynchronous replication
to a backup site is possible and only local storage is available,
with a consequent threat to the system. This problem disappears
when the primary site is recovered; however, the best DR
solutions (especially in services that demand high data
availability, such as storage of business data) will consider all
risks and their possible implications [3, 23].
B. Disaster Recovery in Single- and Multi-Cloud
Configurations
In this section, we examine previous research related to the
issues of data recovery in cloud computing and highlight the
strengths and the weaknesses of each study.
Pokharel et al. [17] suggested that high availability, a high likelihood of survival, and minimal downtime in the event of a disaster can be obtained at very low cost via a Geographical Redundancy Approach (GRA). They utilized a Markov model to analyze their approach but did not consider two important performance metrics: RTO and RPO.
Wood et al [13] comprehensively reviewed the current
literature and practice in the field of disaster recovery and
included all factors that can influence the DR process. They
defined three types of DR mechanisms, in which backup sites
could be hot, warm, or cold. They discussed the occurrence of
failover and failback during a disaster, as well as methods for
returning control to the primary site post-recovery and
maintaining critical service business continuity.
Jian-Hua and Nan [18] provided a description of a cloud
storage architecture and illustrated the achievement of true
cloud computing by deploying applications, including disaster
recovery in inter-private cloud storage. In this case, local data were stored in the Storage Service Provider (SSP)'s online storage space. Users who apply this approach would have no need to build their own data centers. Cloud storage would also avoid storage platform duplication, with a consequent saving in hardware and software infrastructure.
Javaraiah [14] proposed a low-cost approach in which data were backed up on the user's premises. This approach provided a method of handling complex problems via a mechanism that also supported online data backup while removing any dependency on cloud service providers. The experimental results indicated that a low-
cost backup process was possible and that transferring from one
service provider to another service provider was simple.
However, the study only considered Single-Cloud disaster
recovery. Keeping business services running during and after a
disaster was not addressed.
The Data Distribution Plan for multi-site DR (DDP-DR) was described by Sengupta and Annervaz [2, 23]. Backup data
can be stored in numerous data centers, including public cloud
centers. Constraints in customer policy and constraints of the
infrastructure are considered before possible data distribution
plans are calculated. This approach produces an optimal plan for backing up critical business data to geographically distributed data centers.
Grolinger, Capretz, Mezghani, and Exposito [19] suggested the use of knowledge acquisition and knowledge delivery to enable integration. The design stage of Disaster Cloud Data Management (Disaster-CDM) has not been completed; the research only established part of the process for simulated data acquisition. The most significant knowledge delivery challenge is to semantically integrate a range of data sources. One of the implications is the provision of Knowledge as a Service (KaaS).
Similarly, a disaster recovery framework based on the
Moodle e-learning system was proposed by Togawa and
Kanenishi [20]. Their research examines a worst-case natural disaster scenario: an earthquake with its epicenter in a specific Japanese region, projected to occur within the next 30 years.
Their contention was that building an e-learning system disaster
recovery framework is essential. They provided the results of
their framework research but did not include cost, RTO, or RPO
in the performance metrics that they employed to evaluate their
proposed framework.
Saquib et al. [21] proposed DR as a cloud service for
database applications to ensure quick recovery and prevent data
loss. Their solution would yield an RPO of zero and a negligible
RTO via the use of a database server and iSCSI-based
synchronous/semi-synchronous block replication, as well as
automatic failover and failback with scalability leading to a
cost-effective Disaster Recovery as a Service (DRaaS) solution.
The contention of Suguna and Suhasini [22] was that
insufficient data exist to build complete analytical models that
can determine optimal implementation. They suggested that an
adequate amount of data must be collected to enable the
development of models to define the problem in mathematical
optimization terms. The purpose of their work was to establish
numerous techniques for cloud-based data backup and DR
systems.
Sengupta and Annervaz [23] proposed a multi-site DR data distribution framework, covering its theory, system architecture, required data centers and costs. Combining RPO and RTO analysis with experimental results demonstrated low-cost DR. Their work also established a plan to replicate backup
data across a large number of data centers and illustrated plans
for data distribution for single customer and multiple customer
situations.
The Multi-Cloud disaster recovery model proposed by Gu et al. [3] combines a multiplicity of cloud providers with a
single customer interface. The work does not address how data
service continuity is to be maintained in the cloud during and
after a disaster.
The work contributed by Chang and Wills [24] compared storage in the cloud with non-cloud
storage. The users were biomedical scientists, and the objective
was to obtain methods for improving performance and
efficiency. Factors that degrade performance were identified as
job failures, file size, and network latency. Some experiments
have been performed to measure the impact of these factors.
Organizational Sustainability Modeling (OSM) was employed
before, during and after the experiments to ensure fair
comparisons. However, several limitations of this study have
been outlined: only a Single Cloud environment was addressed,
and the performance of services during and after a disaster was
not examined.
The work presented by Sambrani and Rajashekarappa [25] concentrated on security as it affects data recovery. They
proposed an algorithm with two main objectives, namely, to
provide cloud users with the highest possible security level and
to ensure data recovery after a natural disaster. Their study also
considered factors such as data integrity, cost, the time required
to recover data, and data recovery efficiency. The experiments
were performed with respect to two different performance
metrics: RTO and cost.
A simple disaster recovery service using multiple cloud service providers was proposed by Prathyakshini and Ankitha
[26]. The service providers are located at numerous sites to
ensure that failure in any part of the cloud can be counteracted
by recovery from the remaining cloud interfaces. This approach
has high availability; the research results are presented in the
paper, but the performance metrics RTO and cost are not
considered.
V. DISCUSSION
The emergence of cloud computing services has
transformed organizations’ applications and data from an
internal process to an international process. A Single-Cloud environment is an integrated environment comprising the cloud platform, storage, and infrastructure; security problems arise when applications and data are handled by a single cloud provider.
Sensitive data, such as health records, are uploaded for storage
in the cloud; however, users have no control over the data and
cannot determine whether the data are being misused. The
owners of data stored in a Single-Cloud environment cannot
ensure the security of the data. The cloud is entirely controlled
by the service provider. Establishing trust with a cloud service
provider is an important factor in the decision to move to the
cloud. Other factors that may cast doubt on the security of data in the cloud include employees who may be honest but curious and employees who may compete with or disrupt the data owner. Some cloud service providers and their employees have interfered with sensitive data stored in the cloud. A Single-Cloud environment may also suffer from the high cost of managing large amounts of data and/or from data loss.
Spreading data and applications across a number of separate clouds improves data security. This Multi-Cloud
architecture allows high data availability and enables loads to
be balanced, resources to be managed and data to be securely
stored. In this system, the trustworthiness of a service provider
may be less important, as no Single-Cloud architecture contains
sufficient information to enable data to be decrypted without
access to other parts of the data in other clouds. The data owner
may be confident that the cloud owner will secure their data.
This model also simplifies integration between private clouds
and public clouds for greater transparency and flexibility and
efficient management of resources. Homomorphic encryption and decryption are necessary in this model. The
management layer is an integral part of the architecture of a
Multi-Cloud system and enables the sharing of data and
applications, integration of the results, load balancing, and other
features.
Workload Manager, an earlier implementation of a Multi-Cloud architecture, did not share the processing load among multiple clouds; each individual cloud was responsible only for executing its own processes.
Security is vital in cloud computing if sensitive data are to
be protected against misuse or other forms of interference. Data
owners who upload sensitive data to the cloud need to take
advantage of features of the cloud, such as scalability, device
independence, and remote access. Cloud user data are under the
complete control of the cloud service provider, who may be
honest but curious (HBC). The security of data in a Single-
Cloud environment must always be doubtful, as data theft can
occur when any part of the cloud is compromised. Data theft is far harder when encrypted data are distributed among a number of clouds, because no single cloud holds enough information for a malicious party to obtain all the data. A Multi-Cloud architecture, therefore, provides the necessary protection of data in the cloud.
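The principle that no single cloud should hold enough information to reconstruct the data can be illustrated with the simplest possible scheme, a two-share XOR split. The Python sketch below is our own toy stand-in for the secret-sharing and homomorphic techniques discussed above; a real deployment would use a threshold scheme such as Shamir's.

    # Toy XOR split: each "cloud" stores one share; either share alone is
    # indistinguishable from uniformly random bytes. Illustrative only.
    import secrets

    def split(data):
        share_a = secrets.token_bytes(len(data))               # random pad -> cloud A
        share_b = bytes(x ^ y for x, y in zip(data, share_a))  # masked data -> cloud B
        return share_a, share_b

    def reconstruct(share_a, share_b):
        return bytes(x ^ y for x, y in zip(share_a, share_b))

    record = b"patient-42: blood type O+"
    a, b = split(record)
    assert reconstruct(a, b) == record   # both clouds are needed
    # a compromised single cloud sees only random-looking bytes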
High Availability: One important metric for service quality under a Service Level Agreement (SLA) is uptime. In a Multi-Cloud architecture, when one part of the network runs slowly, the Multi-Cloud management layer transfers the process to another cloud, which ensures that the uptime specified in the SLA is actually delivered.
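Such a management layer can be sketched as a simple routing policy: send each request to the first healthy cloud that answers within a latency budget derived from the SLA, and fail over otherwise. The Python sketch below is our own illustration, not a description of any particular product; the endpoints, budget, and handlers are hypothetical.

    # Illustrative failover policy: route to the first cloud that
    # answers within the SLA latency budget.
    import time

    SLA_BUDGET_S = 0.2                 # per-request budget derived from the SLA

    def dispatch(request, clouds):
        for name, handler in clouds:
            start = time.time()
            try:
                result = handler(request)
            except Exception:
                continue               # cloud down: try the next one
            if time.time() - start <= SLA_BUDGET_S:
                return f"{name}: {result}"   # fast enough, uptime preserved
        raise RuntimeError("all clouds failed or exceeded the SLA budget")

    # Stub handlers stand in for real provider SDK calls.
    slow = lambda req: (time.sleep(0.5), f"handled {req}")[1]
    fast = lambda req: f"handled {req}"
    print(dispatch("GET /report", [("cloud-A", slow), ("cloud-B", fast)]))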
VI. CONCLUSION
This paper has presented issues concerning disaster recovery as they apply to Single-Cloud and Multi-Cloud architectures. Previous studies of cloud-based disaster recovery have been presented and discussed in detail. The purpose of the paper was to highlight the critical problems that researchers of cloud-based disaster recovery have addressed. We plan future research that focuses on implementing an efficient framework for managing disaster recovery in a Multi-Cloud architecture. Furthermore, we intend to evaluate the performance of the suggested approaches when RTO and RPO are considered.
REFERENCES
[1] Pareek, P. (2013). Cloud Computing Security from Single to Multi-Clouds
using Secret Sharing Algorithm. International Journal of Advanced
Research in Computer Engineering & Technology (IJARCET).
[2] Sengupta, S., & Annervaz, K. M. (2012). Planning for optimal multi-site
data distribution for disaster recovery. Lecture Notes in Computer Science
(including Subseries Lecture Notes in Artificial Intelligence and Lecture
Notes in Bioinformatics).
[3] Gu, Y., Wang, D., & Liu, C. (2014). DR-Cloud: Multi-Cloud based disaster
recovery service. Tsinghua Science and Technology.
[4] Sulochana, M., & Dubey, O. (2015). Preserving Data Confidentiality Using
Multi-Cloud Architecture. Procedia Computer Science.
[5] Robinson, G., Vamvadelis, I., & Narin, A. (2014). Using Amazon Web
Services for Disaster Recovery. Whitepaper, (January).
[6] Lenk, A., & Tai, S. (2014). Cloud standby: Disaster recovery of distributed
systems in the cloud. Lecture Notes in Computer Science (Including
Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in
Bioinformatics), 8745 LNCS.
[7] Prazeres, A., & Lopes, E. (2013). Disaster Recovery – A Project Planning
Case Study in Portugal. Procedia Technology, 9, 795–805.
[8] Prakash, S., Mody, S., Wahab, A., Swaminathan, S., & Ramani. (2012).
Disaster Recovery Services in the Cloud for SMEs. International Conference on Cloud Computing, Technologies, Applications & Management, 139–144.
[9] Chidambaram, J., Prabhu, C., Rao, P., Wankar, R., Aneesh, C. S., &
Agarwal, A. (2008, November). A methodology for high availability of data
for business continuity planning/disaster recovery in a grid using replication in a distributed environment. In TENCON 2008 - 2008 IEEE Region 10 Conference (pp. 1-6).
[10] Al-shammari, M. M., & Alsaqre, F. E. (2012). IT Disaster Recovery and
Business Continuity for Kuwait Oil Company (KOC). International
Conference on Information Technology, System and Management (ICITSM
2012), 25–26.
[11] Podaras, A., & Zizka, T. (2013). Criticality estimation of IT business functions with the Business Continuity Testing Points method for implementing effective recovery exercises of crisis scenarios. International Journal of Computer Science Issues.
[12] Jaiswal, V., Sen, A., & Verma, A. (2014). Integrated Resiliency Planning
in Storage Clouds. IEEE Transactions on Network and Service
Management, 11(1), 3–14.
[13] Wood, T., Cecchet, E., Ramakrishnan, K., Shenoy, P., Van Der Merwe, J.,
& Venkataramani, A. (2010). Disaster recovery as a cloud service:
Economic benefits & deployment challenges. 2nd USENIX Workshop on
Hot Topics in Cloud Computing. Boston, MA, 1–7.
[14] Javaraiah, V. (2011). Backup for cloud and disaster recovery for consumers
and SMBs. 2011 Fifth IEEE International Conference on Advanced
Telecommunication Systems and Networks (ANTS), 1–3.
[15] Alhazmi, O. H., & Malaiya, Y. K. (2012). Assessing Disaster Recovery Alternatives: On-Site, Colocation or Cloud. 2012 IEEE 23rd International Symposium on Software Reliability Engineering Workshops (ISSREW), 19–20.
[16] Pokharel, M., Lee, S., & Park, J. S. (2010). Disaster Recovery for System Architecture Using Cloud Computing. 2010 10th IEEE/IPSJ International Symposium on Applications and the Internet (SAINT), 304–307.
[17] Pokharel, M., Lee, S., & Park, J. S. (2010). Disaster Recovery for System
Architecture Using Cloud Computing. 2010 10th IEEE/IPSJ International
Symposium on Applications and the Internet, 304–307.
[18] Jian-Hua, Z., & Nan, Z. (2011). Cloud computing-based data storage and
disaster recovery. Proceedings of the 2011 International Conference on
Future Computer Science and Education, ICFCSE 2011, 629–632.
[19] Grolinger, K., Capretz, M. A. M., Mezghani, E., & Exposito, E. (2013).
Knowledge as a service framework for disaster data management.
Proceedings of the Workshop on Enabling Technologies: Infrastructure for
Collaborative Enterprises, WETICE, 313–318.
[20] Togawa, S., & Kanenishi, K. (2013). Private Cloud Cooperation
Framework of E-Learning Environment for Disaster Recovery. 2013 IEEE
International Conference on Systems, Man, and Cybernetics, 4104–4109.
[21] Saquib, Z., Tyagi, V., Bokare, S., Dongawe, S., Dwivedi, M., & Dwivedi,
J. (2013). A new approach to disaster recovery as a service over cloud for
database system. 2013 15th International Conference on Advanced
Computing Technologies (ICACT), 1–6.
[22] Suguna, S., & Suhasini, A. (2014). Overview of data backup and disaster
recovery in cloud. International Conference on Information Communication
and Embedded Systems (ICICES2014), (978), 1–7.
[23] Sengupta, S., & Annervaz, K. M. (2014). Multi-site data distribution for
disaster recovery—A planning framework. Future Generation Computer
Systems, 41, 53–64.
[24] Chang, V. (2015). Towards a Big Data system disaster recovery in a Private
Cloud. Ad Hoc Networks, 35, 65–82.
[25] Sambrani, Y., & Rajashekarappa (2016). Efficient Data Backup Mechanism
for Cloud Computing, 5(7), 92–95.
[26] Prathyakshini, M., & Ankitha, K. (2016). Data Storage and Retrieval using
Multiple Cloud Interfaces, 5(4), 936–939.
... Protecting the integrity and privacy of cloud-based data services is crucial for cloud computing since cloud storage providers may be untrustworthy and the data they store is sensitive. Most modern firms use cloud computing to save money on infrastructure and take advantage of IT [9]. Recent advancements in cloud computing provide a low-cost, low-overhead replacement for conventional DR Plan (DRP)s, making them accessible even to small and medium-sized enterprises [10]. ...
Article
Full-text available
Incorporating cloud-based algorithms for disaster recovery (DR), it explores data replication, failover, virtual machine (VM) migration, and consistency algorithms. These algorithms play a pivotal role in safeguarding data and system continuity during unforeseen disruptions. Data replication ensures redundancy, failover algorithms swiftly transition to backup resources, VM migration facilitates resource optimization, and consistency algorithms maintain data integrity. Leveraging cloud technology enhances the effectiveness of these algorithms, providing robust DR solutions critical for business continuity in today's digital landscape. The recent growth in popularity of internet services on a massive scale has also raised the demand for stable underpinnings. Despite the fact that DR for big data is frequently overlooked in security research, the majority of existing approaches use a narrow, endpoint-centric approach. The significance of DR strategies has grown as cloud storage has become the norm for more data. But traditional cloud-centric DR techniques may be expensive, thus less expensive alternatives are being sought. There is persistent concern in the information technology (IT) community about whether or not cloud service providers (CPs) can guarantee data and service continuity in the event of a disaster.
... In this thorough analysis, we explore the crucial factors that businesses need to take into account when deciding between federated and hybrid cloud architectures. We examine the complex interactions among data governance, security standards, interoperability, and performance optimization, highlighting their subtleties [8]. ...
Article
Full-text available
The concept of several clouds has greatly extended the use of cloud computing and gained popularity in academic and business circles. The use of multi-cloud techniques has increased as businesses use cloud computing more and more to meet their computational demands. A thorough analysis of cloud architectures intended for distributed multi-cloud computing is presented in this study, with an emphasis on federated and hybrid cloud systems. The study looks at the opportunities and difficulties of adopting and overseeing a variety of cloud resources from several providers. The review starts out by going over the basic ideas and reasons for using multi-cloud strategies, emphasizing how important flexibility, scalability, and resilience are in contemporary computing settings. The study then explores the nuances of hybrid cloud architectures, with a focus on how private and public cloud resources can be seamlessly combined. In the context of hybrid cloud installations, important factors including data sovereignty, security, and workload orchestration are covered. In addition, the research delves into federated cloud architectures, clarifying how enterprises can coordinate and oversee workloads across several cloud providers. An examination of resource identification, policy enforcement, and interoperability procedures sheds light on the intricacies of federated cloud computing. The review delves into new developments in standards, best practices, and technology that help multi-cloud ecosystems mature. The study analyses the state of research and industry practices now, pointing out gaps and possible directions for future development. The intention is to provide decision-makers, researchers, and practitioners with a comprehensive grasp of the changing cloud architectural scene so they can plan and execute distributed multi-cloud solutions with knowledge. In conclusion, this article provides a thorough overview of hybrid and federated cloud architectures by combining information from many sources. Through a comprehensive analysis of the difficulties and possibilities associated with multi-cloud computing, the study hopes to add to the current conversation on cloud environment design and optimization in the rapidly changing technological landscape.
... All these solutions and many more are discussed in [e2] [p10] [p12] [e9] [e4], helping in terms of business continuity in different aspects. I.e., Block Replication will ensure to achieve zero RTO and negligible RTO [8] [9]; Local Backup will have minimal cost and ensure peace of mind [12]; Multi-Cloud environment will minimize the risk of availability failure, loss of data and privacy [13]; Hot Standby (Active/Active) is a synchronous real-time replication in based in database backup and ensures both RTO RPO to be zero, meaning 0 data loss [14]. ...
Article
Context: Digital data is being stored in large quantities in Cloud, requiring data backup and recovery services. Due to many factors such as disasters and other disruptive events, the risk of data loss is huge. Therefore, backup and data recovery are essential and effective in improvement of system availability and maintaining Business Continuity. Nevertheless, the process to achieve the goal of business uninterrupted faces many challenges regarding data security, integrity and failure prediction. Objective: This paper has the following goals: analyzing system- atically the current published research and presenting the most common factors leading to the need of Disaster Recovery and backup plan; investigating and identifying the adopted solutions and techniques to prevent data loss; and lastly, investigating the influence Data Recovery and Backup has in terms of business continuity and identifying the privacy and security issues regarding the disaster recovery process. Method: A systematic mapping study was conducted, in which 45 papers, dated from 2010 to 2020 were evaluated. Results: A set of 45 papers is selected from an initial search of 250 papers, including 10 papers from snowball sampling, following the references from some papers of interest. These results are categorized based on the relevant research questions, such as causes of disasters, data loss, business continuity, and security and privacy issues. Conclusion: An overview of the topic is presented by investigating and identifying the following features: challenges, issues, solutions, techniques, factors, and effects regarding the backup and recovery process.
... The primary concern for this document is finding ways to ensure that the process of data backup and recovery is effective in providing high data availability, flexibility, and reliability at a reasonable cost. As addressed by Alshammari et al (2017), the majority of providers of cloud services enable their customers to rapidly recover from disasters with a minimum of disruption via the use of a geographically distributed data backup and redundancy model. ...
Article
Full-text available
The state of cloud security is evolving. Many organizations are migrating their on-premises data centers to cloud networks at a rapid pace due to the benefits like cost-effectiveness, scalability, reliability, and flexibility. Yet, cloud environments also raise certain security concerns that may hinder their adoption. Cloud security threats may include data breaches/leaks, data loss, access management, insecure APIs, and misconfigured cloud storage. The security challenges associated with cloud computing have been widely studied in previous literature and different research groups. This paper conducted a systematic literature review and examined the research studies published between 2010 and 2023 within popular digital libraries. The paper then proposes a comprehensive Secure Cloud Migration Strategy (SCMS) that organizations can adopt to secure their cloud environment. The proposed SCMS consists of three main repeatable phases/processes, which are preparation; readiness and adoption; and testing. Among these phases, the author addresses tasks/projects from the different perspectives of the three cybersecurity teams, which are the blue team (defenders), the red team (attackers), and the yellow team (developers). This can be used by the Cloud Center of Excellence (CCoE) as a checklist that covers defending the cloud; attacking and abusing the cloud; and applying the security shift left concepts. In addition to that, the paper addresses the necessary cloud security documents/runbooks that should be developed and automated such as incident response runbook, disaster recovery planning, risk assessment methodology, and cloud security controls. Future research venues and open cloud security problems/issues were addressed throughout the paper. The ultimate goal is to support the development of a proper security system to an efficient cloud computing system to help harden organizations’ cloud infrastructures and increase the cloud security awareness level, which is significant to national security. Furthermore, practitioners and researchers can use the proposed solutions to replicate and/or extend the proposed work.
... The primary concern for this document is finding ways to ensure that the process of data backup and recovery is effective in providing high data availability, flexibility, and reliability at a reasonable cost. As addressed by Alshammari et al (2017), the majority of providers of cloud services enable their customers to rapidly recover from disasters with a minimum of disruption via the use of a geographically distributed data backup and redundancy model. ...
Conference Paper
Full-text available
Abstract: The state of cloud security is evolving. Many organizations are migrating their on-premises data centers to cloud networks at a rapid pace due to the benefits like cost-effectiveness, scalability, reliability, and flexibility. Yet, cloud environments also raise certain security concerns that may hinder their adoption. Cloud security threats may include data breaches/leaks, data loss, access management, insecure APIs, and misconfigured cloud storage. The security challenges associated with cloud computing have been widely studied in previous literature and different research groups. This paper conducted a systematic literature review and examined the research studies published between 2010 and 2023 within popular digital libraries. The paper then proposes a comprehensive Secure Cloud Migration Strategy (SCMS) that organizations can adopt to secure their cloud environment. The proposed SCMS consists of three main repeatable phases/processes, which are preparation; readiness and adoption; and testing. Among these phases, the author addresses tasks/projects from the different perspectives of the three cybersecurity teams, which are the blue team (defenders), the red team (attackers), and the yellow team (developers). This can be used by the Cloud Center of Excellence (CCoE) as a checklist that covers defending the cloud; attacking and abusing the cloud; and applying the security shift left concepts. In addition to that, the paper addresses the necessary cloud security documents/runbooks that should be developed and automated such as incident response runbook, disaster recovery planning, risk assessment methodology, and cloud security controls. Future research venues and open cloud security problems/issues were addressed throughout the paper. The ultimate goal is to support the development of a proper security system to an efficient cloud computing system to help harden organizations’ cloud infrastructures and increase the cloud security awareness level, which is significant to national security. Furthermore, practitioners and researchers can use the proposed solutions to replicate and/or extend the proposed work.
Conference Paper
In the wake of the COVID-19 outbreak, university operations were suspended, and online learning therefore was the best option to reduce the spread of the pandemic. To ensure performant, robust and accurate E-learning applications, intelligent algorithms such as Deep Learning (DL) and Machine Learning (ML) are essential, demanding significant resources. Research suggests distributing E-learning applications across Grid Computing, Peer-to-Peer (P2P) networks and Cloud Computing (CC) environments. Exam scenarios, requiring high availability, highlight the inadequacy of a single cloud. Moreover, diverse Internet of Things (IoT) devices used by E-learning users necessitate an adaptable infrastructure. The multicloud or Cloud Broker (CB) architecture is our suggested approach for the deployment of E-learning application to optimize the experience, overcoming time constraints and the challenge of managing numerous accounts. Experimental results validate that multicloud or CB architecture is an effective infrastructure for the development of potent E-learning tools and for boosting performance.
Article
Full-text available
The issue of defending industry information against harm brought on by natural catastrophes is exceedingly challenging. The practice of preserving information technology known as "cloud computing" makes it simple to store vast volumes of data. However, securing data against business issues brought on by natural catastrophes is a challenging undertaking. To advance cloud computing technology, a novel strategy must be employed. A new technology dubbed cloud volume on tap, a suggested business, allows for the fast storage of information. The effectiveness and productivity of cloud computing can also be increased. It is quite simple to exchange information. This may be changed to shield you against business issues brought on by natural disasters. The N-tier system underlies cloud computing. An N-tier design has several clients and servers.
Conference Paper
Full-text available
Data generated in electronic format are voluminous in amount, as large generated data can be flexibly stored and accessed when needed by the users. Cloud computing has become the boon and emerging latest distributed computing technology which provides number of on demand services to the cloud users. This survey paper will be focusing more on the factors like security and disaster recovery aspects. The algorithm which will be proposed has two main objectives firstly providing highest security to the cloud users and secondly recovering of the data during natural destruction , paper also focus much on efficiency, time consumption to recover the data , data integrity, cost etc. Few of the recent techniques used for backup and security will be introduced giving a brief description on each of them.
Article
Full-text available
Disaster Recovery (DR) plays a vital role in restoring the organization’s data in the case of emergency and hazardous accidents. While many papers in security focus on privacy and security technologies, few address the DR process, particularly for a Big Data system. However, all these studies that have investigated DR methods belong to the “single-basket” approach, which means there is only one destination from which to secure the restored data, and mostly use only one type of technology implementation. We propose a “multi-purpose” approach, which allows data to be restored to multiple sites with multiple methods to ensure the organization recovers a very high percentage of data close to 100%, with all sites in London, Southampton and Leeds data recovered. The traditional TCP/IP baseline, snapshot and replication are used with their system design and development explained. We compare performance between different approaches and multi-purpose approach stands out in the event of emergency. Data at all sites in London, Southampton and Leeds can be restored and updated simultaneously. Results show that optimize command can recover 1TB of data within 650 seconds and command for three sites can recover 1 TB of data within 1360 seconds. All data backup and recovery has failure rate of 1.6% and below. All the data centers should adopt multi-purpose approaches to ensure all the data in the Big Data system can be recovered and retrieved without experiencing a prolong downtime and complex recovery processes. We make recommendations for adopting “multi-purpose” approach for data centers, and demonstrate that 100% of data is fully recovered with low execution time at all sites during a hazardous event as described in the paper. (Due to the copyrights and other requirements, this paper will not be available until the full version is online on ScienceDirect. Sorry for any inconvenience caused)
Article
Cloud computing offers resources as services that are dynamically provisioned over the Internet. The security of cloud computing has always been an important aspect of the quality of service offered by cloud service providers. The main problem implicit in the cloud computing paradigm is the secure outsourcing of sensitive and business-critical data and processes; one central concern is the privacy and integrity of data and processes in the cloud. By using two or more distinct clouds, risks such as data manipulation and other threats associated with process tampering can be reduced, and the required trust assumption can be lowered. Therefore, to provide integrity and confidentiality, the application logic and the data logic are split across two distinct clouds so that no single cloud provider gains complete knowledge of the user data. An administrator residing in a private cloud allows only authenticated users to access the cloud storage and performs encryption and segmentation of the data to provide data confidentiality.
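A minimal sketch of this split-cloud scheme follows, assuming the widely used cryptography package for encryption: the administrator encrypts the data, segments the ciphertext, and stores one segment with each provider, so neither cloud alone holds intelligible or complete data. The two dictionaries standing in for cloud storage, and all names here, are illustrative, not the paper's actual design.

# Minimal sketch: encrypt, then segment, then store the segments in two
# distinct clouds so that no single provider holds the complete user data.
# cloud_a and cloud_b are hypothetical stand-ins for real storage services.
from cryptography.fernet import Fernet  # pip install cryptography

cloud_a, cloud_b = {}, {}   # stand-ins for two distinct cloud providers

def store(data: bytes, key: bytes) -> None:
    ciphertext = Fernet(key).encrypt(data)
    mid = len(ciphertext) // 2
    cloud_a["segment"] = ciphertext[:mid]   # first half to provider A
    cloud_b["segment"] = ciphertext[mid:]   # second half to provider B

def retrieve(key: bytes) -> bytes:
    ciphertext = cloud_a["segment"] + cloud_b["segment"]
    return Fernet(key).decrypt(ciphertext)

if __name__ == "__main__":
    key = Fernet.generate_key()  # held by the private-cloud administrator
    store(b"business-critical record", key)
    assert retrieve(key) == b"business-critical record"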
Article
Business disruptions can take place anywhere, at any time, and it is impossible to foresee what may strike or when. Organizations are therefore compelled to prepare for such disaster and recovery scenarios. With the ever-increasing dependence on business processes for both electronic and traditional services, it has become almost mandatory for every organization to maintain a Business Continuity Plan (BCP) as well.
Conference Paper
Disaster recovery planning and securing business processes against outages have been essential parts of running a company for decades. As IT systems have become more important, and especially as more and more revenue is generated over the Internet, securing the IT systems that support business processes against outages is essential. Using fully operational standby sites with periodically updated standby systems is a well-known approach to preparing for disasters; setting up and maintaining a second data center is, however, expensive. In this work, we present Cloud Standby, a warm standby approach for setting up and updating a standby system in the cloud. We describe the architecture of Cloud Standby and its methods for deploying and updating the standby system, and we show that using Cloud Standby can significantly reduce the recovery time and long-term costs of disaster recovery.
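The warm standby cycle can be sketched as follows in Python: the standby replica is refreshed periodically, so after a disaster only the changes since the last refresh are lost and the standby can be promoted immediately. The update interval, state representation, and helper names are our assumptions, not Cloud Standby's actual deployment mechanism.

# Minimal sketch of a warm-standby cycle in the spirit of Cloud Standby: the
# standby replica is refreshed on a fixed schedule, so on failover its state
# is at most one update interval old, while staying cheaper than a fully
# synchronous second data center. All names and the one-hour interval are
# illustrative assumptions.
UPDATE_INTERVAL_S = 3600                # assumed refresh cycle (run via a scheduler)

primary_state = {"orders": 120}         # stand-in for the production system
standby_state: dict = {}                # stand-in for the cloud standby replica

def update_standby() -> None:
    """One refresh cycle: copy the current primary state to the standby site."""
    standby_state.clear()
    standby_state.update(primary_state)

def failover() -> dict:
    """Promote the standby; changes made since the last refresh are lost."""
    return dict(standby_state)

update_standby()                        # scheduled every UPDATE_INTERVAL_S seconds
primary_state["orders"] += 5            # post-refresh changes are at risk
print(failover())                       # {'orders': 120} - bounded data loss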
Article
With the rapidly growing popularity of the cloud computing paradigm, disaster recovery using cloud resources has become an attractive approach. This paper presents a practical multi-cloud-based disaster recovery service model, DR-Cloud, in which the resources of multiple cloud service providers can be utilized cooperatively by the disaster recovery service provider. A simple, unified interface is exposed to the customers of DR-Cloud to accommodate the heterogeneity of the cloud service providers involved, and the internal processes between clouds are invisible to the customers. DR-Cloud offers multiple optimization scheduling strategies to balance disaster recovery objectives such as high data reliability, low backup cost, and short recovery time, all of which are likewise transparent to the customers; different data scheduling strategies suit different kinds of data disaster recovery scenarios. Experimental results show that the DR-Cloud model can cooperate effectively with cloud service providers having various parameters, and that its data scheduling strategies achieve their optimization objectives efficiently and are widely applicable.
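One plausible reading of such a scheduling strategy is a weighted scoring of candidate providers, as in the Python sketch below; the weights, provider figures, and the linear scoring rule are invented for illustration and are not DR-Cloud's actual algorithm.

# Minimal sketch of a multi-cloud scheduling strategy in the spirit of
# DR-Cloud: candidate providers are scored on data reliability, backup cost,
# and recovery time, and replicas are placed on the best-scoring clouds.
# All provider figures and weights below are invented for illustration.
from dataclasses import dataclass

@dataclass
class Cloud:
    name: str
    reliability: float   # fraction of data expected to survive a disaster
    cost_per_gb: float   # backup cost, USD/GB
    recovery_s: float    # expected recovery time, seconds

def score(c: Cloud, w_rel=0.5, w_cost=0.3, w_time=0.2) -> float:
    # Higher is better: reward reliability, penalize cost and recovery time.
    return w_rel * c.reliability - w_cost * c.cost_per_gb - w_time * (c.recovery_s / 3600)

def place_replicas(clouds: list, copies: int) -> list:
    ranked = sorted(clouds, key=score, reverse=True)
    return [c.name for c in ranked[:copies]]

clouds = [Cloud("cp1", 0.999, 0.023, 1800),
          Cloud("cp2", 0.995, 0.010, 3600),
          Cloud("cp3", 0.990, 0.008, 7200)]
print(place_replicas(clouds, copies=2))   # pick two providers for replicas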
Article
In this paper, we present DDP-DR, a Data Distribution Planner for Disaster Recovery. DDP-DR provides an optimal way of backing up critical business data to data centers (DCs) across several geographic locations. It produces a plan for replicating backup data across a potentially large number of data centers so that (i) the client data is recoverable in the event of a catastrophic failure at one or more data centers (disaster recovery) and (ii) the client data is replicated and distributed optimally, taking into consideration major business criteria such as storage cost and the level of protection against site failures, as well as operational parameters such as the recovery point objective (RPO) and recovery time objective (RTO). The planner uses Erasure Coding (EC) to divide and encode data chunks into fragments and distributes the fragments across DR sites or storage zones, so that the failure of one or more sites or zones can be tolerated and the data regenerated. We describe data distribution planning approaches for both single-customer and multiple-customer scenarios.
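The erasure coding idea can be illustrated with a deliberately simplified (k+1, k) single-parity code in Python: the data is split into k fragments plus one XOR parity fragment, the k+1 fragments are distributed to separate sites, and any one lost fragment can be regenerated from the survivors. Production planners such as DDP-DR would use a stronger (n, k) code (e.g., Reed-Solomon); this simplification is ours.

# Minimal sketch of erasure-coded distribution: k data fragments plus one
# XOR parity fragment tolerate the loss of any single site's fragment.
# This single-parity scheme is a simplified stand-in for a real (n, k) code.
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data: bytes, k: int) -> list:
    size = -(-len(data) // k)                        # fragment size, rounded up
    padded = data.ljust(k * size, b"\0")
    frags = [padded[i * size:(i + 1) * size] for i in range(k)]
    parity = reduce(xor, frags)
    return frags + [parity]                          # k data + 1 parity fragment

def recover(frags: list) -> list:
    missing = frags.index(None)                      # tolerate one lost site
    survivors = [f for f in frags if f is not None]
    frags[missing] = reduce(xor, survivors)          # XOR regenerates the fragment
    return frags

shards = encode(b"critical business data", k=4)      # distribute to 5 sites
shards[2] = None                                     # one site fails
print(recover(shards))                               # all fragments restored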
Conference Paper
In this research, we have built a disaster recovery framework for e-Learning environments, protecting against disasters such as earthquakes and tsunamis. We built a prototype system based on an IaaS architecture, constructed from several private cloud computing fabrics; these fabrics operate as one large private cloud fabric over a VPN connection. A distributed storage system is built on each private cloud fabric and is handled much like a single block device, presenting one large file system. To run the LMS (Learning Management System), virtual machines are booted from virtual disk images stored in the distributed storage system. The distributed storage system can keep running as one large file system even when some private cloud fabric stops working because of a failure. We believe that our inter-cloud framework can keep an e-Learning environment operating in a post-disaster situation. In this paper, we present our inter-cloud cooperation framework and the experimental results on the prototype configuration.
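A minimal Python sketch of the inter-cloud storage behavior follows: blocks of a virtual disk image are replicated across fabrics so that every block remains readable when one fabric fails. The fabric names, round-robin placement, and replication factor of two are illustrative assumptions, not the prototype's actual configuration.

# Minimal sketch: each block of a virtual disk image is stored on two
# fabrics (round-robin), so the file system survives the loss of any one
# fabric. FABRICS and put/get helpers are hypothetical stand-ins.
FABRICS = {"fabric_a": {}, "fabric_b": {}, "fabric_c": {}}
REPLICAS = 2  # assumed: each block is stored on two fabrics

def put_block(block_id: int, data: bytes) -> None:
    names = sorted(FABRICS)
    for i in range(REPLICAS):                       # round-robin placement
        FABRICS[names[(block_id + i) % len(names)]][block_id] = data

def get_block(block_id: int) -> bytes:
    for store in FABRICS.values():                  # any surviving replica works
        if block_id in store:
            return store[block_id]
    raise KeyError(block_id)

for i in range(6):                                  # write a small disk image
    put_block(i, f"block-{i}".encode())
FABRICS["fabric_b"].clear()                         # one fabric fails in a disaster
print([get_block(i).decode() for i in range(6)])    # all blocks still readable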