Peer Clouds: A P2P-Based Resource Discovery
Mechanism for the Intercloud
LOHIT KAPOOR (1,2), SEEMA BAWA (1), and ANKUR GUPTA (2)
(1) Thapar University, (2) Model Institute of Engineering and Technology
The Intercloud represents the next logical step in the evolution of cloud computing, overcoming issues of data/vendor lock-in and dealing with volatile service requests. However, resource discovery across heterogeneous Cloud Service Providers (CSPs) remains a challenge. In this paper we present a P2P-based distributed resource discovery mechanism based on spatial awareness of the cloud data centers belonging to different CSPs. The scheme exploits the location information of data centers and organizes them into DHT peers for optimal communication, thus allowing QoS-compliant resource/service provisioning across CSPs. Simulation results establish the effectiveness of the proposed scheme.
Keywords: Intercloud, Resource/Service Discovery
1. INTRODUCTION
Small and medium-sized Cloud Service Providers (CSPs) are usually limited in serving capability due to the limited compute resources in their data centers. The problem is exacerbated during peak hours, when demand is very high, increasing the probability that user service requests go unserviced. Due to the nature of cloud computing, cloud vendors need to dynamically provision resources from other vendors to create the illusion of "on-demand elasticity". An intercloud [Buyya et al. 2010] architecture connecting different cloud service providers therefore becomes unavoidable in this context. An intercloud system is a federated environment comprising data centers belonging to different cloud vendors, facilitating resource discovery and provisioning based on well-defined economic principles. For small and medium CSPs, an intercloud environment allows dynamic scaling of resources, reducing request drops and violations of Service Level Agreements (SLAs).
Resource discovery is a major challenge in the successful implementation of a federated intercloud environment. Discovery and management of resources in an intercloud federation can typically be done in a centralized or a decentralized manner. Most of the existing techniques [Buyya et al. 2010; Nikolay and Buyya 2012] for resource discovery and scheduling utilize a centralized mechanism, in which each cloud interacts with a central entity or meta-broker, submits all the required information to it, and a meta-scheduler then assigns services across CSPs to each job. However, a centralized approach to service management and discovery suffers from obvious shortcomings: the performance-versus-scalability trade-off, security vulnerabilities and a single point of failure. Further, in the centralized approach, intercloud resource allocation requests are forwarded to the meta-broker, which then directs these jobs to the local brokers at each CSP. Regular coordination between the local brokers and the meta-broker is therefore required, since local resource availability changes dynamically and the meta-broker cannot make any presumptions based on the previously known state of the local services. Thus, implementing a best-fit approach requires collating real-time information from all the participating local brokers, which can be challenging. Resource discovery in an intercloud environment plays a critical role in implementing a well-coordinated federation of CSPs that avoids dropped user requests and delayed responses. Moreover, resource information in a federated environment should be kept up to date, and each CSP in the federation should be aware of the status of the other CSPs. Due to the geographical distribution of the data centers belonging to different CSPs, communication latency can become a major performance bottleneck. Thus, any efficient service discovery strategy for the intercloud environment should attempt to minimize communication latency by taking into account the geographical location of the data centers. With centralized brokers and schedulers, it is not always possible to place them in close proximity to all data centers; some CSPs therefore end up paying a higher communication cost than others each time resources from other CSPs are requisitioned.

Note: Resources in an intercloud represent virtual machines, platforms, and native and third-party services across all cloud models (IaaS, PaaS and SaaS). Resource and service discovery are therefore used interchangeably throughout this paper.
2. RELEVANT WORK
Early work related to federated clouds is presented in [Buyya et al. 2010; Nikolay and Buyya 2012], with service discovery based on negotiation held in a centralized exchange. This market-based centralized model is prone to a single point of failure, besides presenting scalability issues. NWIRE (Net-Wide-Services) [Schwiegelshohn and Yahyapour 1999] is a meta-computing scheduling architecture based on brokerage and trading, acting as a market system between sub-domains. The Global Inter-Cloud Technology Forum (GICTF) [GICTF] is an intercloud forum whose service discovery is based on the collection of services and their selection by a central entity. Another intercloud service discovery strategy, based on clustering services by past service experience, is presented in [Sotiriadis et al. 2012]. However, that clustering scheme groups transient services and suffers from the overheads of creating and disbanding clusters; moreover, keeping track of the past service experience of each participant involves its own overheads. InterGrid [Huang et al. 2012] is a cross-grid cooperation architecture composed of a set of InterGrid Gateways (IGGs) responsible for managing peering arrangements between grids. The IGGs, deployed on top of each participating grid, are distributed in a decentralized manner for efficient service discovery. However, the framework provides no fault-tolerance mechanism for the IGGs, whose failure can result in islands of grids, i.e., a disconnected network. The authors in [Gupta et al. 2011] suggested a completely decentralized peer-to-peer framework for dynamic service provisioning across cloud service providers; however, the scheme does not optimize for latency by considering the geographical location of the data centers. Bessis et al. [Sotiriadis et al. 2012] also presented a meta-scheduling model for the intercloud environment that addresses the drawbacks of centralized models, including the bottleneck created by concurrent requests during peak hours. Nelson and Uma [Nelson and Uma 2012] present an Inter-cloud Resource Provisioning System (IRPS) in which each service and task is represented semantically using a service ontology; they further present a set of inference rules for discovery and a semantic scheduler. Some instances of decentralized service discovery are also available in grid computing.

This paper presents a peer-to-peer based, decentralized and distributed service discovery and selection mechanism for the intercloud environment. The proposed model ensures that communication latency within the network of data centers is minimized and that service requests are serviced by data centers which are relatively close to the requesting data center. The rest of the paper is organized as follows: Section 3 presents a detailed discussion of the proposed system model. Section 4 illustrates the sequence of operations of the proposed framework, while Section 5 presents some early simulation results based on a custom simulator. Finally, Section 6 concludes the paper and presents some directions for future work.
3. SYSTEM MODEL
A Cloud Service Provider (CSP) consists of multiple data centers located in different geographic locations across the globe. A central broker manages the service requests from users within the CSP. It is assumed that each CSP under consideration participates in a federation of CSPs. In the model, each data center of a CSP has a Resource Manager (RM) for maintaining the internal services of the data center, and a Remote Resource Manager (RRM) which keeps track of resource information from other participating data centers. The RRMs belonging to a particular geographical location are organized into Local Groups (LGs). One RRM in each group assumes responsibility for acquiring all the required resource information from the other peer RRMs located in the respective LG through resource availability advertisements. A virtual network overlay of all such RRMs is created to facilitate the exchange of resource information; this overlay is called the Super Group (SG). Figure 1 provides a schematic of the proposed scheme, in which data centers belonging to different CSPs form different Local Groups, with a chosen RRM from each LG participating in the global Super Group.

Figure 1: Schematic view of resource discovery in the intercloud
Let $RRM_i$ $(i = 1, 2, 3, \ldots, M)$ be the set of $M$ Remote Resource Managers (RRMs), representing individual data centers in the federated intercloud environment. Each $RRM_i$ belongs to an LG comprising $M$ data centers, designated $DC_{i1}, DC_{i2}, \ldots, DC_{ir}, \ldots, DC_{iM}$. Theoretically $M$ can vary as data centers join and leave the P2P network, but for simplicity we assume that data centers remain part of the federation even if they have no services to offer or are not actively seeking services. Thus,

$$\|LG_i\| = \|RRM_{ir}\| = \|DC_{ir}\|$$

where $i = 1, 2, \ldots, M$ and $i \le r \le M$.
Each data center within the federation puts out a Resource Availability (RA) status periodically in the form of advertisements. The RA is typically expressed in terms of resources (RES) and their associated costs (C), where each resource can be a virtual machine, platform or service. Thus, RA is the set of (resource, cost) tuples advertised by each RRM within the LG and cached by the Super RRM which participates in the SG:

$$RA = \{(RES_1, C_1), (RES_2, C_2), \ldots, (RES_X, C_X)\}$$

where $X$ is the number of resources offered for remote use by a particular data center at a particular time; $X$ varies with the resource demand at the data center. Other RRMs need to cache only the last RA issued by each RRM, since it accurately represents the state of available services. Moreover, the cost associated with the resources is also part of the RA.
Other data centers desirous of availing services within the federation put out a Resource Request (RR) advertisement, which is again expressed in terms of required resources and desired cost:

$$RR = \{(RES_1, C_1), (RES_2, C_2), \ldots, (RES_K, C_K)\}$$

where $K$ is the number of resources required by the requesting RRM. The objective of the requesting RRM is to locate another RRM such that

$$RR_K \subseteq RA_X, \qquad K \le X$$

so that the number of resources available at the prospective partner RRM is greater than or equal to the number of resources requested. The RR from a particular RRM is first attempted to be serviced within the LG; each RRM already holds the cached RA advertisements from the other RRMs in its LG. If resource availability within the LG does not meet the request, the requesting RRM sends a "Remote Resource Request" (RRR) to the SG. If the resources requested in the RRR are available at a particular RRM, that RRM sends the details of its RM to the requesting RRM. If none of the RRMs within an LG can meet the requested services, the RRR is propagated further within the SG until the request is met or all options are exhausted.
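To make the matching step concrete, the following minimal Java sketch (hypothetical types and names, not the paper's implementation) checks whether a cached RA can satisfy an RR, i.e., whether every requested (resource, cost) tuple has a distinct advertised counterpart at no greater cost:

import java.util.ArrayList;
import java.util.List;
import java.util.Optional;

// Sketch of RR-vs-RA matching at a requesting RRM; all types are assumptions.
public class RaMatcher {

    // One (resource, cost) tuple, as advertised in an RA or requested in an RR.
    record Tuple(String resource, double cost) {}

    // True if every requested tuple is covered by a distinct advertised tuple
    // of the same resource type at no greater cost (this implies K <= X).
    static boolean satisfies(List<Tuple> ra, List<Tuple> rr) {
        List<Tuple> available = new ArrayList<>(ra);
        for (Tuple req : rr) {
            Optional<Tuple> match = available.stream()
                    .filter(o -> o.resource().equals(req.resource())
                              && o.cost() <= req.cost())
                    .findFirst();
            if (match.isEmpty()) return false;  // not serviceable from this RA
            available.remove(match.get());      // consume the matched offer
        }
        return true;
    }

    public static void main(String[] args) {
        List<Tuple> ra = List.of(new Tuple("VM", 0.25), new Tuple("VM", 0.30));
        List<Tuple> rr = List.of(new Tuple("VM", 0.35));
        System.out.println(satisfies(ra, rr)); // true: serviced within the LG
    }
}

If such a check fails for every cached RA in the LG, the request would be escalated to the SG as an RRR, as described above.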
An obvious challenge in all resource discovery strategies is managing the trade-off between resource cost and latency: the cheapest resources may be located farthest away, and latency adds its own costs in terms of data transfer and communication overheads. This choice needs to be made by the requesting RRM. For instance, a high-priority user service request with stated SLAs may be serviced by choosing the RRM with the lowest latency (i.e., closest to the requesting RRM) which also meets the cost criteria. On the other hand, a low-priority user request may be serviced by a best-fit approach in which cost is given more weight than latency, maximizing the profit of the requesting RRM. The RR or RRR requests issued can reflect the relative priority of cost or latency. Further, to handle scenarios where an RRM may not want its data to be processed at a particular geographical location, a conditional RRR can be issued which prevents the query from being forwarded to the excluded locations.
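One plausible way to encode this preference is a weighted score over normalized cost and latency, mirroring the Cost Weight and Latency Weight fields of the advertisement in Figure 2. The scoring rule below is an illustrative assumption, not the paper's specified method:

// Illustrative weighted ranking of candidate RRMs (lower score is better).
// Cost and latency are assumed pre-normalized to [0, 1].
public class CandidateRanker {

    record Candidate(String rrmId, double normCost, double normLatency) {}

    static double score(Candidate c, double costWeight, double latencyWeight) {
        return costWeight * c.normCost() + latencyWeight * c.normLatency();
    }

    public static void main(String[] args) {
        Candidate near = new Candidate("RRM-A", 0.8, 0.1); // close but costly
        Candidate far  = new Candidate("RRM-B", 0.2, 0.9); // cheap but distant
        // High-priority, SLA-bound request: weight latency heavily -> RRM-A wins.
        System.out.println(score(near, 0.1, 0.9) < score(far, 0.1, 0.9));  // true
        // Low-priority, best-fit request: weight cost heavily -> RRM-B wins.
        System.out.println(score(far, 0.9, 0.1) < score(near, 0.9, 0.1));  // true
    }
}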
4. SEQUENCE OF OPERATIONS
4.1 Joining Process for a New RRM
If an RRM is the first in the network, it assumes the role of the Super RRM. Since RRMs represent data centers, they can be assumed to be available at all times, and hence the usual mechanisms of having seed peers or landmark peers to assist in the peer-join process are not needed. Subsequent RRM join requests are responded to by the Super RRM. As the LG grows, join requests get cached on all intermediary RRMs that they pass through, so RRM join times are subsequently lowered. A newly joined RRM is then able to receive RA advertisements from, and send RR advertisements to, the other RRMs. The procedure for RRMs joining the LG in Peer Clouds is specified in Algorithm 1.
Algorithm 1 RRMs joining the LG in Peer Clouds
1: for all RRM ∈ PeerCloud do
2:   if RRM ∉ LGi then
3:     locateSuperRRM(myRRMID, regionID);
4:     if !superRRM then
5:       newSuperRRMID = becomeSuperRRM(myRRMID);
6:       LG = createLG(myRegionID);
7:       joinSG(newSuperRRMID, regionID);
8:     else
9:       registerWithSuperRRM(myRRMID);
10:      joinLG(LG);
11:    end if
12:  end if
13: end for
4.2 Super RRM Selection
Selection of an RRM as the Super RRM is done on a first-come, first-served basis: the first RRM to join an LG nominates itself as the Super RRM for that region. Subsequent RRMs retain their joining rank in the LG. The Super RRM acts as a gateway to the SG by collating resource advertisements from the other RRMs within the LG and sharing them within the group of Super RRMs. In the unlikely event that the Super RRM fails, the next-ranking RRM takes over as the Super RRM; this process is initiated if the Super RRM does not send out a special status message during a designated time period. Each RRM generates an RA (resource availability) status message every 5 minutes, which holds the current status of its resources and their associated costs, and circulates it within the LG. Each RA message has a time-to-live parameter associated with it to ensure that older messages do not remain in circulation. The RA status messages are cached by the other RRMs in the LG and used to initiate contract agreements with them for future service.
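A minimal sketch of this caching behavior, assuming a TTL equal to the 5-minute advertisement period (the cache design and all names are our assumptions, not the paper's implementation):

import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Minimal TTL-based cache of RA status messages at an RRM (assumed design).
public class RaCache {

    record RaMessage(String fromRrmId, String payload, long expiresAtMs) {}

    private final Map<String, RaMessage> latestByRrm = new HashMap<>();

    // Keep only the most recent RA per peer; older ones are superseded.
    void accept(RaMessage msg) {
        latestByRrm.put(msg.fromRrmId(), msg);
    }

    // An expired RA must not be used for matching: evict it on read.
    Optional<RaMessage> lookup(String rrmId, long nowMs) {
        RaMessage m = latestByRrm.get(rrmId);
        if (m == null) return Optional.empty();
        if (m.expiresAtMs() <= nowMs) {          // TTL elapsed
            latestByRrm.remove(rrmId);
            return Optional.empty();
        }
        return Optional.of(m);
    }

    public static void main(String[] args) {
        RaCache cache = new RaCache();
        long now = System.currentTimeMillis();
        // RA issued every 5 minutes; give it a matching 5-minute TTL.
        cache.accept(new RaMessage("RRM-7", "(VM, 0.25)", now + 5 * 60 * 1000));
        System.out.println(cache.lookup("RRM-7", now).isPresent());                // true
        System.out.println(cache.lookup("RRM-7", now + 6 * 60 * 1000).isPresent()); // false
    }
}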
4.3 Resource Discovery
The process of resource discovery is governed by two types of constraints, a) cost and b) resource specification, both of which are part of the resource request advertisements. Requests which are not serviceable within the LG, whether for lack of resources or unmet cost constraints, are then put out on the SG for possible resource provisioning. The Super RRM propagates the request to the other Super RRMs in the SG, which propagate it further within their respective LGs. RRMs which fulfill the resource criteria specified in the advertisement contact the advertising RRM directly. The algorithm for resource lookup and provisioning is given below (Algorithm 2), while a sample resource advertisement is depicted in Figure 2. Selection of resources by an RRM can be performed on the basis of latency (proximity), cost, or both.
Algorithm 2 Algorithm for resource lookup and provisioning in Peer Clouds at each RRM
1: advertiseResources(resourceVector, costVector) // RA
2: advertiseRequirements(resourceVector, constraintsVector) // RR
3: processResponse(responseVector)
4: for all response in responseVector do
5:   rankResponse(response)
6:   selectedRRM = getTopRRM()
7:   sendConfirmation(selectedRRM)
8: processRequest(request)
9: if evaluateRequest(request) then
10:   sendConfirmation(request.getRRM())
11: end if
12: end for
<RRM:Resource Request Advertisement>
  <Resource Description="Resource Description for Individual RRM">
    <RRM ID type="UUID" description="RRM's ID"/>
    <Resource type="String" description="Virtual Machine/Service"/>
    <Resource quantity="Uint32" description="Number of VMs"/>
    <Bandwidth type="String" description="Minimum bandwidth required"/>
    <Platform type="String" description="Specific operating system/platform required"/>
    <VM Config>
      <CPU type="Uint32" description="Number of cores"/>
      <Storage type="Uint32" description="Hard disk space"/>
      <Memory type="Uint32" description="Minimum RAM"/>
    </VM Config>
    <Constraints>
      <Cost type="double" description="cost constraint for resource/hour"/>
      <Cost Weight type="double" description="weight"/>
      <Latency type="double" description="desired latency"/>
      <Latency Weight type="double" description="desired weight"/>
    </Constraints>
    <Service Description="Service Description for individual RRM">
      <Service name="String" description="Service ID"/>
      <Service instances="Uint32" description="Number of instances required"/>
      <Platform type="String" description="Specific operating system/platform required"/>
    </Service Description>
  </Resource Description>
</RRM:Resource Request Advertisement>

Figure 2: Sample Resource Request Advertisement
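Since the advertisements are plain XML, an RRM can evaluate an incoming request with any standard parser. The sketch below reads the constraint fields from a simplified, well-formed fragment modeled on the Figure 2 format; the fragment and class name are illustrative assumptions, not the paper's code:

import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class AdvertisementReader {
    public static void main(String[] args) throws Exception {
        // Simplified constraints block modeled on Figure 2.
        String xml = """
                <Constraints>
                  <Cost type="double" description="cost constraint for resource/hour"/>
                  <Latency type="double" description="desired latency"/>
                </Constraints>""";
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        // List each constraint element with its declared type and description.
        NodeList constraints = doc.getDocumentElement().getElementsByTagName("*");
        for (int i = 0; i < constraints.getLength(); i++) {
            Element e = (Element) constraints.item(i);
            System.out.println(e.getTagName() + " (" + e.getAttribute("type")
                    + "): " + e.getAttribute("description"));
        }
    }
}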
5. EXPERIMENTAL SETUP AND RESULTS
To evaluate the effectiveness of the scheme, 30 physical machines, each with the configuration shown in Table I, are deployed. DevStack [OPENSTACK] is used to create a local cloud; it provides an option to install and run OpenStack (the software controlling the cloud) on local systems and enables users to create, control and destroy virtual machines. 150 virtual machines with the configuration shown in Table II are created.
Table I: Physical machine configuration

OS                     CPU                            HDD           Memory
Ubuntu 14.04 (Trusty)  Intel Core i7-2600 @ 3.4 GHz   500 GB SATA   4 GB

Table II: Virtual machine configuration

OS                   CPU                  Cores   Memory (MB)
Windows Server 2012  Intel Xeon E5-2670   1       1024

For peer-to-peer deployment, we also implemented the JXTA [JXTA] Java-based protocol for the creation and maintenance of our P2P network. JXTA utilizes a Distributed Hash Table (DHT) for organizing the P2P overlay as a hierarchical topology. However, it relies on rendezvous peers to maintain and distribute routing indices for normal peers and the resources/services that they provide; queries are forwarded to rendezvous peers to locate the actual peer on which the desired resource/service resides. The reasons for using JXTA are: a) interoperability, as required in the intercloud; b) platform and language independence, suiting the heterogeneous intercloud environment; c) ubiquity (any virtual machine can be a peer); and d) open standards (XML) for advertisements and communication.
Each VM constitutes a JXTA peer representing the RRM of a data center; a P2P network of participating RRMs is thus created. We used real-world network latency measurements from [NetworkDelay]; these measurements are utilized for optimized LG construction. Inter-continental network latency measurements were also used to model communication delays within the SG. CloudSim 3.0.1 is used to generate the workload in the form of cloudlets for each VM; these cloudlets are then converted into resource queries for each RRM using the parameters listed in Table III.
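The measured latencies drive LG construction, though the paper does not spell out the grouping rule in code. One plausible interpretation is a threshold rule in which an RRM joins the first LG whose Super RRM lies within a latency cut-off, otherwise founding a new LG; the sketch below reflects that assumption, with invented names and an invented 50 ms threshold:

import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative latency-threshold grouping of RRMs into Local Groups.
public class LgBuilder {

    record Rrm(String id) {}

    static final double LATENCY_THRESHOLD_MS = 50.0;   // assumed cut-off

    // Each RRM joins the first group whose founder (acting Super RRM) is
    // within the threshold; otherwise it founds a new LG as its Super RRM.
    static Map<Rrm, List<Rrm>> buildGroups(List<Rrm> rrms,
                                           Map<String, Double> pairwiseLatencyMs) {
        Map<Rrm, List<Rrm>> groups = new LinkedHashMap<>();
        for (Rrm r : rrms) {
            Rrm home = null;
            for (Rrm superRrm : groups.keySet()) {
                Double d = pairwiseLatencyMs.get(r.id() + "-" + superRrm.id());
                if (d != null && d < LATENCY_THRESHOLD_MS) { home = superRrm; break; }
            }
            if (home == null) groups.put(r, new ArrayList<>(List.of(r))); // new LG
            else groups.get(home).add(r);
        }
        return groups;
    }

    public static void main(String[] args) {
        List<Rrm> rrms = List.of(new Rrm("A"), new Rrm("B"), new Rrm("C"));
        Map<String, Double> lat = Map.of("B-A", 12.0, "C-A", 140.0);
        System.out.println(buildGroups(rrms, lat).size()); // 2 LGs: {A,B} and {C}
    }
}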
In the first experiment we measured the startup time for 10 to 50 participating RRMs with one designated Super RRM in a Local Group. The aim of the experiment is to observe the cumulative time for the initial configuration and organization of a Local Group. It is clear from Figure 3 that as the number of participating RRMs increases, the overall startup time per RRM reduces from 9.3 seconds/RRM (for 10 RRMs) to 8.2 seconds/RRM (for 50 RRMs). This is because the impact of Super RRM startup time and resource aggregation on the overall time gets averaged out over more RRMs. The startup time includes the JXTA initialization time per peer/RRM as well.
Figure 3: Startup time with varying number of RRMs
A variety of timing measurements for two different types of operations and resource discovery queries within the test setup were obtained for varying topology sizes. Figure 4 shows the time taken for a new RRM to join the existing setup; the average time ranges from 770 to 860 ms for topologies with 10 to 50 RRMs within an LG. The join process for a new RRM comprises the initialization time, the JXTA peer-join time, and the time taken for the RRM to connect with the Super RRM.
Figure 4: Average join time for a new RRM as a function of LG size
To evaluate the performance of resource queries, the following parameters (Table III) were used:

Table III: Cloudlet/query parameters

Parameter                                                            Range
cloudletLength (length, in MIPS, of the cloudlet executed per VM)    1000 to 5000
pesNumber (CPU cores per VM)                                         1
Resource request frequency (requests per unit time)                  2 to 5 per minute
Duration of resource usage (time a resource is held)                 30 to 60 minutes
Flash-crowd scenario frequency (peak-hour occurrence)                once every 3 hours
Flash-crowd scenario duration (peak-hour length)                     10 minutes
Flash-crowd resource request frequency (requests per unit time)      15 to 20 per minute
resCost (cost requested per resource)                                0.20 to 0.40 per hour
In Figures 5 and 6 we present the Request Service Rate (RSR) and Response Time (RT) within an LG for a varying number of RRMs. We observe that the RSR remains linear as the number of queries varies. The size of the LG has a direct bearing on the RSR: a larger LG results in fewer resource queries being forwarded to the SG.

Figure 5: Request Service Rate
Figure 6: Response Time
In the following experiments we evaluated resource query responses from the LG and SG under the following preferences set by the resource query generator/user:
a) Latency-based resource query (LRQ): attempts to find resources which fall within a pre-defined latency.
b) Cost-based resource query (CRQ): attempts to find resources which fall within a pre-defined cost.
c) Hybrid resource query (HRQ): attempts to find resources which fall within the desired response time while also meeting the requested cost.
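These three preferences map naturally onto the weighted ranking sketched in Section 3. As an illustrative assumption (not a prescription from the paper), LRQ would correspond to costWeight = 0 and latencyWeight = 1, CRQ to costWeight = 1 and latencyWeight = 0, and HRQ to enforcing both constraints, e.g. equal weights with the pre-defined cost and latency treated as hard cut-offs.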
For LRQ, about 7% of the queries were serviced by the SG and 93% by the LG. Further, there is an average increase of 41% in response time when responses come from the SG rather than the LG, owing primarily to communication delays, as shown in Figure 7. For CRQ, about 43% of the queries were serviced by the SG and 57% by the LG.
Figure 7: Average resource query response time for varying number of queries (LRQ)
As shown in Figure 8, the queries serviced by the SG suffer very high overhead (communication delay), resulting in high response times. For HRQ, as shown in Figure 9, 93% of the queries were serviced within the LG and 7% from the SG, and the resulting response time remains marginally higher than for LRQ and below that of CRQ. Figure 10 displays a complete 24-hour run, where we can observe that during flash-crowd scenarios (i.e., every 3 hours) CRQ responded in the lowest time, followed by HRQ and then LRQ. This is because in CRQ 43% of the requests are serviced by the SG, which holds sufficient resources for them, while in LRQ 93% of the requests are serviced within the LG, whose resources are insufficient during peak hours, resulting in long waiting times. Under normal conditions, however, LRQ serviced requests in the lowest time compared to CRQ and HRQ.
Figure 8: Average resource query response time for varying number of queries (CRQ)
Figure 9: Average resource query response time for varying number of queries (HRQ)
Figure 10: Comparative view of CRQ, LRQ and HRQ
6. CONCLUSION AND FUTURE DIRECTIONS
This paper presents an intercloud service discovery mechanism consisting of two interconnected levels of groups, local and global. The application of P2P strategies to service discovery in the intercloud environment has not been explored before. The JXTA-based implementation provides some inherent benefits, such as minimized response time, which are well suited to the intercloud environment. Future work shall involve incorporating additional quality-of-service parameters (such as availability and reputation) into the service discovery and selection mechanism, allowing greater QoS to be leveraged by participating data centers.
REFERENCES
Buyya, R., Ranjan, R., and Calheiros, R. 2010. Intercloud: Scaling of applications across multiple cloud computing environments. In ICA3PP 2010: 10th International Conference on Algorithms and Architectures for Parallel Processing, pp. 13-31.
GICTF. GICTF white paper.
Gupta, A., Kapoor, L., and Wattal, M. 2011. Cloud-to-Cloud (C2C): An ecosystem of cloud service providers for dynamic resource provisioning. In CCIS 190. Springer, pp. 501-510.
Huang, Y., Bessis, N., Norrington, P., Kuonen, P., and Hirsbrunner, B. 2012. Exploring decentralized dynamic scheduling for grids and clouds using the community-aware scheduling algorithm. Future Generation Computer Systems. Elsevier, pp. 402-415.
JXTA.
Nelson, V. and Uma, V. 2012. Semantic based resource provisioning and scheduling in inter-cloud environment. In ICRTIT. IEEE, pp. 250-254.
NetworkDelay.
Nikolay, N. and Buyya, R. 2012. Inter-cloud architectures and application brokering: Taxonomy and survey. John Wiley and Sons.
OPENSTACK.
Schwiegelshohn, U. and Yahyapour, R. 1999. In High-Performance Computing and Networking, Lecture Notes in Computer Science, vol. 1593. Springer-Verlag, pp. 851-860.
Sotiriadis, S., Bessis, N., and Antonopoulos, N. 2012. Decentralized meta-brokers for inter-cloud: Modeling brokering coordinators for interoperable resource management. In FSKD. IEEE, pp. 2462-2468.
Sotiriadis, S., Bessis, N., and Kuonen, P. 2012. Advancing inter-cloud resource discovery based on past service experiences of transient resource clustering. In Third International Conference on Emerging Intelligent Data and Web Technologies (EIDWT). IEEE, pp. 38-45.
Lohit Kapoor is a Research Scholar in the Computer Science and Engineering Department at Thapar University, Patiala, India, under the guidance of Dr. Seema Bawa and Dr. Ankur Gupta. He has presented and published a number of papers in reputed international conferences (IEEE, Springer, ACM, etc.). His areas of interest include Cloud Computing and Distributed Systems.
Dr. Seema Bawa holds an M.Tech (Computer Science) degree from IIT Kharagpur and a Ph.D. from Thapar Institute of Engineering & Technology, Patiala. She has been Professor of Computer Science and Engineering and Dean (Student Affairs) at Thapar University, Patiala, since September 2010. As Dean (Student Affairs) she has helped students excel in diverse skills and areas with determination and conviction, and she has demonstrated strong managerial skills by heading the department for more than six years, during which the department has grown in all dimensions, including academics, research, manpower development and finances. Her areas of research interest include Parallel, Distributed, Grid and Cloud Computing, VLSI Testing, Energy-aware Computing and Cultural Computing. Dr. Bawa has rich teaching, research and industry experience: she worked as a Software Engineer, Project Leader and Project Manager in the software industry for more than five years before joining Thapar University. She has been the coordinator of two national-level research and development projects sponsored by the Ministry of Information and Communication Technology. She is the author/co-author of 111 research publications in technical journals and conferences of international repute, has served as advisor/track chair for various national and international conferences, and has supervised eight Ph.D. and forty-four M.E. theses so far. Prof. Bawa is an active member of IEEE, ACM, the Computer Society of India, and the VLSI Society of India, and has been rendering her services across the globe as an editor and reviewer of various reputed journals of these societies.
Dr. Ankur Gupta is the Joint Director at the Model Institute of Engineering and Technology, Jammu, India, besides being a Professor in the Department of Computer Science and Engineering. Prior to joining academia, he worked as a Technical Team Lead at Hewlett-Packard, developing software in the network management and e-commerce domains. He obtained B.E. (Hons.) Computer Science and M.S. Software Systems degrees from BITS, Pilani, and his Ph.D. from the National Institute of Technology in India. His main areas of interest include peer-to-peer networks, network management, software engineering and cloud computing. He has published over 40 peer-reviewed papers in reputed international journals and conferences and is a recipient of the AICTE's (All India Council for Technical Education) Career Award. He has filed 10 patents in diverse technical domains and is the founding managing editor of the International Journal of Next-Generation Computing (IJNGC). He is a senior member of both the IEEE and ACM and a life member of the Computer Society of India.