Figure 3 - uploaded by Hussam Fakhouri
CACS self-healing processes. 

Source publication
Article
This work proposes the adoption of an Autonomic Computing System (ACS) in the Cloud environment. ACS was first introduced by IBM to create systems capable of managing automatic self-configuration, self-healing, self-optimization, and self-protection. These systems detect errors that cause failures, and then recover and reconfigure themselves. The concept is wil...
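The four self-* properties named in the abstract can be illustrated with a minimal sketch. This is a hypothetical model, not IBM's actual API: the manager class, method names, and the dictionary-based "managed system" are all illustrative assumptions.

```python
class AutonomicManager:
    """Illustrative manager exposing the four self-* properties
    (self-configuration, self-healing, self-optimization, self-protection).
    The managed system is modeled as a plain dict for the sketch."""

    def self_configure(self, system):
        # Self-configuration: apply a default setup to a new element.
        system.setdefault("configured", True)
        return system

    def self_heal(self, system):
        # Self-healing: detect an error state and recover from it.
        if system.get("error"):
            system["error"] = False
            system["recovered"] = True
        return system

    def self_optimize(self, system):
        # Self-optimization: scale workers up when the queue outgrows
        # capacity (an assumed, illustrative tuning rule).
        if system.get("queue", 0) > system.get("workers", 1) * 10:
            system["workers"] = system.get("workers", 1) + 1
        return system

    def self_protect(self, system):
        # Self-protection: quarantine requests flagged as hostile.
        system["blocked"] = [r for r in system.get("requests", [])
                             if r.get("hostile")]
        return system
```

Each method inspects the system's state and acts without operator intervention, which is the defining trait of an autonomic loop.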

Context in source publication

Context 1
... The phase consists of four basic processes; it starts with monitoring and ends with the fixing process. Figure 3 illustrates the four main processes of CACS self-healing. ...
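The four-process cycle described above can be sketched as a single pass of a self-healing loop. Only the first (monitoring) and last (fixing) processes are named in the snippet; the two intermediate stages are labeled "detect" and "analyze" here as an assumption, and the component model is illustrative.

```python
from enum import Enum, auto

class Phase(Enum):
    MONITOR = auto()  # named in the source
    DETECT = auto()   # assumed intermediate stage
    ANALYZE = auto()  # assumed intermediate stage
    FIX = auto()      # named in the source

def self_heal(component):
    """Run one pass of the four-phase self-healing cycle over a
    component modeled as a dict; returns the phases executed."""
    log = [Phase.MONITOR]  # 1. Monitor: collect the health signal.
    if not component.get("healthy", True):
        # 2. Detect: an error that could cause failure was observed.
        log.append(Phase.DETECT)
        # 3. Analyze: decide on a recovery action (here: reset state).
        log.append(Phase.ANALYZE)
        # 4. Fix: recover and reconfigure the component.
        component["healthy"] = True
        log.append(Phase.FIX)
    return log
```

A healthy component only passes through monitoring; a failing one traverses all four phases and ends up repaired, matching the monitor-to-fix ordering the text describes.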

Citations

... Therefore, some important challenges must be addressed in order to construct efficient scheduling algorithms. These are fairness, data locality, availability, resource utilization, throughput, and synchronization [14,7,26,27]. ...
Article
Rapid advancements in big data systems have occurred over the last several decades. The key element for attaining high performance in big data systems is job scheduling, which requires close attention to resolve several scheduling challenges. To obtain higher performance when processing big data, proper scheduling is required. Apache Hadoop is most commonly used to manage immense data volumes efficiently and is also proficient in handling the issues associated with job scheduling. To improve the performance of big data systems, we analyze various Hadoop job scheduling algorithms. To give an overall idea of scheduling algorithms, this paper presents a rigorous background: it outlines the fundamental architecture of the Hadoop big data framework and of job scheduling and its issues, then reviews and compares the most important and fundamental Hadoop job scheduling algorithms. In addition, the paper reviews other improved algorithms. The primary objective is to present an overview of various scheduling algorithms for improving performance when analyzing big data. The study also gives researchers direction on choosing a job scheduling algorithm according to which characteristics are most significant.
Article
Service-oriented architecture (SOA) is a form of software design in which application components supply services to other components through a network communication protocol. It comprises many services that can transfer small amounts of data over communication channels, as well as additional services brought into relationships that ensure the efficiency of service activities. SOA simplifies the structure of loosely coupled applications and enables enterprise services to work together. To assure the effectiveness of a service-oriented architecture, we must address service composition: the combination of services to perform a specific function within the architecture. In this paper we propose a service composition approach for SOA, present various techniques used for composing services, and provide a comparison between them.
Article
Hadoop is an open-source cloud computing system used in large-scale data processing, and it has become the basic computing platform for many internet companies. With the Hadoop platform, users can develop cloud computing applications and submit tasks to the platform. Hadoop has strong fault tolerance and can easily increase the number of cluster nodes, expanding the cluster linearly so that it can process larger datasets. However, Hadoop has some shortcomings, especially those exposed in practice by the MapReduce scheduler, which calls for more research on Hadoop scheduling algorithms. This survey provides an overview of the default Hadoop scheduler algorithms and their problems. It also compares five Hadoop framework scheduling algorithms in terms of the default scheduler algorithm to be enhanced, the proposed scheduler algorithm, the type of cluster targeted (heterogeneous or homogeneous), the methodology, and the classification of clusters based on performance evaluation. Finally, a new algorithm based on capacity scheduling and the use of perspective resource utilization to enhance Hadoop scheduling is proposed.