Figure 4 - uploaded by David Ebo Adjepon-Yamoah
Typical architecture for a parallel and distributed program. 

Source publication
Thesis
Full-text available
This document outlines my MSc Dissertation Project, the purpose of which is to develop a tool to support the teaching of Concurrent Programming. Once developed, the teaching tool will mainly help Java and C++ students to better understand the benefits of concurrent programs. The tool provides an opportunity to test the synchronised Concurrent...

Contexts in source publication

Context 1
... areas covered in this project are researched to ascertain all the available information that is relevant to this academic exercise. Background research has been undertaken in areas that are considered thematic and hence require a thorough and detailed description to facilitate a distinct definition of the project’s domain. These thematic areas are described below. The concept of concurrency has been a part of our lives in the things we do and observe in our environments. Snow (1992) reflects that the idea of different tasks being carried out at the same time, in order to achieve a particular end result more quickly, has been with us from time immemorial. Sometimes the tasks may be regarded as independent of one another. He provides an analogy of two gardeners: one planting potatoes and the other cutting the lawn (provided the potatoes are not to be planted on the lawn!) will complete the two tasks in the time it takes to do just one of them. Sometimes the tasks are dependent upon each other, as in a team activity such as is found in a well-run hospital operating theatre. Here, each member of the team has to co-operate fully with the other members, but each member has his/her own well-defined task to carry out. [13] Computers also, at various points in their operations, execute tasks concurrently. “The activity described by a computer program can also be subdivided into simpler activities, each described by a subprogram. In traditional sequential programs, these subprograms or procedures are executed one after the other in a fixed order determined by the program and its input. The execution of one procedure does not overlap in time with another. With concurrent programs, computational activities are permitted to overlap in time and the subprogram executions describing these activities proceed concurrently.” [14] Concurrent program execution is usually made up of many programs or subprograms executed by one or more processors. A single processor is capable of executing concurrent programs because it can interleave “the instructions from multiple program executions” [14] to simulate concurrency; hence, it creates an illusion of parallel execution. Concurrent programs can be executed on a single processor by interleaving the execution steps of each process in a time-slicing way, or can be executed in parallel by assigning each computational process to one of a set of processors that may be close together or distributed across a network, as shown in Figure 4. The main challenges in designing concurrent programs are ensuring the correct sequencing of the interactions or communications between different computational executions, and coordinating access to resources that are shared among executions. [15] The execution of a program or a subprogram is referred to as a process. Pure concurrency, or parallel execution, is achieved when multiple processors are used to execute instructions of concurrent processes at the same time. Such an activity is depicted as “DISTRIBUTED APPLICATION” in Figure 4. This project focuses on executions conducted by either one processor or dual cores (i.e. two processors), because most computers available for teaching and learning concurrent programming in higher institutions have only single-core or dual-core processors.
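To make the interleaving and shared-resource challenges concrete, here is a minimal Java sketch (illustrative only, not code from the thesis): two threads increment one shared counter, and the synchronized keyword coordinates their access so that interleaved updates are not lost.

```java
// A minimal sketch: two threads share one counter. Without the
// synchronized keyword, their interleaved read-modify-write steps
// could lose updates; with it, access to the shared resource is
// coordinated and the final count is deterministic.
public class SharedCounter {
    private int count = 0;

    // synchronized ensures only one thread updates count at a time
    public synchronized void increment() {
        count++;
    }

    public int get() {
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        SharedCounter counter = new SharedCounter();
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                counter.increment();
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();  // wait for both interleaved executions to finish
        t2.join();
        System.out.println("Final count: " + counter.get()); // always 200000
    }
}
```

On a single core the two threads are time-sliced; on a dual-core machine they may run truly in parallel, yet the same coordination is needed in both cases.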
A white paper published by Microsoft Corporation in 2007, titled “THE MANYCORE SHIFT: Microsoft Parallel Computing Initiative Ushers Computing into the Next Era”, provides a flashback on the advances made over the years with regard to concurrent or parallel computing. It notes that interest in parallel computing dates back to the late 1950s, with advancements surfacing in the form of supercomputers throughout the 60s and 70s. These were shared-memory multiprocessors, with multiple processors working side by side on shared data. In the mid 1980s, a new kind of parallel computing was launched when the Caltech Concurrent Computation project built a supercomputer for scientific applications from 64 Intel 8086/8087 processors. This system showed that extreme performance could be achieved with mass-market, off-the-shelf microprocessors. These massively parallel processors (MPPs) came to dominate the top end of computing, with the ASCI Red supercomputer in 1997 breaking the barrier of one trillion floating-point operations per second. Since then, MPPs have continued to grow in size and power. Today, parallel computing is becoming mainstream based on multi-core processors. Fortunately, the continued transistor scaling predicted by Moore’s Law will allow for a transition from a few cores to many [17]. Figure 5 below shows a summary of the generations of computing systems with their accompanying experiences and benefits to users. Information Technology is currently indispensable in all facets of life, and every sector of the global economy is heavily dependent on it. The Global Information Technology Report (GITR) states in its 2012 edition that in 2001, when the World Economic Forum first published the report, the dot-com bubble had just burst; there were fewer than 20 million mobile phone users in all of Africa; and Apple Inc.’s product line was confined to Macintosh computers. That report presented an optimistic view of the future, highlighting the transformational potential of information and communication technologies (ICT) in advancing the progress of global society and business. Today there are more than 500 million mobile phone subscribers in Africa, and Apple is the world’s largest company by market capitalization, producing iPhones, iPods, and iPads along with Mac computers. Despite the strides the sector has made since the technology bust in 2001, however, we believe we are only just beginning to feel the impact of digitization: the mass adoption by consumers, businesses, and governments of smart and connected information and communication technology (ICT) spreading all over the world. [18] “The Information Technology (IT) sector has been a leading driver of economic growth in the modern world. As recent downturns have shown, a significant drop in the IT sector has widespread ramifications for all areas of our economy” [19]. At the core of such fast-paced development is the concept of concurrency or parallel computing. “The fate of much of the IT industry rests on the success of main-streaming parallel (or concurrent) computing” [19]. Hughes (2003) says that programs that are properly designed to take advantage of parallelism can execute much faster than their sequential counterparts, which is a market advantage. In other cases the speed is used to save lives; in such situations, faster means better.
Cameron Hughes and Tracey Hughes explain below the pivotal role of parallel computing in their book titled “Parallel and Distributed Programming Using C++”, and it is supported by Figure 1. The solutions to certain problems are represented more naturally as a collection of simultaneously executing tasks. This is especially the case in many areas of scientific, mathematical, and artificial intelligence programming. This means that parallel programming techniques can save the software developer work in some situations by allowing the developer to directly implement data structures, algorithms, and heuristics developed by researchers. Specialized hardware can be exploited. For instance, in high-end multimedia programs the logic can be distributed to specialized processors for increased performance, such as specialized graphics chips, digital sound processors, and specialized math processors. These processors can usually be accessed simultaneously. Computers with MPP (Massively Parallel Processors) have hundreds, sometimes thousands, of processors and can be used to solve problems that simply cannot realistically be solved using sequential methods. With MPP computers, it is the combination of fast with pure brute force that makes the impossible ...
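As an illustration of expressing a solution as a collection of simultaneously executing tasks, here is a short Java sketch (my own example, not code from the Hughes book or the thesis): an array sum is split between two workers so that a dual-core machine, the hardware this project targets, can compute both halves in parallel and combine the partial results.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// A sketch of task decomposition: summing an array by splitting it
// between two workers, mirroring how a dual-core machine can run
// both halves at the same time.
public class ParallelSum {
    public static void main(String[] args) throws Exception {
        long[] data = new long[10_000_000];
        for (int i = 0; i < data.length; i++) data[i] = i;

        ExecutorService pool = Executors.newFixedThreadPool(2);
        int mid = data.length / 2;

        // Each task sums one half of the array independently.
        Future<Long> left  = pool.submit(() -> sum(data, 0, mid));
        Future<Long> right = pool.submit(() -> sum(data, mid, data.length));

        long total = left.get() + right.get(); // combine partial results
        System.out.println("Total: " + total);
        pool.shutdown();
    }

    static long sum(long[] a, int from, int to) {
        long s = 0;
        for (int i = from; i < to; i++) s += a[i];
        return s;
    }
}
```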

Similar publications

Article
Full-text available
In this work we present a human pose estimation method based on the skeleton fusion and tracking using multiple RGB-D sensors. The proposed method considers the skeletons provided by each RGB-D device and constructs an improved skeleton, taking into account the quality measures provided by the sensors at two different levels: the whole skeleton and...
Conference Paper
Full-text available
Cloud services have been actively used for transactional and batch workloads. Recently, multi-threaded high-performance computing (HPC) workloads have started to emerge on the cloud as well. Unlike most traditional data center loads, HPC workloads highly utilize the servers. The energy efficiency and performance of HPC loads, however, vary strongly...
Article
Full-text available
The process allocation is a problem in high performance computing, especially when using heterogeneous architectures involving diverse performance characteristics such as number of cores and their frequencies, multithreading technologies, cache memory etc. In order to improve the application performance, it is necessary to consider which processing...
Article
Full-text available
provides insight into both the longer-term range of riverine forms and processes under a similar hydroclimatic regime and the underlying landscape template for restoration. Along the continuum of restoration from purely process-based modeling to restoring to a reference condition, analysis of the historical range of variability of channel planfo...
Article
Full-text available
Memory and energy optimization strategies are essential for the resource-constrained wireless sensor network (WSN) nodes. In this article, a new memory-optimized and energy-optimized multithreaded WSN operating system (OS) LiveOS is designed and implemented. Memory cost of LiveOS is optimized by using the stack-shifting hybrid scheduling approach....

Citations

Article
Full-text available
It has been observed that while Industry 4.0 focuses on production efficiency in industry, the individual is overlooked. According to the Society 5.0 plan, which was developed with the individual at its center, the aim is to create a happy and prosperous society in which the damage caused by Industry 4.0 is also repaired. There are some determined efforts toward building this future, but the literature contains no satisfactory study of how Society 5.0 is to be built. This reveals the need for a proposed method that can lead to Society 5.0. The present study seeks to describe such a method while remaining within the bounds of social informatics. The research, conducted with an analytical approach and qualitative research techniques, finds that the ACP approach offers effective scientific solutions to management and control problems by using the infrastructure of CPSS, which is formed by combining three different systems. Here, social interactions can be modeled and observed at a micro level. Individual, institutional, and managerial capabilities can be developed through the joint management of humans and the collaborative intelligence in the environment. This shows that reaching smart societies such as the envisioned Society 5.0, and beyond, is possible with this method. This result, which shows that social computing is possible by modeling social systems, also reveals that ACP gives the social sciences the ability to conduct experiments and observations.
Article
Fine-grained locks are frequently used to mitigate lock contention in multithreaded programs running on shared-memory multicore processors. However, a concurrent program based on fine-grained locks is hard to write, especially for beginners in a concurrent programming course. Helping participants learn fine-grained locking has therefore become increasingly important and urgent. To this end, this paper presents a novel refactoring-based approach to enhance the learning effectiveness of fine-grained locks. Two refactoring tools are introduced to provide illustrating examples for participants by converting original coarse-grained locks into fine-grained ones automatically. Learning effectiveness and limitations are discussed when the refactoring tools are applied. We evaluate students' outcomes with two benchmarks and compare their performance in Fall 2018 with that in Fall 2019. We also conduct experiments on students' outcomes by dividing them into two groups (A and B) in a controlled classroom, where participants in group A learn fine-grained locks with the help of the refactoring tools while those in group B do not access these tools. Evaluation of the results when students have been taught with the refactoring-based approach reveals a significant improvement in their learning.
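The coarse-to-fine refactoring this abstract describes can be pictured with a hypothetical Java example (my own sketch, not code or output from the paper's tools): a class whose two independent counters are first guarded by one coarse lock and then by one lock each, so unrelated updates no longer contend.

```java
// Hypothetical illustration (not code from the cited paper) of the
// coarse- to fine-grained lock refactoring it describes.
public class PairCounter {
    private final Object hitsLock = new Object();
    private final Object missesLock = new Object();
    private long hits, misses;

    // Coarse-grained version: one lock (the instance monitor) guards
    // both counters, so unrelated updates contend with each other.
    // public synchronized void recordHit()  { hits++; }
    // public synchronized void recordMiss() { misses++; }

    // Fine-grained version: each counter has its own lock, so threads
    // updating hits never block threads updating misses.
    public void recordHit() {
        synchronized (hitsLock) { hits++; }
    }

    public void recordMiss() {
        synchronized (missesLock) { misses++; }
    }
}
```

The trade-off, as the abstract implies, is that splitting locks correctly is error-prone by hand, which is what motivates automating the conversion.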