Fig 1 - uploaded by Amir Ehsani Zonouz
Memory hierarchy design 

Source publication
Article
Full-text available
Developing widely useful mobile computing applications presents difficult challenges. On one hand, mobile users demand fast response times and deep, relevant content. On the other hand, mobile devices have limited storage, power, and communication resources. Caching frequently accessed data items on the mobile client is an effective technique to imp...

Context in source publication

Context 1
... INTRODUCTION. Today, progress in microprocessors and in the technologies they use has widened the speed gap between processors and memories. This gap necessitates a hierarchical memory design, with a different size and speed at each memory level. Fig. 1 demonstrates a simple memory hierarchy design. Much research has been done on measuring and recommending near-optimal cache configurations from a power-consumption point of view. The growing demand for embedded computing platforms, mobile systems, and general-purpose handheld devices encourages us to work further on cache performance [13]. For instance, in [1], the authors determined that high-performance caches were also the lowest-power caches, since they reduce traffic to the lower levels of the memory system. In this process, parameters such as block size, number of sets, and the replacement algorithm strongly affect memory accesses. Using improper, inefficient replacement policies in cache structures increases miss rates, and therefore both the access time to a desired block and power consumption. Selecting an efficient replacement policy thus plays a key role in decreasing hit time and memory power consumption. Many algorithms can be derived from the use of caches for memory pages in Operating System (OS) implementations, which have different access characteristics than the canonical problems yet share many of the same principles [2][15]. Existing cache replacement algorithms do not differentiate between instruction-cache and data-cache blocks. But there is one very important difference: instructions usually are not modified. This fact guides us in developing distinct algorithms, with their own special features, for the data and instruction caches. Most tools for calculating power consumption could not easily support our new replacement policies. 
Therefore we decided to modify one of the available simulation tools, namely SimpleScalar [3], and added a power-support section to it in order to simulate the algorithms. SimpleScalar is an execution-driven simulator that uses binaries compiled for a MIPS-like target. It can accurately model a high-performance, dynamically scheduled, multi-issue processor. The rest of this paper is organized as follows. Section II studies some existing cache replacement policies used in modern systems. Section III proposes three new modified replacement policies and describes them thoroughly. Section IV describes the power formulation in our modified power tool. In Section V we evaluate our proposed replacement policies, compare them with the algorithms previously implemented in this tool, and analyze the results. Finally, Section VI summarizes the paper and shows our roadmap for future ...
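The excerpt's central claim is that the replacement policy directly drives the miss rate, and thereby access time and power. A minimal illustrative sketch (not the paper's modified SimpleScalar, and the function name `lru_miss_rate` is hypothetical) of a fully associative LRU cache over a block-address trace:

```python
from collections import OrderedDict

def lru_miss_rate(trace, capacity):
    """Simulate a fully associative cache with LRU replacement
    and return the miss rate for a sequence of block addresses."""
    cache = OrderedDict()  # keys kept in LRU -> MRU order
    misses = 0
    for block in trace:
        if block in cache:
            cache.move_to_end(block)       # hit: mark most recently used
        else:
            misses += 1
            if len(cache) >= capacity:
                cache.popitem(last=False)  # evict least recently used
            cache[block] = True
    return misses / len(trace)

# A looping trace of 4 distinct blocks against a 3-block cache is
# LRU's pathological case: every access misses.
print(lru_miss_rate([0, 1, 2, 3] * 10, capacity=3))  # → 1.0
```

The pathological looping trace shows why a poorly matched policy inflates the miss rate: on that workload even FIFO does no worse, which is the kind of trade-off the paper's evaluation section quantifies.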

Similar publications

Technical Report
Full-text available
In this study, the risk assessment capabilities of the RAPID-N were improved by designing and implementing features needed for analysing flood impacts on fixed industrial installations. With the developed prototype, which will gradually replace the existing version currently available on-line, the JRC is able to provide a tool to industry and autho...

Citations

... Three important replacement algorithms, Not Recently Used (NRU), Enhanced Not Recently Used (ENRU), and ENRU-new, are addressed in [6]. NRU is designed to reduce the power consumed in maintaining the least recently used block within LRU's linked list, using instead a status R bit that is set whenever a block is accessed. ...
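The R-bit mechanism described above can be sketched as follows. This is an illustrative approximation of generic NRU, not the exact algorithm of [6]; the helper names `nru_touch` and `nru_victim` are hypothetical, and the cache is modeled simply as a dict mapping each resident block to its R bit:

```python
import random

def nru_touch(cache, block):
    """On access, set the block's reference (R) bit."""
    cache[block] = 1

def nru_victim(cache):
    """Pick a victim: prefer any block whose R bit is clear.
    If every R bit is set, clear them all and pick arbitrarily."""
    unreferenced = [b for b, r in cache.items() if r == 0]
    if not unreferenced:
        for b in cache:
            cache[b] = 0          # periodic reset of all R bits
        unreferenced = list(cache)
    return random.choice(unreferenced)
```

Unlike LRU, no ordering structure is maintained on every access, which is precisely the power saving the citation attributes to NRU: a hit costs one bit-set rather than a list update.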
... In summary, the three-layer design of [16] is adopted and forms the foundation for future enhancement. This research already adopts one such enhancement by applying the ENRU algorithm [6] instead of the LRU algorithm in the top layer. ENRU was selected because it is similar in some ways to LRU but needs no storage to track the least recently requested block; instead, it selects a victim according to two simple status bits along with write-back addressing. ...
... The proposed approach is implemented in Java in the Eclipse environment. The proposed method is developed on top of the existing models presented in [10] and [6]. It is designed to handle different sets of inputs to find the optimal case. ...
Chapter
Mobile computing requires adopting various techniques to enhance the performance of the hardware used at the end device. Edge and mobile devices usually have limited resources compared to data centers and dedicated machines that are designed and built to support huge computations. While the performance of these devices can practically be raised by adding more resources, it is also necessary to identify any other incremental improvements that can enhance it. Memory access is one of the architectural areas that can contribute to this. This work presents a method, the Enhanced Not Recently Used (ENRU) replacement algorithm, for reducing the energy associated with memory cache accesses. The proposed method works with a fully associative cache and is implemented using three logical layers: a top layer with fewer blocks, implemented with ENRU; a middle layer with more blocks; and a bottom layer implemented using the FIFO algorithm.
Keywords: Cache memory, Energy efficiency, ENRU, LRU, Memory management, Mobile computing
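Of the three layers the abstract names, the bottom FIFO layer is the simplest to illustrate. A minimal sketch (the class name `FIFOLayer` and its interface are assumptions, not the chapter's actual implementation):

```python
from collections import deque

class FIFOLayer:
    """Bottom-layer cache in a layered design: blocks are evicted
    strictly in arrival order, regardless of how often they are reused."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.order = deque()   # arrival order of resident blocks
        self.blocks = set()    # fast membership test

    def access(self, block):
        """Return True on a hit; on a miss, insert the block,
        evicting the oldest resident block if the layer is full."""
        if block in self.blocks:
            return True        # hit: FIFO order is deliberately unchanged
        if len(self.blocks) >= self.capacity:
            self.blocks.discard(self.order.popleft())
        self.order.append(block)
        self.blocks.add(block)
        return False
```

FIFO needs no per-access bookkeeping at all on a hit, which fits its role here as the cheapest, largest layer, while the smaller top layer spends its (modest) ENRU bookkeeping where it matters most.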
... Even though considerable research has been done in this area, many challenges still remain. In this section some of them are pointed out, such as poor bandwidth over most networks [37]. Therefore, there is a need to make data lighter for easy and fast transfer. ...
Article
Full-text available
It is progressively important for mobile device databases to achieve additional coordination of diverse computerized operations. To that end, many research works have sought more compatible solutions for synchronizing data between mobile-device and server-side databases. The objective of this study is to survey the state of the art in various aspects of mobile devices with respect to data sharing and synchronization between mobile-device databases and server-side databases, as opposed to server-to-server synchronization. To achieve this, five electronic databases were searched for primary studies using keywords and search terms related to mobile data synchronization, covering journals, conferences, symposiums, and book chapters. The study produced interesting results in areas such as data synchronization, data processing, vendor dependency, data inconsistency, and conflict resolution, with 19 primary studies selected from the search process. In conclusion, mobile data synchronization has been significantly discussed in the database domain. In general, however, existing synchronization solutions were found to suffer from a number of limitations, including lack of data consistency, conflict resolution, data-processing responsibility, vendor dependency, data types, bandwidth utilization, and network fluctuation during data transmission. Moreover, the applicability of the existing solutions has not yet been reported.
Conference Paper
With the increasing number of mobile devices, it is necessary to create solutions adapted to this type of device that respect its limited processing capacity, memory, and bandwidth. We propose a synchronization model based on message digests to synchronize relational databases between mobile devices and a server, in order to minimize both the data transferred between the device and the server and the processing done by the mobile device. The approach consists of a synchronization model in which the amount of data to be exchanged is reduced to a minimum. To achieve this, a registry of new, modified, and removed content is kept on the device, whose purpose is to send only the data essential for synchronization from the device to the server. For synchronization from the server to the device, the modifications are calculated on the server and sent to the device. This model is independent of proprietary solutions and can be adapted to any system or platform, typically requiring software development in the mobile application and the application server. Test results show that the proposed model achieves better synchronization times than competing models, while always fulfilling the stated objectives.
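The digest idea above can be sketched in a few lines. This is an illustrative approximation under assumed details, not the paper's protocol: the helpers `row_digest` and `changed_rows` are hypothetical, SHA-256 stands in for whatever digest the authors use, and rows are keyed by primary key:

```python
import hashlib

def row_digest(row):
    """Digest of one relational row; column order must be fixed
    so that identical rows always hash identically."""
    payload = "|".join(str(v) for v in row).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

def changed_rows(device_rows, server_digests):
    """Send only rows whose digest differs from (or is absent in)
    the server's digest table, instead of the whole table."""
    return {
        key: row
        for key, row in device_rows.items()
        if server_digests.get(key) != row_digest(row)
    }

device = {1: ("alice", "a@example.com"), 2: ("bob", "b@example.com")}
server = {1: row_digest(("alice", "a@example.com"))}  # row 2 unknown server-side
print(changed_rows(device, server))  # only row 2 needs to travel
```

Comparing fixed-size digests rather than row contents is what keeps both the transferred payload and the device-side processing small, the two goals the abstract states.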