Figure 1 - uploaded by Mostafa Hagog
High-level design of interblock scheduler 

Source publication
Article
Full-text available
The GCC (GNU Compiler Collection) project of the Free Software Foundation has resulted in one of the most widespread compilers in use today, capable of generating code for a variety of platforms. Since 1987, many volunteers from academia and the private sector have worked continuously to improve the functionality and quality of GCC. So...

Context in source publication

Context 1
... For speculative moves we determine the conditional execution probability, where to check and update life information, and whether loads are exception free (i.e., executing the load will not cause an exception). The high-level design of the interblock scheduler is shown in Figure 1. The steps of computing flow-related information and data-dependency information are independent and can be executed either in order or in parallel. ...
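The pipeline the context describes can be sketched as two independent analysis passes feeding a speculation decision. The C below is a toy illustration only; all names (`block_info`, `compute_flow_info`, `may_speculate`) are hypothetical and are not GCC internals:

```c
#include <assert.h>

/* Toy model of the interblock scheduler pipeline described above.
 * All names here are illustrative, not GCC's actual API. */

typedef struct {
    double exec_prob;   /* conditional execution probability of the block */
    int    dep_ready;   /* 1 once data-dependency info has been computed */
} block_info;

/* Step 1: flow-related information (e.g. execution probability). */
static void compute_flow_info(block_info *b) { b->exec_prob = 0.5; }

/* Step 2: data-dependency information. Independent of step 1, which is
 * why the two steps could also run in parallel, as the text notes. */
static void compute_dep_info(block_info *b) { b->dep_ready = 1; }

/* A speculative move is only considered once both analyses are done
 * and the block is likely enough to execute. */
static int may_speculate(const block_info *b, double threshold) {
    return b->dep_ready && b->exec_prob >= threshold;
}
```

The point of the sketch is the data-flow: `may_speculate` needs the results of both passes, but neither pass needs the other.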

Similar publications

Article
Full-text available
To illustrate and broaden knowledge of some aspects of physics at teaching level, that is, university level and higher, the Javaoptics applet was adopted as free software under the GNU General Public License, an open-source license. This applet was used to show multiple beam interferences from a parallel dielectric thin film and to study the evoluti...
Book
Full-text available
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.1 or any later version published by the Free Software Foundation; with no Invariant Sections.
Thesis
Full-text available
This research strives to address the gap in the literature surrounding companies which identify with the philosophical values associated with the Free Software movement, which have historically been associated with Open Source businesses.
Conference Paper
Full-text available
Multilingualism is a reality in the 21st century, and new technologies are emerging as a powerful way to cope with its main issues and the challenges its treatment implies. In this sense, a great amount of work has been carried out over the last twenty years in the fields of Language Engineering and Applied Linguistics. A big effort has been made to de...
Article
Full-text available
Drawing on interviews with developers and close readings of site interfaces and architectures, this essay explores four Twitter alternatives: Twister, rstat.us, GNU social (a Free Software Foundation microblogging software project) and Quitter (a specific installation of GNU social). The interviews and analyses of these Twitter alternatives reveal...

Citations

... Typical compiler optimizations perform this only for individual store instructions, or intrinsics such as memset. While modern compiler transforms attempt to convert some loops to memset calls [18], this is only possible if a single-byte (or in some cases, two-byte) pattern is used. This is insufficient for many common cases, such as initializing an array of (four-byte) integer values to the value '1', as shown in Figure 5. ...
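The limitation quoted above is easy to see concretely: memset replicates a single byte, so it can zero an int array but cannot store the four-byte value 1. A small illustrative C check:

```c
#include <string.h>

/* memset can express "all bytes zero", so zeroing an int array works. */
static void zero_ints(int *a, int n) { memset(a, 0, n * sizeof a[0]); }

/* memset(a, 1, ...) would replicate the byte 0x01, producing
 * 0x01010101 per int -- NOT the integer value 1 -- so a loop (or a
 * smarter transform than loop-to-memset) is still required here. */
static void ones_ints(int *a, int n) {
    for (int i = 0; i < n; i++) a[i] = 1;
}
```

Trying `memset(a, 1, sizeof a)` instead of `ones_ints` fills each int with 0x01010101 (16843009), which is exactly why the single-byte-pattern restriction matters.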
Conference Paper
Full-text available
Usage of uninitialized values remains a common error in C/C++ code. This results not only in undefined and generally undesired behavior, but is also a cause of information disclosure and other security vulnerabilities. Existing solutions for mitigating such errors are not used in practice as they are either limited in scope (for example, only protecting the heap), or incur high runtime overhead. In this paper, we propose SafeInit, a practical protection system which hardens applications against such undefined behavior by guaranteeing initialization of all values on the heap and stack, every time they are allocated or come into scope. Doing so provides comprehensive protection against this class of vulnerabilities in generic programs, including both information disclosure and re-use/logic vulnerabilities. We show that, with carefully designed compiler optimizations, our implementation achieves sufficiently low overhead (<5% for typical server applications and SPEC CPU2006) to serve as a standard hardening protection in practical settings. Moreover, we show that we can effortlessly apply it to harden non-standard code, such as the Linux kernel, with low runtime overhead.
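SafeInit enforces initialization automatically in the compiler; the policy itself can be imitated by hand at allocation sites. A minimal sketch of the idea (zero every allocation), not the paper's implementation:

```c
#include <stdlib.h>
#include <string.h>

/* Hardened-allocation sketch: always hand back zeroed memory, so a
 * later read of a field the programmer forgot to initialize sees 0
 * instead of stale (possibly secret) data. This mimics the guarantee
 * SafeInit provides for heap allocations; the real system also covers
 * stack variables and relies on compiler optimizations to keep the
 * overhead low. */
static void *safe_alloc(size_t n) {
    void *p = malloc(n);
    if (p) memset(p, 0, n);   /* the initialization that was forgotten */
    return p;
}
```

The interesting part of the paper is not this memset but eliding it when the compiler can prove the memory is fully written before being read.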
... • GCC: Oedipus uses GCC [16] to compile the obfuscated programs in order to extract static and dynamic raw data from the obfuscated programs' executables. ...
Conference Paper
Obfuscation is a mechanism used to hinder reverse engineering of programs. To cope with the large number of obfuscated programs, especially malware, reverse engineers automate the process of deobfuscation, i.e. extracting information from obfuscated programs. Deobfuscation techniques target specific obfuscation transformations, which requires reverse engineers to manually identify the transformations used by a program, in what is known as a metadata recovery attack. In this paper, we present Oedipus, a Python framework that uses machine learning classifiers, viz. decision trees and naive Bayes, to automate metadata recovery attacks against obfuscated programs. We evaluated Oedipus' performance using two datasets totaling 1960 unobfuscated C programs, which were used to generate 11,075 programs obfuscated using 30 configurations of 6 different obfuscation transformations. Our results empirically show the feasibility of using machine learning to implement metadata recovery attacks, with classification accuracies of 100% in some cases.
... GCC is arguably the most utilized, publicly available compiler in use today. It is provided as the default compiler on most Linux systems, and it is a highly developed codebase which has been in production since the 1980s [7]. Because it has been in production for such a long time, we could use it in combination with the SPEC CPU2006 Benchmark Suites and two vastly different hardware platforms to determine whether the de facto standard compiler had improved its performance-based tuning for specific hardware over the last decade. ...
Article
Compile-time code optimization can result in significant performance gains. The size of those gains varies widely depending on the code being optimized, the hardware being compiled for, the performance aspect being targeted (e.g. speed, throughput, memory utilization, etc.), and the compiler used. The most recent version of the SPEC CPU 2006 benchmark suite was used to help understand the performance improvements achievable with GCC (GNU Compiler Collection) options, focusing mainly on the speed gains possible by tuning the compiler with the standard optimization levels as well as a hardware-processor-specific compiler option. The most standardized tuning options obtained for a Core i7 processor were compared against the same relative options used on a Pentium 4, to determine whether the GNU project has improved its performance-tuning capabilities for specific hardware over time.
... First, the people input into an open source system can be provided by a proprietary system ( Figure 4) or another open source system. In the case of a proprietary system, this input can come in the form of a formal commitment, for example, when a commercial company participates in an open source project (O'Mahony, 2007;Dahlander, 2007;Dahlander and Wallin, 2006;Lerner et al., 2006;Mustonen, 2005;Edelsohn et al., 2005;Grand et al., 2004;de Joode, 2004), or it can be informal, such as when a commercial developer spends part of his or her paid work time contributing to an open source project (Lakhani and Wolf, 2005). For example, Oracle Corporation is involved either directly or indirectly in more than 700 open source community projects. ...
... Tsantalis and Chatzigeorgiou (2009), Ayewah et al. (2008), Capra et al. (2008), Del Grosso et al. (2008), Koch and Neumann (2008), Sohn and Mok (2008), Wray and Mathieu (2008), Aberdour (2007), Ajila and Wu (2007), Cetin and Gokturk (2007), Falzone et al. (2007), Higo et al. (2007), Koru and Liu (2007), Muller-Prove (2007), Oh and Jeon (2007), Sampson (2007), Sen (2007), Wheeler (2007), Duguid (2006), Forge (2006), Goh (2006), O'Hanlon (2006), Stewart and Gosain (2006), Turnu et al. (2006), Yu et al. (2006), Alpern et al. (2005), Chan et al. (2005), Edelsohn et al. (2005), Falcioni (2005), Goldsborough (2005a), Gyimothy (2005), Koru and Tian (2005), Lerner and Tirole (2005), Tsantalis et al. (2005), Uchida et al. (2005), Wrosch (2005), Glance (2004), Lussier (2004), Messerschmitt (2004), Norris (2004), Paulson et al. (2004), Raymond (2004), Raymond and Messerschmitt (2004), Ruffin and Ebert (2004), Samoladas et al. (2004), Bayrak and Davis (2003), Fuggetta (2003), Nichols and Twidale (2003), Zhao and Elbaum (2003), Stamelos et al. (2002), Jørgensen (2001), Castelluccio (2000), Neumann Au et al. (2009), Ayewah et al. (2008), Crowston and Scozzi (2008), Kidane and Gloor (2007), Long and Siau (2007), Nizovtsev and Thursby (2007), Louridas (2006), Falcioni (2005), Gyimothy et al. (2005), Louridas (2005), Remillard (2005), Glance (2004), Koru and Tian (2004), Huntley (2003), Bollinger et al. (1999), McConnell (1999). ...
Article
Full-text available
The open source movement has grown steadily and matured in recent years, and this growth has been mirrored by a rise in open source related research. The objective of this paper is to pause and reflect on the state of the field. We start by conducting a comprehensive literature review of open source research, and organize the resulting 618 peer-reviewed articles into a taxonomy. Elements of this taxonomy are defined and described. We then draw on a number of existing categorization schemes to develop a framework to situate open source research within a wider nomological network. Building on concepts from systems theory, we propose a holistic framework of open source research. This framework incorporates current research, as represented by the taxonomy, identifies gaps and areas of overlap, and charts a path for future work.
... Many compilers offer an option to trade off floating point accuracy for speed. The GNU Compiler Collection (GCC) [3] for example offers a flag called -ffast-math. When this compiler flag is on, the compiler uses speed optimizations that can result in incorrect output for programs which depend on an exact implementation of IEEE or ISO specifications for math functions. ...
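A classic example of the trade-off the snippet describes: with `-ffast-math`, GCC is permitted to assume no NaNs exist, so the standard NaN self-comparison test can be optimized away. Under default, IEEE-conforming flags the function below behaves as written:

```c
#include <math.h>

/* Standard NaN test: NaN is the only value unequal to itself.
 * Compiled with plain `gcc file.c` this works as written; compiled
 * with `gcc -ffast-math file.c` the compiler may assume x == x
 * always holds and fold this to 0 -- exactly the "incorrect output
 * for programs which depend on an exact implementation of IEEE"
 * that the quoted passage warns about. */
static int is_nan(double x) { return x != x; }
```

Programs that never produce NaNs or infinities keep their meaning under `-ffast-math` and may get faster code; programs relying on IEEE semantics, like this one, do not.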
Conference Paper
Full-text available
Many multimedia applications rely on the computation of logarithms, for example, when estimating log-likelihoods for Gaussian Mixture Models. Knowing the demand to compute logarithms and other basic math functions rapidly, many hardware manufacturers provide libraries to perform calculations in hardware. Of course, these libraries are especially popular for use in computer vision or audio analysis algorithms, where large amounts of data have to be processed. A downside of using specialized hardware, though, is that it increases the investment cost and forces the user onto that hardware, which is especially cumbersome when algorithms optimized for different specialized hardware are to be combined. This article presents the realization of a novel platform-independent, fast C-language implementation of the logarithm function. The idea behind the approach is to take advantage of the large amount of cache available in current processors. The logarithm implementation is compared to the current state of the art, and we demonstrate the practical use of the algorithm in an actual speech analysis application.
... 3. We also can use GCC's "labels as values" [68], a C extension that allows a pointer to hold the address of a label and to goto it without a call in C. ...
... labels as values [68]. This extension allows one to store a label in a variable and branch to it later. ...
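The extension both snippets cite looks like this in practice ("labels as values" is a GCC C extension, also accepted by Clang; the tiny dispatcher below is purely illustrative):

```c
/* GCC "labels as values" extension: &&label yields the label's
 * address as a void*, and `goto *ptr` branches to it without a call.
 * This is the classic building block of threaded interpreters.
 * (Not standard C -- GCC/Clang only.) */
static int run_program(void) {
    /* A two-instruction "program": add 40, then halt. */
    void *program[] = { &&op_add, &&op_halt };
    int acc = 2, pc = 0;

    goto *program[pc++];
op_add:
    acc += 40;
    goto *program[pc++];
op_halt:
    return acc;
}
```

Storing `&&op_add` in a variable and dispatching with `goto *` is exactly the "store a label in a variable, and branch to it later" that the quoted context describes.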
... C++ objects require an external library to support run-time type checking and the use of exceptions. This library (libgcc, in the case of the GCC compiler [Project, 2004]) is written at user level and therefore depends on system calls, and hence on an operating system. ...
Article
Full-text available
The operating system is responsible for efficiently managing the resources available in the computing system in order to best satisfy the needs of applications. In this context, the construction of file systems is a rich area of study, given the impact that file manipulation has on the performance of a wide range of applications. File system implementation grows in complexity because of the need to integrate specific storage services such as encryption, compression, replication and distribution. The difficulties of offering file services efficiently become even clearer in scenarios where applications have distinct, or even conflicting, needs or behaviors. This research project investigates a solution to these two essential problems in file services: evolution and customizability. We propose the use of fine-grained flexibility as the basis of a file system architecture. This work presents the study, design and implementation of the K42 File System (KFS), available for the K42 and Linux operating systems. KFS uses an extremely simple and lightweight component-based architecture to achieve fine-grained flexibility. With this flexible architecture, it is possible to implement any file or directory of the file system as a composition of co... Master's dissertation.
Article
This article is devoted to studying the efficiency of the multiply-add instruction on the Baikal-T processor. Various examples of using the instruction are considered, measurements are taken, and conclusions are drawn about the cases in which the multiply-add operation yields a gain in computation and the situations in which using the instruction is unprofitable in terms of program execution speed.