Gilles Bizot's research while affiliated with Bordeaux INP and other places
What is this page?
This page lists the scientific contributions of an author, who either does not have a ResearchGate profile, or has not yet added these contributions to their profile.
ResearchGate automatically created this page to provide a record of this author's body of work. We create such pages to advance our goal of building and maintaining the most comprehensive scientific repository possible. In doing so, we process publicly available (personal) data relating to the author as a member of the scientific community.
Publications (7)
The coming era of chips consisting of billions of gates foreshadows processors containing thousands of unreliable cores. In this context, high energy efficiency will be available, under the constraint that applications leverage the large number of computing cores while masking frequent faults of the chip. In this paper, a high-level method is pro...
With advanced technologies (typ. < 32 nm), manufacturing variability is more and more difficult to control. It severely impacts the operating frequency and the consumed energy, and induces more and more failures inside the device. This is particularly true for MPSoCs with a large number of computing cores. With the increasing needs (pe...
In previous publications, a self-recovering strategy ([1]), which is able to "re-map" application tasks dynamically on a multi-core system, was presented. Based on run-time failure-aware techniques, this Self-Recovering strategy seamlessly guarantees termination and delivery of the expected results, despite multiple node and link failures in a 2D me...
In this paper, a Self-Recovering strategy, which is able to "re-map" application tasks dynamically on a multi-core system, is presented. Based on run-time failure-aware techniques, this Self-Recovering strategy seamlessly guarantees termination and delivery of the expected results despite multiple node and link failures in a 2D mesh topology. It has...
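The abstracts above describe dynamically re-mapping application tasks onto surviving cores of a 2D mesh when nodes fail. A minimal sketch of that idea, assuming a nearest-healthy-node policy by Manhattan distance (the class, names, and policy here are illustrative, not the paper's actual algorithm):

```python
# Illustrative sketch (not the authors' implementation): tasks mapped onto
# a 2D mesh are re-mapped onto healthy nodes when a node fails.
class MeshRemapper:
    def __init__(self, rows, cols):
        self.nodes = {(r, c) for r in range(rows) for c in range(cols)}
        self.failed = set()
        self.mapping = {}  # task -> node

    def healthy(self):
        return self.nodes - self.failed

    def map_tasks(self, tasks):
        # Initial placement: round-robin over healthy nodes.
        alive = sorted(self.healthy())
        for i, task in enumerate(tasks):
            self.mapping[task] = alive[i % len(alive)]

    def fail_node(self, node):
        # A node failure triggers re-mapping of its tasks onto the
        # nearest healthy node (Manhattan distance on the mesh).
        self.failed.add(node)
        for task, n in self.mapping.items():
            if n == node:
                self.mapping[task] = min(
                    self.healthy(),
                    key=lambda m: abs(m[0] - node[0]) + abs(m[1] - node[1]))

mesh = MeshRemapper(3, 3)
mesh.map_tasks(["t0", "t1", "t2", "t3"])
mesh.fail_node((0, 0))
# All tasks now run on healthy nodes despite the failure.
assert all(n not in mesh.failed for n in mesh.mapping.values())
```

The real strategy additionally handles link failures and guarantees termination at run time; this sketch only shows the re-mapping step.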
The advent of Deep Submicron technology opens the way to many-core processor chips. However, the variability and reliability of these processes pose new challenges. In particular, the mapping of applications will require specific strategies to leverage the abundance and diversity of the computation cores. In this work, a high-level study of the...
As technology scales, designing a massively parallel multi-core system atop less reliable hardware poses great challenges for researchers and designers. In this environment, ignoring variation effects when scheduling applications or when managing power with Dynamic Voltage and Frequency Scaling (DVFS) is suboptimal. We present a varia...
Citations
... In 2010, the fault-tolerant approach was published at the IEEE International Symposium on Network Computing and Applications in [90]. The variability-aware technique was defended in two papers at the IEEE International On-Line Testing Symposium, in 2011 ([8]) and 2013 ([91]). ...
... In a future single-chip massively parallel tera-device processor consisting of 4000 processing nodes using 250 Mbit of memory per node (a total of 1 Terabit of memory), and employing ECC-based repair as envisioned in the CELLS framework [23][24][25][26], for the 10^-3 faulty-cell probability each node will experience one interruption for error recovery every 625 years, while for the 10^-4 faulty-cell probability each node will experience one interruption for error recovery every 6000 years. Furthermore, as the CELLS framework performs checkpoint-free error recovery by means of an innovative approach exploiting hierarchical task allocation in the multiprocessor grid [27], the performance loss induced by checkpointing is also eliminated. Thus, ECC-based repair reusing the ECC implemented for soft-error mitigation is a winning strategy in all of the above scenarios. ...
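The memory figure quoted in this excerpt can be checked directly: 4000 nodes at 250 Mbit each do total one Terabit.

```python
# Sanity check of the quoted memory figure: 4000 nodes x 250 Mbit/node.
nodes = 4000
mbit_per_node = 250
total_mbit = nodes * mbit_per_node
assert total_mbit == 1_000_000  # 10^6 Mbit = 1 Tbit
```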
... Software techniques offer higher flexibility and/or better capability to adapt to dynamically changing operating conditions. Most prior work targets intermittent timing errors and proposes solutions to prevent errors via careful workload allocation [4,8,16,23,24]. A combined hardware/software approach by Papagiannopoulou et al. [20] proposes a reactive technique that leverages hardware transactional memory (HTM) to roll back a processor's state to a prior safe point if errors are encountered. Specifically, the processor's voltage is scaled down while keeping frequency constant, up to the point of first failure (PoFF), at which time the processor core enters a recovery mode that restores the core to a safe voltage level. ...