Figure 26 - uploaded by Gert Jervan
Cost comparison of different methods. Cost of the pseudorandom test is taken as 100%.

Source publication
Article
Technological development is enabling the production of increasingly complex electronic systems. All those systems must be verified and tested to guarantee correct behavior. As complexity grows, testing is becoming one of the most significant factors contributing to the final product cost. The established low-level methods for hardware testi...

Similar publications

Technical Report
Modern electronics is turning increasingly to digital technology, which presents many advantages over analog: high immunity to interference, reconfigurability, etc. Today, digital processing techniques play a major role in modern public, professional and defense electronic systems. Furthermore, the technical realization of specific circui...

Citations

... Agilent, Advantest and Teradyne are examples of companies that provide these machines. ATEs mainly detect failures due to manufacturing defects, aging, environmental effects and others [7], and help manufacturers maintain their manufacturing tools. They are not practical for prototyping the IPs of university researchers and small companies because of their high cost. ...
Thesis
Conventionally, IC testing and speed characterization are carried out using very expensive Automatic Test Equipment (ATEs). Built-in self-test (BIST) techniques can also be used as a low-cost solution for at-speed testing. However, BIST may require some modification of the circuit under test (CUT) to cope with the pseudorandom nature of the test vectors (what is known as test point insertion). Also, speed characterization cannot be directly carried out by BIST. Other low-cost testing and speed characterization methods are needed, especially for developers of circuit IPs in small companies and universities. In this thesis, a special-purpose test and characterization processor (TACP) for IC testing and speed characterization has been developed, implemented and tested. The processor utilizes specially developed test support circuitry (TSC), which is fabricated on the chip containing the IPs under test. The TSC, in coordination with the off-chip stand-alone TACP processor, receives test data serially, reformats it, applies it to the IPs under test, then reformats the test results and sends them serially to the test processor. The TSC also includes a configurable clock generator which is controlled by the TACP. By controlling the testing frequency and test pattern application, the IPs can be characterized to find their maximum frequency of operation. A proof-of-concept implementation was realized using two FPGA boards: one for the processor and the other to emulate the chip containing the IPs and on-chip circuitry. Also, a complete user interface tool has been developed, allowing the user to write, load and administer test programs, download test data and receive the test results through a standard PC. http://eprints.kfupm.edu.sa/id/eprint/138864
... With the increasing complexity of designs, test generation at the gate level becomes a computationally expensive process. In order to handle complexity issues, hierarchical test generation approaches have been proposed [37]. Such approaches use high-level functional information to speed up the test generation process. ...
... Although the test patterns generated by an LFSR are still pseudorandom, the randomness they provide is acceptable for the BIST technique, considering the very low generation cost. In our approach we assume that the PRPG and MISR are implemented on Linear ... Determining the optimal ratio of pseudorandom and deterministic tests in the final test set is a complex task even for a single core [4], while considering the core as part of a SoC with additional constraints makes this task significantly more difficult. ...
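The LFSR-based pattern generation mentioned in this excerpt can be illustrated with a minimal Python sketch (illustrative only, not code from the cited work; the function name, tap choice and register width are assumptions). A Fibonacci LFSR XORs a few tap bits to form a feedback bit, so a maximal-length configuration cycles through every non-zero state, yielding cheap pseudorandom test patterns:

```python
def lfsr_patterns(seed, taps, width, count):
    """Generate pseudorandom test patterns with a Fibonacci LFSR.

    seed  -- non-zero initial register state (int)
    taps  -- bit positions XORed to form the feedback bit
    width -- register width in bits
    count -- number of patterns to emit
    """
    state = seed
    patterns = []
    for _ in range(count):
        patterns.append(state)
        # Feedback bit is the parity (XOR) of the tapped positions.
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        # Shift left by one and insert the feedback bit, masked to width.
        state = ((state << 1) | fb) & ((1 << width) - 1)
    return patterns

# A 4-bit maximal-length LFSR (taps at bits 3 and 2) cycles through
# all 15 non-zero states before repeating.
pats = lfsr_patterns(seed=0b1000, taps=(3, 2), width=4, count=15)
assert len(set(pats)) == 15
```

In hardware this is just a shift register plus one XOR gate per tap, which is why the generation cost is so low compared to storing deterministic patterns.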
... Several Design for Testability (DFT) techniques have been proposed to solve the problem, one of them being internal scan. The general idea behind it is to break the feedback paths and to improve the observability and controllability of memory elements by integrating an overlaid shift register called a scan path [4]. However, this technique forces designers to accept several factors that increase the total cost of the IC, such as increased silicon area, a larger number of pins needed, longer test application time, etc. ...
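The scan-path idea in this excerpt can be sketched behaviorally in a few lines of Python (a toy model under assumed names, not the cited work's implementation): the circuit's flip-flops are chained into a shift register so a test pattern can be shifted in through one pin (controllability), one functional clock captures the combinational response, and the captured state is shifted back out (observability):

```python
class ScanChain:
    """Toy model of an internal scan path: the flip-flops of the
    circuit are chained into a shift register for test access."""

    def __init__(self, n_flops):
        self.flops = [0] * n_flops

    def shift_in(self, bits):
        # Scan mode: each clock shifts one bit into the chain,
        # giving full controllability of the flip-flop contents.
        for b in bits:
            self.flops = [b] + self.flops[:-1]

    def capture(self, comb_logic):
        # One functional clock: flip-flops load the response of the
        # combinational logic to the shifted-in pattern.
        self.flops = comb_logic(self.flops)

    def shift_out(self):
        # Shifting the captured state out exposes internal values
        # at a single output pin (observability).
        out, self.flops = list(self.flops), [0] * len(self.flops)
        return out

# Load a pattern, capture the response of a toy XOR network,
# and read the result back out.
chain = ScanChain(3)
chain.shift_in([1, 0, 1])
chain.capture(lambda f: [f[0] ^ f[1], f[1] ^ f[2], f[2]])
resp = chain.shift_out()   # -> [1, 1, 1]
```

The model also makes the cost trade-off visible: every flip-flop gains a scan multiplexer (area), the chain needs scan-in/scan-out pins, and test time grows with chain length, matching the drawbacks the excerpt lists.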
Conference Paper
During the most recent decades, modern equation-based object-oriented modeling and simulation languages, such as Modelica, have become available. This has made it easier to build complex and more detailed models for use in simulation. To be able to simulate such large and complex systems, it is sometimes not enough to rely on the ability of a compiler to optimize the simulation code and reduce the size of the underlying set of equations to speed up the simulation on a single processor. Instead, we must look for ways to utilize the increasing number of processing units available in modern computers. However, to gain any increased performance from a parallel computer, the simulation program must be expressed in a way that exposes the potential parallelism to the computer. Doing this manually is not a simple task, and most modelers are not experts in parallel computing. Therefore, it is very appealing to let the compiler parallelize the simulation code automatically. This thesis investigates techniques of using automatic translation of models in typical equation-based languages, such as Modelica, into parallel simulation code that enables high utilization of available processors in a parallel computer. The two main ideas investigated here are the following: first, to apply parallelization simultaneously to both the system equations and the numerical solver, and second, to use software pipelining to further reduce the time processors are kept waiting for the results of other processors. Prototype implementations of the investigated techniques have been developed as a part of the OpenModelica open source compiler for Modelica. The prototype has been used to evaluate the parallelization techniques by measuring the execution time of test models on a few parallel architectures and to compare the results to sequential code as well as to the results achieved in earlier work. A measured speedup of 6.1 on eight processors on a shared-memory machine has been reached. It still remains to evaluate the methods for a wider range of test models and parallel architectures.
Article
This thesis proposal discusses control of dynamic systems and its relation to time. Although much research has been done concerning control of dynamic systems and decision making, little research exists about the relationship between time and control. Control is defined as the ability to keep a target system/process in a desired state. In this study, properties of time such as fast, slow, overlapping, etc. should be viewed as a relation between the variety of a controlling system and a target system. It is further concluded that humans have great difficulties controlling target systems that have slow-responding processes or "dead" time between action and response. This thesis proposal suggests two different studies to address the problem of human control over slow-responding systems and dead time in organisational control. This work has been supported by the National Defence College. Feedforward Control in Dynamic Situations, Björn Johansson, ISBN 91-7373-664-3, ISSN 0208-7971.
Article
In this work, the application of structural defect-search methods to checking design errors during the verification of digital devices has been investigated. A structural algorithm for searching design errors is developed. A diagnostic experiment with an HDL model of a digital device, represented by a graph model, is carried out.