Fig 2 - uploaded by L. C. Wang
Examples of Boolean learnability


Source publication
Conference Paper
Full-text available
In simulation-based functional verification, composing and debugging testbenches can be tedious and time-consuming. A simulation data-mining approach, called TTPG (C. Wen, L-C Wang et al., 2005), was proposed as an alternative for functional test pattern generation. However, the core of the simulation data-mining approach is Boolean learning, which tri...

Contexts in source publication

Context 1
... To better understand how difficult the Boolean learning problem is, a learning algorithm based on logic optimization was first proposed for application to Boolean circuits in [3]. The authors use three ISCAS85 example circuits, c432, c499, and c880, to illustrate different levels of difficulty in Boolean learnability in Figure 2. The x-axis represents the number of patterns used to build the learning models, while the y-axis represents the learning accuracy measured on a separate set of randomly generated patterns. ...
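As a rough, hypothetical illustration of this kind of learnability experiment (not the logic-optimization learner of [3]), one can train a simple stand-in model on N random input/output patterns of a Boolean function and score it on a held-out random set; an "easy" function shows accuracy climbing with N, while a "hard" one stays near chance. The 1-nearest-neighbor learner and the example target functions below are assumptions for illustration only:

```python
import random

def hamming(a, b):
    # Hamming distance between two bit vectors.
    return sum(x != y for x, y in zip(a, b))

def one_nn_predict(train, x):
    # Predict the label of the closest training pattern (ties: first found).
    best = min(train, key=lambda t: hamming(t[0], x))
    return best[1]

def learnability_curve(f, n_inputs, train_sizes, n_test=200, seed=0):
    # For each training-set size, fit a 1-NN model on random simulation
    # patterns and score it on a separate set of random patterns,
    # mirroring the x/y axes of the learnability plots.
    rng = random.Random(seed)
    rand_vec = lambda: tuple(rng.randint(0, 1) for _ in range(n_inputs))
    curve = []
    for n in train_sizes:
        train = [(x, f(x)) for x in (rand_vec() for _ in range(n))]
        test = [(x, f(x)) for x in (rand_vec() for _ in range(n_test))]
        acc = sum(one_nn_predict(train, x) == y for x, y in test) / n_test
        curve.append((n, acc))
    return curve

# "Easy" stand-in target: a majority (threshold) function;
# "hard" stand-in target: parity, which defeats distance-based learners.
majority = lambda x: int(sum(x) > len(x) // 2)
parity   = lambda x: sum(x) % 2
```

Plotting the returned (size, accuracy) pairs for each target reproduces the qualitative shape of the figure: the majority curve rises toward 1.0, while the parity curve hovers near 0.5 regardless of training size.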
Context 2
... The authors use the three ISCAS85 example circuits c432, c499, and c880 to illustrate different levels of difficulty in Boolean learnability in Figure 2. The x-axis represents the number of patterns used to build the learning models, while the y-axis represents the learning accuracy measured on a separate set of randomly generated patterns. Figure 2(a) shows that c432 is easy to learn, Figure 2(b) shows that learning c499 is not effective, and the effectiveness of learning c880 lies in between. Note that how effective the Boolean learning is determines the success of the TTPG methodology. ...
Context 4
... report the justification success rates in Table X. It is interesting to note that in Figure 2(b), c499 is classified as having low learnability in [3]. With the help of ARM-based orderings, we can justify all outputs of c499 with a success rate of 96.6% at k = 100. ...


Citations

... A data-mining engine based on a decision-diagram-based learning approach was first proposed in [6]. The authors show that the learning accuracy of their approach is comparable to that of other state-of-the-art learning techniques. ...
... Mathematical analysis is provided to demonstrate the power of ensemble learning. Then we discuss each constituent technique and conclude on the effectiveness of OBDF learning by comparison with the previous algorithms in [6]. ...
... Our data-learning algorithm resolves these two concerns by constructing an ordered-binary-decision-forest (OBDF). It employs the bootstrap method from [17], followed by ordered nearest neighbor (ONN) learning from [6], to construct the individual learners. The out-of-bag evaluation then helps us decide the weight for each OBDD learner. ...
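The bootstrap/out-of-bag construction described above can be sketched roughly as follows. This is a minimal stand-in: a toy decision-stump base learner replaces the ONN/OBDD learners of [6], and only the bootstrap resampling, out-of-bag weighting, and weighted vote mirror the description:

```python
import random

def fit_stump(samples):
    # Base-learner stand-in: pick the single input bit (possibly inverted)
    # that best matches the output over the given sample.
    n = len(samples[0][0])
    best = None
    for i in range(n):
        for inv in (0, 1):
            acc = sum((x[i] ^ inv) == y for x, y in samples) / len(samples)
            if best is None or acc > best[0]:
                best = (acc, i, inv)
    _, i, inv = best
    return lambda x: x[i] ^ inv

def fit_forest(samples, n_learners=15, seed=0):
    # Draw a bootstrap resample of the simulation data for each learner,
    # then weight each learner by its accuracy on its out-of-bag patterns.
    rng = random.Random(seed)
    forest = []
    for _ in range(n_learners):
        bag = [rng.choice(samples) for _ in samples]
        oob = [s for s in samples if s not in bag]
        h = fit_stump(bag)
        w = (sum(h(x) == y for x, y in oob) / len(oob)) if oob else 0.5
        forest.append((w, h))
    return forest

def forest_predict(forest, x):
    # Weighted majority vote across the ensemble.
    vote = sum(w if h(x) == 1 else -w for w, h in forest)
    return int(vote > 0)
```

The design point that carries over from the citation is that the out-of-bag patterns give each learner an unbiased validation set for free, so no data has to be held out to set the vote weights.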
Conference Paper
Full-text available
Unit-level verification is a critical step to the success of full-chip functional verification for microprocessor designs. In the unit-level verification, a unit is first embedded in a complex software that emulates the behavior of surrounding units, and then a sequence of stimuli is applied to measure the functional coverage. In order to generate such a sequence, designers need to comprehend the relationship between boundaries at the unit under verification and at the inputs to the emulation software. However, figuring out this relationship can be very difficult. Therefore, this paper proposes an incremental learning framework that incorporates an ordered-binary-decision-forest(OBDF) algorithm, to automate estimating the controllability of unit-level signals and to provide full-chip level information for designers to govern these signals. Mathematical analysis shows that the proposed OBDF algorithm has lower model complexity and lower error variance than the previous algorithms. Meanwhile, a commercial microprocessor core is also applied to demonstrate that controllability of input signals on the load/store unit in the microprocessor core can be estimated automatically and information about how to govern these signals can also be extracted successfully.
... In many cases, both directed and constrained random tests are deployed to meet coverage goals. Wen et al. [7] discuss a systematic way to justify test patterns generated automatically. To ease RTL debugging, Hsu et al. [17] extract a behavioral model out of an RTL description combined with a simulation trace and provide an interface to query, trace and assign a value to an arbitrary node. ...
Conference Paper
Transaction-level modeling (TLM) allows a designer to save functional verification effort during the modular refinement of an SoC by reusing the prior implementation of a module as a golden model for state inconsistency detection. One problem in simulation-based verification is the performance and bandwidth overhead of state dump and comparison between two models. In this paper, we propose an efficient fine-grain state inconsistency detection technique that checks the consistency of two states of arbitrary size at sub-transaction (tick) granularity using incremental hashes. At each tick, the hash generates a signature of the entire state, which can be efficiently updated and compared. We evaluate the proposed signature scheme with a FIR filter and a Vorbis decoder and show that very fine-grain state consistency checking is feasible. The hash signature checking increases execution time of Bluespec RTL simulation by 1.2% for the FIR filter and by 2.2% for the Vorbis decoder while correctly detecting any injected state inconsistency.
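The incremental-signature idea in this abstract can be sketched with an XOR-combined hash; the position-dependent word hash below is an assumption for illustration, not the paper's scheme. Because XOR is its own inverse, changing one state word updates the whole-state signature in O(1) instead of rehashing everything:

```python
import hashlib

def word_hash(index, value):
    # Position-dependent 64-bit hash of one state word, so that the same
    # value at different positions contributes differently to the signature.
    h = hashlib.blake2b(f"{index}:{value}".encode(), digest_size=8)
    return int.from_bytes(h.digest(), "big")

class StateSignature:
    # XOR-combined signature of all state words. At each tick, two models
    # compare their `sig` fields instead of dumping and diffing full state.
    def __init__(self, state):
        self.state = list(state)
        self.sig = 0
        for i, v in enumerate(self.state):
            self.sig ^= word_hash(i, v)

    def update(self, index, value):
        # O(1) incremental update: XOR out the old word, XOR in the new one.
        self.sig ^= word_hash(index, self.state[index])
        self.sig ^= word_hash(index, value)
        self.state[index] = value
```

A signature mismatch at a tick flags an inconsistency immediately; matching signatures imply matching states up to the (negligible) 64-bit collision probability.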
Article
We propose a methodology to generate input stimulus to achieve coverage closure using GoldMine, an automatic assertion generation engine that uses data mining and formal verification. GoldMine mines the simulation traces of a behavioral register transfer level (RTL) design using a decision tree based learning algorithm to produce candidate assertions. These candidate assertions are passed to a formal verification engine. If a candidate assertion is false, a counterexample trace is generated. In this paper, we feed these counterexample traces to iteratively refine the original simulation trace data. We introduce an incremental decision tree to mine the new traces in each iteration. The algorithm converges when all the candidate assertions are true. We formally prove that our algorithm will always converge and capture the complete functionality of each output of a sequential design on convergence. We show that our method always results in a monotonic increase in simulation coverage. We also present an output-centric notion of coverage, and argue that we can attain coverage closure with respect to this notion of coverage. We elaborate the technique step by step using a nontrivial arbiter design. Experimental results to validate our arguments are presented on several designs from Rigel, OpenRisc, and SpaceWire. Some practical limitations to achieve 100% coverage and the differences between final decision tree and binary decision diagram are discussed.
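The mine/check/refine loop this abstract describes can be caricatured in a few lines. This toy replaces the decision-tree miner with a trivial "output equals input bit i" candidate generator and the formal engine with exhaustive simulation; only the counterexample-feedback structure mirrors the methodology:

```python
from itertools import product

def mine_candidates(traces, n_inputs):
    # Toy "miner": propose "output == input bit i" for every bit that is
    # consistent with all traces seen so far (decision-tree stand-in).
    return [i for i in range(n_inputs)
            if all(x[i] == y for x, y in traces)]

def check(f, n_inputs, i):
    # Formal-verification stand-in: exhaustively search for a
    # counterexample to the candidate assertion "f(x) == x[i]".
    for x in product((0, 1), repeat=n_inputs):
        if f(x) != x[i]:
            return x  # counterexample input
    return None

def refine(f, n_inputs, traces):
    # Iterate: mine candidates, check each one, and feed every
    # counterexample back into the trace data. Converges when all
    # surviving candidates are true.
    while True:
        cands = mine_candidates(traces, n_inputs)
        cexs = [check(f, n_inputs, i) for i in cands]
        new = [x for x in cexs if x is not None]
        if not new:
            return cands, traces
        traces += [(x, f(x)) for x in new]
```

Each iteration either terminates or adds a counterexample that falsifies at least one current candidate, so the candidate set shrinks monotonically, which is the intuition behind the convergence argument in the abstract.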