
MIS: A Multiple-Level Logic Optimization System

Abstract

MIS is both an interactive and a batch-oriented multilevel logic synthesis and minimization system. MIS starts from combinational logic extracted, typically, from a high-level description of a macrocell. It produces a multilevel set of optimized logic equations preserving the input-output behavior. The system includes both fast and slower (but higher-quality) versions of algorithms for minimizing area, as well as global timing optimization algorithms to meet system-level timing constraints. This paper provides an overview of the system and a description of the algorithms used, along with examples illustrating the input language used for specifying logic and don't-cares. Parts of an industrial chip have been resynthesized using MIS, with favorable results compared to the equivalent manual designs.
... Logic synthesis plays an important role in modern electronic design automation flows, optimizing gate-level netlists and removing redundant logic from them [1]-[3]. Peephole optimization is a divide-and-conquer strategy to maintain the scalability of logic synthesis algorithms: small portions of a circuit, often referred to as windows or cuts, are extracted, optimized independently, and substituted back. ...
... Resubstitution is a logic optimization technique which substitutes a node in the network with another existing node, or with newly-created nodes constructed upon other existing nodes [3]. Resubstitution for AIG size minimization was first proposed in [4], where windows of no more than 16 inputs are constructed to collect structurally-proximate divisor nodes and to perform complete local simulation. ...
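The window-based resubstitution described above can be sketched numerically. The snippet below is a hypothetical illustration, not the cited implementation: node functions inside a small window are represented as truth-table bitmasks, and the target is matched against a single divisor (possibly inverted) or an AND/OR of two divisors.

```python
from itertools import combinations

def simulate(n):
    """Truth tables of the n window inputs as bitmasks over 2^n rows."""
    return [sum(1 << r for r in range(1 << n) if (r >> i) & 1)
            for i in range(n)]

def resubstitute(target, divisors, n):
    """Return a small expression over divisor indices matching `target`,
    or None: 0-resub (one divisor, possibly inverted), then 1-resub
    (AND/OR of two divisors)."""
    mask = (1 << (1 << n)) - 1
    for i, d in enumerate(divisors):
        if d == target:
            return ("buf", i)
        if d ^ mask == target:
            return ("not", i)
    for (i, a), (j, b) in combinations(enumerate(divisors), 2):
        if a & b == target:
            return ("and", i, j)
        if a | b == target:
            return ("or", i, j)
    return None

# Example: in a 2-input window, x0 AND x1 is found as a 1-resub.
x = simulate(2)
assert resubstitute(x[0] & x[1], x, 2) == ("and", 0, 1)
```

A real engine would also collect internal window nodes as divisors and account for don't-cares; this sketch only shows the matching step.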
... The original target function is thus realized by g1 ∨ (¬g13 ∧ ¬g14). In contrast, if a negative unate literal l2 is chosen, the target onset stays the same, whereas a new offset f′_off = f_off ∧ ¬l2 is derived. The dependency circuit is then constructed with an AND gate with negated fanins on top. ...
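The rule described in the snippet above, that pulling out a negative unate literal keeps the target onset and restricts the offset to rows where the literal is false, can be checked numerically. This is an illustrative sketch with our own names (`remainder_target`, `f_rem`), not code from the cited paper; functions are 3-input truth tables encoded as 8-bit masks.

```python
MASK = 0xFF                      # 2^3 rows for inputs a, b, c
A, B, C = 0xAA, 0xCC, 0xF0       # truth tables of the three inputs

def remainder_target(f_on, f_off, lit):
    """New (onset, offset) for the remainder function when literal `lit`
    is pulled out negatively: the onset is unchanged, while the offset is
    restricted to rows where lit = 0 (rows with lit = 1 become don't-cares)."""
    assert f_on & lit == 0       # lit never holds on the target onset
    return f_on, f_off & (MASK ^ lit)

# Example: f = (NOT c) AND (a OR b); pull out the negative literal c.
f_on = (A | B) & (MASK ^ C)
f_off = MASK ^ f_on
new_on, new_off = remainder_target(f_on, f_off, C)
f_rem = MASK ^ new_off           # largest remainder compatible with the target
realized = (MASK ^ C) & f_rem    # top AND gate with the negated literal as fanin
assert realized & f_on == f_on and realized & f_off == 0
```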
Article
Full-text available
Logic resynthesis is one of the core problems in modern peephole logic optimization algorithms. Given a target function and a set of existing functions, logic resynthesis asks for a circuit reusing some of the existing functions and generating the target. While exact methods such as enumeration and SAT-based synthesis guarantee optimal solutions, limitations on the problem size are inevitable due to scalability concerns. In this work, we propose heuristic resynthesis algorithms for AND-based, majority-based, and multiplexer-based circuits, which are scalable in all aspects. Used as the core of high-effort optimization, our heuristic resynthesis algorithms play a key role in enabling 2-3% further size reduction on benchmarks that are already processed by state-of-the-art optimization flows.
Article
Full-text available
Objectives. The problem of choosing the best methods and programs for the circuit implementation of sparse systems of disjunctive normal forms (DNFs) of completely defined Boolean functions in digital ASICs (Application-Specific Integrated Circuits) is considered. In the matrix form of a sparse DNF system, the ternary matrix specifying the elementary conjunctions contains a large proportion of undefined values, corresponding to missing literals of the Boolean input variables, and the Boolean matrix specifying the occurrences of conjunctions in the DNFs of the functions contains a large proportion of zeros. Methods. Various methods of technology-independent logic optimization, performed at the first stage of logic synthesis, are investigated: joint minimization of systems of functions in the DNF class; separate and joint minimization in classes of multilevel representations in the form of Boolean networks and BDDs using mutually inverse cofactors; partitioning of a system of functions into subsystems with a limited number of input variables; and block covering of DNF systems, aimed at minimizing the total area of the blocks forming the cover. Results. When sparse DNF systems of Boolean functions are implemented in an ASIC, technology-independent optimization can use, alongside traditional joint minimization in the DNF class, methods that optimize multilevel representations based on Shannon expansions. Separate minimization, and joint minimization of the entire system as a whole, turn out to be less effective than block partitions and covers of the DNF system followed by minimization of multilevel representations. Circuits synthesized from minimized Boolean-network representations often have a smaller area than circuits obtained from minimized BDD representations. Conclusion.
For digital ASIC design, a combined approach proves effective: block-covering programs are applied to the DNF system first, followed by programs that minimize the multilevel block representations in the form of Boolean networks based on Shannon expansions.
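The Shannon expansion f = x·f_x ∨ ¬x·f_¬x underlying the multilevel representations discussed in these abstracts can be illustrated compactly on truth-table bitmasks. This is a minimal sketch, not any of the programs evaluated above; `var_mask`, `cofactors`, and `shannon` are illustrative names.

```python
def var_mask(i, n):
    """Truth table of input variable i over 2^n rows, as a bitmask."""
    return sum(1 << r for r in range(1 << n) if (r >> i) & 1)

def cofactors(f, i, n):
    """Positive and negative cofactors of f with respect to variable i."""
    full = (1 << (1 << n)) - 1
    x = var_mask(i, n)
    hi, lo = f & x, f & (full ^ x)
    pos = (hi | (hi >> (1 << i))) & full   # copy the x=1 half onto the x=0 half
    neg = (lo | (lo << (1 << i))) & full   # and vice versa
    return pos, neg

def shannon(f, i, n):
    """Rebuild f from its cofactors: f = x*f_x OR (NOT x)*f_notx."""
    full = (1 << (1 << n)) - 1
    x = var_mask(i, n)
    pos, neg = cofactors(f, i, n)
    return (x & pos) | ((full ^ x) & neg)

# The expansion is exact: rebuilding from the cofactors recovers f.
assert shannon(0xB4, 1, 3) == 0xB4
```

Applied recursively, this expansion yields exactly the complete Shannon-expansion formulas (and, with sharing, the BDDs) compared in the studies above.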
Article
Full-text available
The results of experimental studies of the effectiveness of programs for minimizing multilevel representations of systems of Boolean functions, carried out during the synthesis of combinational circuits in a custom CMOS VLSI design library, are presented. The efficiency of joint and separate minimization programs is investigated; these use Shannon expansions to construct multilevel representations of systems of fully defined Boolean functions as interconnected systems of logic equations: complete or abbreviated Shannon-expansion formulas, or formulas corresponding to Boolean networks. The results are compared with experiments using various types of joint minimization alone, and with solutions obtained by a program for joint and separate minimization of Boolean function systems in the DNF class. It is shown that joint minimization of multilevel representations more often yields circuits of smaller area, while separate minimization more often yields circuits with lower delay.
Article
With the rise of data-intensive applications, traditional computing paradigms have hit the memory wall. In-memory computing using emerging nonvolatile memory (NVM) technology is a promising strategy for overcoming the limitations of the von Neumann architecture. In-memory computing with NVM devices has been explored in both the analog and the digital domain. Analog in-memory computing can perform matrix-vector multiplication (MVM) in an extremely energy-efficient manner, but it is prone to errors and the resulting precision is therefore low. Digital in-memory computing, by contrast, is a viable option for accelerating scientific computations that require deterministic precision. In recent years, several digital in-memory computing styles have been proposed. Unfortunately, state-of-the-art digital in-memory computing styles rely on repeated WRITE operations, which involve switching NVM devices; WRITE operations in NVM cells are expensive in terms of energy, latency, and device endurance. In this article, we propose a READ-based in-memory computing framework called STREAM, which performs streaming-based data processing for data-intensive applications. The STREAM framework consists of a synthesis tool that decomposes an arbitrary Boolean function into in-memory compute kernels. Two synthesis approaches are proposed to generate READ-based in-memory compute kernels using data structures from logic synthesis, and a hardware/software co-design technique is developed to minimize inter-crossbar data communication. The STREAM framework is evaluated using circuits from the ISCAS85 benchmark suite and SuiteSparse scientific-computing applications. Compared with a state-of-the-art in-memory computing framework, the proposed framework improves latency and energy by up to 200× and 20×, respectively.
Article
Full-text available
An approach is described for the minimization of multilevel logic circuits. A multilevel representation of a block of combinational logic is defined, called a Boolean network. A procedure is then proposed, called ESPRESSO-MLD, to transform a given Boolean network into a prime, irredundant, and R-minimal form. This procedure rests on the extension of the notions of primality and irredundancy, previously used only for two-level logic minimization, to combinational multilevel logic circuits. The authors introduce the concept of R-minimality, which implies minimality with respect to cube reshaping, and demonstrate the crucial role played by this concept in multilevel minimization. Theorems are given that prove the correctness of the proposed procedure. Finally, it is shown that prime and irredundant multilevel logic circuits are 100% testable for single stuck-at faults on inputs and outputs, and that these tests are provided as a byproduct of the minimization.
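The two-level notion of irredundancy that the paper generalizes can be shown in a few lines: a cube is redundant exactly when the remaining cubes already cover all of its minterms. Below is a toy sketch on 3-input truth-table bitmasks, our own illustration rather than the paper's procedure.

```python
MASK = 0xFF                      # 2^3 rows
A, B, C = 0xAA, 0xCC, 0xF0       # truth tables of inputs a, b, c

def irredundant(cover):
    """Greedily drop cubes contained in the OR of the others
    (order-dependent, unlike a true irredundant-cover computation)."""
    kept = list(cover)
    i = 0
    while i < len(kept):
        rest = 0
        for j, other in enumerate(kept):
            if j != i:
                rest |= other
        if kept[i] & ~rest & MASK == 0:
            kept.pop(i)          # adds no minterms: redundant
        else:
            i += 1
    return kept

# f = ab + a'c + bc: the consensus term bc is redundant.
cover = [A & B, (MASK ^ A) & C, B & C]
assert irredundant(cover) == [A & B, (MASK ^ A) & C]
```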
Article
A novel approach is provided for problems of factorization and decomposition of logic functions, which are an important part of multilevel logic optimization. The key concept is to formulate these problems as rectangular covering problems. This approach has the potential to provide better solutions as it relaxes some of the constraints of a purely algebraic approach. An algorithm to solve the rectangular covering problem is presented, and both factorization and common-cube extraction are formulated as rectangular covering problems.
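The rectangle-covering view can be made concrete: with cubes as rows and literals as columns of an incidence matrix, an all-ones rectangle is a cube shared by several product terms. Below is a hypothetical greedy sketch, not the paper's algorithm; its value function is a crude literal-savings count.

```python
from itertools import combinations

def best_common_cube(cubes):
    """cubes: list of frozensets of single-letter literals. Return the
    common cube (a rectangle of the cube-literal matrix) shared by the
    most cubes, weighted by a crude literals-saved count."""
    best, best_val = set(), 0
    for a, b in combinations(cubes, 2):
        seed = a & b                             # candidate column set
        if len(seed) < 2:
            continue
        rows = [c for c in cubes if seed <= c]   # rows covered by the rectangle
        val = (len(rows) - 1) * len(seed)        # literals saved by factoring
        if val > best_val:
            best, best_val = set(seed), val
    return best

# F = abc + abd + ae: the rectangle {abc, abd} x {a, b} extracts cube ab.
cubes = [frozenset("abc"), frozenset("abd"), frozenset("ae")]
assert best_common_cube(cubes) == {"a", "b"}
```

A full rectangle-covering solver would weigh overlapping rectangles against each other; this sketch only finds one good rectangle seeded by cube pairs.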
Article
A description is given of SPUR (Symbolic Processing Using RISCs), a multiprocessor workstation being developed at the University of California at Berkeley to conduct parallel processing research. Its development is part of a multi-year effort to study hardware and software issues in multiprocessing in general, and parallel processing in Lisp in particular. The focus is on the initial architectural research and development of SPUR. The memory system, CPU, and floating-point coprocessor are examined in detail.
Book
1. Introduction
   1.1 Design Styles for VLSI Systems
   1.2 Automatic Logic Synthesis
   1.3 PLA Implementation
   1.4 History of Logic Minimization
   1.5 ESPRESSO-II
   1.6 Organization of the Book
2. Basic Definitions
   2.1 Operations on Logic Functions
   2.2 Algebraic Representation of a Logic Function
   2.3 Cubes and Covers
3. Decomposition and Unate Functions
   3.1 Cofactors and the Shannon Expansion
   3.2 Merging
   3.3 Unate Functions
   3.4 The Choice of the Splitting Variable
   3.5 Unate Complementation
   3.6 SIMPLIFY
4. The ESPRESSO Minimization Loop and Algorithms
   4.0 Introduction
   4.1 Complementation
   4.2 Tautology
      4.2.1 Vanilla Recursive Tautology
      4.2.2 Efficiency Results for Tautology
      4.2.3 Improving the Efficiency of Tautology
      4.2.4 Tautology for Multiple-Output Functions
   4.3 Expand
      4.3.1 The Blocking Matrix
      4.3.2 The Covering Matrix
      4.3.3 Multiple-Output Functions
      4.3.4 Reduction of the Blocking and Covering Matrices
      4.3.5 The Raising Set and Maximal Feasible Covering Set
      4.3.6 The Endgame
      4.3.7 The Primality of c+
   4.4 Essential Primes
   4.5 Irredundant Cover
   4.6 Reduction
      4.6.1 The Unate Recursive Paradigm for Reduction
      4.6.2 Establishing the Recursive Paradigm
      4.6.3 The Unate Case
   4.7 Lastgasp
   4.8 Makesparse
   4.9 Output Splitting
5. Multiple-Valued Minimization
6. Experimental Results
   6.1 Analysis of Raw Data for ESPRESSO-IIAPL
   6.2 Analysis of Algorithms
   6.3 Optimality of ESPRESSO-II Results
7. Comparisons and Conclusions
   7.1 Qualitative Evaluation of Algorithms of ESPRESSO-II
   7.2 Comparison with ESPRESSO-IIC
   7.3 Comparison of ESPRESSO-II with Other Programs
   7.4 Other Applications of Logic Minimization
   7.5 Directions for Future Research
References
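As a taste of the book's material, the "vanilla recursive tautology" check of Section 4.2.1 can be sketched in a few lines: a cover is a tautology iff both of its Shannon cofactors are. This toy version is our own, far simpler than ESPRESSO-II's, and represents cubes as dicts from variable name to required value (missing variables are don't-cares).

```python
def cofactor(cover, var, val):
    """Cofactor of a cube cover: keep cubes consistent with var = val,
    dropping the variable. Cubes are dicts {variable: required value}."""
    return [{v: b for v, b in cube.items() if v != var}
            for cube in cover if cube.get(var, val) == val]

def tautology(cover, variables):
    """Is the cover identically 1? True iff both cofactors are tautologies."""
    if any(not cube for cube in cover):
        return True                  # an all-don't-care cube covers everything
    if not variables:
        return False
    v, rest = variables[0], variables[1:]
    return (tautology(cofactor(cover, v, 0), rest) and
            tautology(cofactor(cover, v, 1), rest))

# x OR (NOT x) is a tautology; x alone is not.
assert tautology([{"x": 1}, {"x": 0}], ["x"])
assert not tautology([{"x": 1}], ["x"])
```

ESPRESSO-II's version adds unate reductions and splitting-variable heuristics (Sections 3.4 and 4.2.3) to make this recursion fast in practice.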