Gagandeep Singh's research while affiliated with University of Illinois, Urbana-Champaign and other places


Publications (11)


Input-Relational Verification of Deep Neural Networks
  • Article

June 2024 · 3 Reads · Proceedings of the ACM on Programming Languages

Debangshu Banerjee · Changming Xu · Gagandeep Singh

We consider the verification of input-relational properties defined over deep neural networks (DNNs), such as robustness against universal adversarial perturbations, monotonicity, etc. Precise verification of these properties requires reasoning about multiple executions of the same DNN. We introduce a novel concept of difference tracking to compute the difference between the outputs of two executions of the same DNN at all layers. We design a new abstract domain, DiffPoly, for efficient difference tracking that can scale to large DNNs. DiffPoly is equipped with custom abstract transformers for common activation functions (ReLU, Tanh, Sigmoid, etc.) and affine layers, and can create precise linear cross-execution constraints. We implement an input-relational verifier for DNNs called RaVeN, which uses DiffPoly and linear program formulations to handle a wide range of input-relational properties. Our experimental results on challenging benchmarks show that by leveraging precise linear constraints defined over multiple executions of the DNN, RaVeN gains substantial precision over baselines on a wide range of datasets, networks, and input-relational properties.
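The difference-tracking idea admits a compact sketch. The version below is a hypothetical interval-only variant (DiffPoly itself keeps relational linear constraints and custom transformers, which are substantially more precise); affine_diff and relu_diff are illustrative names, not RaVeN's API:

```python
import numpy as np

def affine_diff(W, d_lo, d_hi):
    """Difference of two executions through an affine layer z = W x + b:
    the bias cancels, so delta_out = W @ delta_in exactly."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ d_lo + Wn @ d_hi, Wp @ d_hi + Wn @ d_lo

def relu_diff(d_lo, d_hi):
    """ReLU is monotone and 1-Lipschitz, so ReLU(a) - ReLU(b) always lies
    between min(a - b, 0) and max(a - b, 0): the difference interval can
    only shrink toward zero. Sound, but coarser than DiffPoly's transformers,
    which also exploit the pre-activation bounds of each execution."""
    return np.minimum(d_lo, 0.0), np.maximum(d_hi, 0.0)
```

Starting from d_lo = -eps and d_hi = +eps for two inputs within an L-infinity ball of radius eps and alternating the two transformers layer by layer yields sound difference bounds at every layer.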


Relational DNN Verification With Cross Executional Bound Refinement
  • Preprint
  • File available

May 2024 · 1 Read

We focus on verifying relational properties defined over deep neural networks (DNNs), such as robustness against universal adversarial perturbations (UAPs), certified worst-case Hamming distance for binary string classification, etc. Precise verification of these properties requires reasoning about multiple executions of the same DNN. However, most existing works in DNN verification only handle properties defined over single executions and are therefore imprecise for relational properties. Although a few recent works on relational DNN verification capture linear dependencies between the inputs of multiple executions, they do not leverage dependencies between the outputs of hidden layers, producing imprecise results. We develop a scalable relational verifier, RACoon, that utilizes cross-execution dependencies at all layers of the DNN, gaining substantial precision over SOTA baselines on a wide range of datasets, networks, and relational properties.
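The gain from cross-executional reasoning can be made concrete at the output layer. Suppose a per-execution analyzer has already produced linear lower bounds m_i(delta) = c[i] + G[i] @ delta on each execution's classification margin as a function of a shared perturbation delta. A single LP then certifies that no universal perturbation falsifies all executions at once. This is an illustrative setup, not RACoon's actual interface (which refines bounds across executions at every layer):

```python
import numpy as np
from scipy.optimize import linprog

def no_shared_counterexample(c, G, eps):
    """Solve  min over (delta, t) of t  s.t.  t >= c[i] + G[i] @ delta,
    |delta_j| <= eps.  The optimum equals min_delta max_i m_i(delta); if it
    is positive, every shared delta leaves at least one margin positive,
    so no single perturbation fools all executions."""
    k, d = G.shape
    obj = np.zeros(d + 1)
    obj[-1] = 1.0                                  # minimize t
    A = np.hstack([G, -np.ones((k, 1))])           # G[i] @ delta - t <= -c[i]
    res = linprog(obj, A_ub=A, b_ub=-np.asarray(c),
                  bounds=[(-eps, eps)] * d + [(None, None)], method="highs")
    return res.fun > 0

# Each margin alone can be driven to -0.5, yet no shared delta kills both:
c, G = np.array([0.5, 0.5]), np.array([[1.0], [-1.0]])
print(no_shared_counterexample(c, G, eps=1.0))     # True
```

Per-execution analysis would report each margin individually falsifiable; the joint LP proves the relational property.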


Synthesizing Precise Static Analyzers for Automatic Differentiation

October 2023 · 1 Read · 2 Citations · Proceedings of the ACM on Programming Languages

We present Pasado, a technique for synthesizing precise static analyzers for Automatic Differentiation. Our technique allows one to automatically construct a static analyzer specialized for the Chain Rule, Product Rule, and Quotient Rule computations of Automatic Differentiation, in a way that abstracts all of the nonlinear operations of each respective rule simultaneously. By directly synthesizing an abstract transformer for the composite expressions of these three most common rules of AD, we obtain significant precision improvements compared to prior works, which compose standard abstract transformers together suboptimally. We prove our synthesized static analyzers sound and additionally demonstrate the generality of our approach by instantiating these AD static analyzers with different nonlinear functions, different abstract domains (both intervals and zonotopes), and both forward-mode and reverse-mode AD. We evaluate Pasado on multiple case studies, namely soundly computing bounds on a neural network's local Lipschitz constant, soundly bounding the sensitivities of financial models, certifying monotonicity, and lastly, bounding sensitivities of the solutions of differential equations from climate science and chemistry for verified ranges of initial conditions and parameters. The local Lipschitz constants computed by Pasado on our largest CNN are up to 2750× more precise compared to the existing state-of-the-art zonotope analysis. The bounds obtained on the sensitivities of the climate, chemical, and financial differential equation solutions are between 1.31-2.81× more precise (on average) compared to a state-of-the-art zonotope analysis.
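The precision loss that Pasado targets is easy to reproduce. Composing individually tight interval transformers for the chain-rule factor sigma'(x) = sigma(x)(1 - sigma(x)) forgets that the two factors are perfectly correlated; a fused transformer for the whole composite expression does not. A minimal illustration (not Pasado's synthesis procedure):

```python
import numpy as np

def imul(a, b):
    """Sound interval product; treats its operands as independent."""
    ps = (a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1])
    return (min(ps), max(ps))

def sigmoid_iv(x):
    """Sigmoid is monotone, so endpoint evaluation gives the exact range."""
    s = lambda t: 1.0 / (1.0 + np.exp(-t))
    return (s(x[0]), s(x[1]))

def dsigmoid_composed(x):
    """sigma'(x) = sigma(x) * (1 - sigma(x)) via composed transformers:
    each step is tight in isolation, but the product forgets correlation."""
    s = sigmoid_iv(x)
    return imul(s, (1.0 - s[1], 1.0 - s[0]))

print(dsigmoid_composed((-5.0, 5.0)))  # ~(4.5e-05, 0.987): very loose
# A fused transformer can exploit sigma' <= 1/4 and unimodality to return
# the tight enclosure (sigma'(-5), 1/4) ~ (0.00665, 0.25).
```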


Incremental Verification of Neural Networks

June 2023 · 3 Reads · 3 Citations · Proceedings of the ACM on Programming Languages

Complete verification of deep neural networks (DNNs) can exactly determine whether the DNN satisfies a desired trustworthy property (e.g., robustness, fairness) on an infinite set of inputs. Despite tremendous progress in improving the scalability of complete verifiers over the years on individual DNNs, they are inherently inefficient when a deployed DNN is updated to improve its inference speed or accuracy, because the expensive verifier needs to be run from scratch on the updated DNN. To improve efficiency, we propose a new, general framework for incremental and complete DNN verification based on the design of novel theory, data structures, and algorithms. Our contributions, implemented in a tool named IVAN, yield an overall geometric mean speedup of 2.4x for verifying challenging MNIST and CIFAR10 classifiers and a geometric mean speedup of 3.8x for the ACAS-XU classifiers over the state-of-the-art baselines.
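A toy version of the incremental scheme: record the branch-and-bound splits as a specification tree during the first verification run, then replay that tree on the updated network so that only the leaf subproblems are re-analyzed. This is a sketch of the high-level idea under simplifying assumptions (IVAN additionally refines the tree, e.g., by removing ineffective splits); analyze and branch are stand-ins for the bounding analyzer and branching heuristic:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SpecNode:
    """A node of the specification tree: one subproblem created by branching."""
    spec: object
    children: List["SpecNode"] = field(default_factory=list)

def bab(analyze, branch, node: SpecNode) -> bool:
    """Branch and bound that reuses a specification tree from a prior run.
    analyze(spec) -> True (verified) / False (falsified) / None (unknown);
    branch(spec) splits an unsolved spec. Stored splits are replayed, so on
    the updated network work is spent only where the old proof breaks."""
    if node.children:                      # replay a split from the old proof
        return all(bab(analyze, branch, c) for c in node.children)
    result = analyze(node.spec)
    if result is not None:
        return result
    node.children = [SpecNode(s) for s in branch(node.spec)]
    return all(bab(analyze, branch, c) for c in node.children)
```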


Incremental Randomized Smoothing Certification

May 2023 · 2 Reads

Shubham Ugare · Tarun Suresh · Debangshu Banerjee · [...]

Randomized smoothing-based certification is an effective approach for obtaining robustness certificates of deep neural networks (DNNs) against adversarial attacks. This method constructs a smoothed DNN model and certifies its robustness through statistical sampling, but it is computationally expensive, especially when certifying with a large number of samples. Furthermore, when the smoothed model is modified (e.g., quantized or pruned), certification guarantees may not hold for the modified DNN, and recertifying from scratch can be prohibitively expensive. We present IRS, the first approach for incremental robustness certification for randomized smoothing. We show how to reuse the certification guarantees for the original smoothed model to certify an approximated model with very few samples. IRS significantly reduces the computational cost of certifying modified DNNs while maintaining strong robustness guarantees. We experimentally demonstrate the effectiveness of our approach, showing up to 3x certification speedup over applying randomized smoothing to the approximate model from scratch.
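The incremental idea admits a compact sketch. Suppose the original smoothed model was certified with a lower confidence bound pA_lower on its top-class probability. Rather than re-running the full sampling on the modified (e.g., quantized) model, one can estimate, from comparatively few noisy samples, an upper confidence bound on the probability that the two models disagree, and subtract it. The code below is a simplified illustration of this reuse, using the standard Cohen-style radius sigma * Phi^{-1}(p); it is not IRS's exact estimator:

```python
from scipy.stats import binomtest, norm

def incremental_radius(pA_lower, disagreements, n, sigma, alpha=0.001):
    """Certify the modified smoothed model by reusing pA_lower from the
    original model. zeta_upper is a high-confidence upper bound on
    P(f(x + noise) != f_mod(x + noise)), estimated from n noisy samples on
    which the two models disagreed `disagreements` times."""
    zeta_upper = binomtest(disagreements, n).proportion_ci(
        confidence_level=1 - alpha, method="exact").high
    p_mod = pA_lower - zeta_upper          # union bound on the modified model
    return sigma * norm.ppf(p_mod) if p_mod > 0.5 else 0.0

# e.g. pA_lower = 0.92 from the original certification, and 3 disagreements
# observed in only 500 samples on the quantized model:
print(incremental_radius(0.92, 3, 500, sigma=0.5))
```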


Fig. 1. Workflow of IVAN from left to right. IVAN takes the original network N, input specification φ, and output specification ψ. It is built on top of a BaB-based complete verifier that uses an analyzer A for bounding and a heuristic H for branching. IVAN refines the specification tree obtained from verifying N to create the tree for verifying the updated network.
Fig. 3. Steps in the branch-and-bound algorithm for complete verification of N. The nodes are labeled with a name and the lower bound LB(n). The nodes in the specification tree are annotated with their specifications, and the edges are labeled with the branching predicates. Each BaB step partitions unsolved specifications in T_i into specification splits in T_{i+1}. The proof is complete when all specification splits corresponding to the leaf nodes are solved.
Fig. 4. BaB specification trees for the various techniques proposed for incremental verification.
Fig. 5. IVAN removes the ineffective split r1 at n0 and constructs a new specification tree T'.
Fig. 6. IVAN speedup for the verification of local robustness properties on FCN-MNIST.

Incremental Verification of Neural Networks

April 2023 · 53 Reads

Preprint version of the June 2023 article of the same title listed above; the abstract is identical.


Interpreting Robustness Proofs of Deep Neural Networks

January 2023 · 50 Reads

In recent years, numerous methods have been developed to formally verify the robustness of deep neural networks (DNNs). Though the proposed techniques are effective in providing mathematical guarantees about the DNN's behavior, it is not clear whether the proofs generated by these methods are human-interpretable. In this paper, we bridge this gap by developing new concepts, algorithms, and representations to generate human-understandable interpretations of the proofs. Leveraging the proposed method, we show that the robustness proofs of standard DNNs rely on spurious input features, while the proofs of DNNs trained to be provably robust filter out even semantically meaningful features. The proofs for DNNs that combine adversarial and provably robust training are the most effective at selectively filtering out spurious features while relying on human-understandable input features.


A general construction for abstract interpretation of higher-order automatic differentiation

October 2022 · 8 Reads · 7 Citations · Proceedings of the ACM on Programming Languages

We present a novel, general construction to abstractly interpret higher-order automatic differentiation (AD). Our construction allows one to instantiate an abstract interpreter for computing derivatives up to a chosen order. Furthermore, since our construction reduces the problem of abstractly reasoning about derivatives to abstractly reasoning about real-valued straight-line programs, it can be instantiated with almost any numerical abstract domain, both relational and non-relational. We formally establish the soundness of this construction. We implement our technique by instantiating our construction with both the non-relational interval domain and the relational zonotope domain to compute both first and higher-order derivatives. In the latter case, we are the first to apply a relational domain to automatic differentiation for abstracting higher-order derivatives, and hence we are also the first abstract interpretation work to track correlations across not only different variables, but different orders of derivatives. We evaluate these instantiations on multiple case studies, namely robustly explaining a neural network and more precisely computing a neural network’s Lipschitz constant. For robust interpretation, first and second derivatives computed via zonotope AD are up to 4.76× and 6.98× more precise, respectively, compared to interval AD. For Lipschitz certification, we obtain bounds that are up to 11,850× more precise with zonotopes, compared to the state-of-the-art interval-based tool.
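The construction's reduction (higher-order AD as a straight-line real program, then any abstract domain) can be illustrated with second-order forward mode over the interval domain. A hypothetical sketch; the paper's zonotope instantiation additionally tracks correlations across variables and across derivative orders, which plain intervals cannot:

```python
def imul(a, b):
    """Sound interval product."""
    ps = (a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1])
    return (min(ps), max(ps))

def iadd(a, b):
    return (a[0] + b[0], a[1] + b[1])

class Taylor2:
    """Interval enclosures of (f, f', f''). Each arithmetic rule below is the
    straight-line real program for the derivative, interpreted abstractly;
    swapping the interval ops for zonotope ops gives the relational variant."""
    def __init__(self, val, d1, d2):
        self.val, self.d1, self.d2 = val, d1, d2

    def __mul__(self, other):
        # Leibniz rule at order 2: (uv)'' = u''v + 2 u'v' + u v''
        d1 = iadd(imul(self.d1, other.val), imul(self.val, other.d1))
        cross = imul(self.d1, other.d1)
        d2 = iadd(iadd(imul(self.d2, other.val), (2*cross[0], 2*cross[1])),
                  imul(self.val, other.d2))
        return Taylor2(imul(self.val, other.val), d1, d2)

x = Taylor2((-1.0, 2.0), (1.0, 1.0), (0.0, 0.0))   # the variable x on [-1, 2]
y = x * x
print(y.d1, y.d2)   # first derivative in [-2, 4]; second derivative exactly (2, 2)
```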


Training Certifiably Robust Neural Networks Against Semantic Perturbations

July 2022 · 84 Reads

Semantic image perturbations, such as scaling and rotation, have been shown to easily deceive deep neural networks (DNNs). Hence, training DNNs to be certifiably robust to these perturbations is critical. However, no prior work has been able to incorporate the objective of deterministic semantic robustness into the training procedure, as existing deterministic semantic verifiers are exceedingly slow. To address these challenges, we propose Certified Semantic Training (CST), the first training framework for deterministic certified robustness against semantic image perturbations. Our framework leverages a novel GPU-optimized verifier that, unlike existing works, is fast enough for use in training. Our results show that networks trained via CST consistently achieve both better provable semantic robustness and clean accuracy, compared to networks trained via baselines based on existing works.
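The training-loop idea can be sketched with a generic interval-bound robust loss: propagate sound bounds of an input region through the network and penalize the worst-case logits. This is a deliberately simplified stand-in; CST's actual verifier computes GPU-optimized deterministic bounds for semantic perturbations (rotation, scaling, etc.), whereas here the input box lo, hi is assumed to already enclose the semantically perturbed images:

```python
import torch

def ibp_affine(layer: torch.nn.Linear, lo, hi):
    """Sound interval propagation through an affine layer."""
    mid, rad = (lo + hi) / 2, (hi - lo) / 2
    center = mid @ layer.weight.T + layer.bias
    radius = rad @ layer.weight.abs().T
    return center - radius, center + radius

def robust_loss(layers, lo, hi, label: int):
    """Worst-case cross-entropy: upper-bound the wrong-class logits and
    lower-bound the true-class logit, then train on that pessimistic vector."""
    for layer in layers:
        if isinstance(layer, torch.nn.Linear):
            lo, hi = ibp_affine(layer, lo, hi)
        else:                                  # monotone activation, e.g. ReLU
            lo, hi = layer(lo), layer(hi)
    worst = hi.clone()
    worst[:, label] = lo[:, label]
    target = torch.full((lo.shape[0],), label, dtype=torch.long)
    return torch.nn.functional.cross_entropy(worst, target)
```

Minimizing such a loss trains the network so that the verifier's bounds certify the property, which is why the verifier must be fast enough to run inside every training step.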


Proof transfer for fast certification of multiple approximate neural networks

April 2022 · 8 Reads · 7 Citations · Proceedings of the ACM on Programming Languages

Developers of machine learning applications often apply post-training neural network optimizations, such as quantization and pruning, that approximate a neural network to speed up inference and reduce energy consumption while maintaining high accuracy and robustness. Despite a recent surge in techniques for the robustness verification of neural networks, a major limitation of almost all state-of-the-art approaches is that the verification needs to be run from scratch every time the network is even slightly modified. Running precise end-to-end verification from scratch for every new network is expensive and impractical in many scenarios that use or compare multiple approximate network versions, where the robustness of all the networks needs to be verified efficiently. We present FANC, the first general technique for transferring proofs between a given network and its multiple approximate versions without compromising verifier precision. To reuse the proofs obtained when verifying the original network, FANC generates a set of templates, connected symbolic shapes at intermediate layers of the original network, that capture the proof of the property to be verified. We present novel algorithms for generating and transforming templates that generalize to a broad range of approximate networks and reduce the verification cost. We present a comprehensive evaluation demonstrating the effectiveness of our approach. We consider a diverse set of networks obtained by applying popular approximation techniques such as quantization and pruning to fully-connected and convolutional architectures, and verify their robustness against different adversarial attacks such as adversarial patches, L0, rotation, and brightening. Our results indicate that FANC can significantly speed up verification with the state-of-the-art verifier DeepZ by up to 4.1x.
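At its core, proof transfer reduces to a containment check: after verifying the original network, store shapes at an intermediate layer that are sufficient to finish the proof, and for each approximate network only propagate up to that layer and check inclusion. A box-based sketch under simplifying assumptions (FANC's templates are richer connected symbolic shapes, and prop_to_k is a hypothetical bound propagator for the approximate network):

```python
import numpy as np

def try_transfer(prop_to_k, template_lo, template_hi, lo_in, hi_in):
    """Propagate the input region through the approximate network up to
    layer k; if the reached box lies inside the template proven safe on the
    original network, the tail of the original proof transfers and the
    expensive end-to-end verification is skipped."""
    lo_k, hi_k = prop_to_k(lo_in, hi_in)
    if np.all(lo_k >= template_lo) and np.all(hi_k <= template_hi):
        return True       # proof transferred
    return None           # inconclusive: fall back to full verification
```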


Citations (3)


... To improve the efficiency of the certification of g^p, one can incrementally certify it by reusing parts of the certification of g. Indeed, recent works have developed incremental certification techniques based on formal logic reasoning [Wei and Liu, 2021, Ugare et al., 2023] for improving the certification efficiency of f^p by reusing the certification of f. However, these techniques perform incremental deterministic certification that cannot scale to high-dimensional inputs, e.g., ImageNet, or state-of-the-art models, e.g. ...

Reference:

Incremental Randomized Smoothing Certification
Incremental Verification of Neural Networks
  • Citing Article
  • June 2023

Proceedings of the ACM on Programming Languages

... For dynamic variables, in addition to the assigned variable, Diamont updates its interval using the uncertain interval arithmetic defined in Fig. 5. The calc-eps function is used to calculate an expression's maximum error by propagating the accompanying error ε through sub-expressions, similarly to how automatic differentiation propagates dual numbers through arithmetic expressions [32,33]. The confidence in this maximum error is then computed using calc-del (ρ(e) returns the list of variables used in an expression e.) To avoid any assumptions about the independence of the uncertainties (unlike the strict independence assumptions of [7]), Diamont uses the conservative union bound. ...

A general construction for abstract interpretation of higher-order automatic differentiation
  • Citing Article
  • October 2022

Proceedings of the ACM on Programming Languages


A dual number abstraction for static analysis of Clarke Jacobians
  • Citing Article
  • January 2022

Proceedings of the ACM on Programming Languages