On Tractable Representations of Binary Neural Networks

Author(s):  
Weijia Shi ◽  
Andy Shih ◽  
Adnan Darwiche ◽  
Arthur Choi

We consider the compilation of a binary neural network’s decision function into tractable representations such as Ordered Binary Decision Diagrams (OBDDs) and Sentential Decision Diagrams (SDDs). Obtaining this function as an OBDD/SDD facilitates the explanation and formal verification of a neural network’s behavior. First, we consider the task of verifying the robustness of a neural network, and show how to compute the expected robustness of a neural network given an OBDD/SDD representation of it. Next, we consider a more efficient approach for compiling neural networks, based on a pseudo-polynomial time algorithm for compiling a single neuron. We then provide a case study on a handwritten-digits dataset, highlighting how two neural networks trained on the same dataset can have very high accuracies yet very different levels of robustness. Finally, in experiments, we show that it is feasible to obtain compact representations of neural networks as SDDs.
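The pseudo-polynomial compilation idea for a single neuron can be sketched as follows. This is a minimal illustration (not the paper's algorithm verbatim): a neuron is treated as a threshold test sum(w_i * x_i) >= b over binary inputs, and memoizing on (variable index, partial sum) bounds the number of distinct subproblems by n times the range of partial sums, which is the pseudo-polynomial bound. The node layout `('x<i>', low, high)` and all names here are assumptions.

```python
def compile_neuron(weights, threshold):
    """Compile a linear threshold neuron into a reduced OBDD.

    Memoizing on (variable index, accumulated sum) gives at most
    n * (range of partial sums) distinct subproblems.
    """
    n = len(weights)
    cache = {}

    def build(i, acc):
        if i == n:
            return acc >= threshold              # terminal: True / False
        if (i, acc) not in cache:
            lo = build(i + 1, acc)               # branch x_i = 0
            hi = build(i + 1, acc + weights[i])  # branch x_i = 1
            # reduction rule: skip the test when both branches agree
            cache[(i, acc)] = lo if lo == hi else ('x%d' % i, lo, hi)
        return cache[(i, acc)]

    return build(0, 0)


def obdd_eval(node, x):
    """Evaluate the compiled OBDD on a binary input vector x."""
    while isinstance(node, tuple):
        var, lo, hi = node
        node = hi if x[int(var[1:])] else lo
    return node
```

The compiled diagram agrees with the neuron on every input, which is easy to check by brute force for small n.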

1993 ◽  
Vol 03 (01) ◽  
pp. 3-12 ◽  
Author(s):  
DETLEF SIELING ◽  
INGO WEGENER

(Ordered) binary decision diagrams are a powerful representation for Boolean functions and are widely used in logic synthesis, verification, test pattern generation, and as part of CAD tools. NC algorithms are presented for the most important operations on this representation, e.g., evaluation for a given input, minimization, satisfiability, redundancy test, replacement of variables by constants or functions, equivalence test, and synthesis. The algorithms have logarithmic run time on CRCW COMMON PRAMs with a polynomial number of processors.
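Two of the listed operations, evaluation for a given input and the satisfiability test, can be sketched sequentially as below (the paper's contribution is the *parallel* NC versions, which do not fit a short snippet). The node layout `{node_id: (var_index, low_id, high_id)}` with terminal ids `'T'` and `'F'` is an illustrative assumption.

```python
def obdd_eval(nodes, root, x):
    """Follow the unique root-to-terminal path selected by input x."""
    while root in nodes:
        var, lo, hi = nodes[root]
        root = hi if x[var] else lo
    return root == 'T'


def obdd_satisfiable(nodes, root):
    """A reduced OBDD is satisfiable iff some path reaches the 'T' terminal."""
    stack, seen = [root], set()
    while stack:
        u = stack.pop()
        if u == 'T':
            return True
        if u in seen or u not in nodes:
            continue
        seen.add(u)
        _, lo, hi = nodes[u]
        stack.extend((lo, hi))
    return False
```

For example, the conjunction x0 AND x1 is the two-node diagram `{1: (0, 'F', 2), 2: (1, 'F', 'T')}` rooted at node 1.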


2000 ◽  
Vol 103 (1-3) ◽  
pp. 237-258 ◽  
Author(s):  
Martin Sauerhoff ◽  
Ingo Wegener ◽  
Ralph Werchner

Author(s):  
Kenta Shirane ◽  
Takahiro Yamamoto ◽  
Hiroyuki Tomiyama

In this paper, we present a case study on approximate multipliers for an MNIST Convolutional Neural Network (CNN). We apply approximate multipliers of different bit-widths to the convolution layer of the MNIST CNN, evaluate the accuracy of MNIST classification, and analyze the trade-off between the approximate multipliers' area, critical path delay, and classification accuracy. Based on the results of this evaluation and analysis, we propose a design methodology for approximate multipliers. The approximate multipliers consist of a subset of partial products, carefully selected according to the CNN input. With this methodology, we further reduce the area and delay of the multipliers while keeping the MNIST classification accuracy high.
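The partial-product selection idea can be illustrated in a few lines. This sketch simply keeps the `keep` most significant partial-product rows (the paper selects rows according to CNN input statistics, which is a more refined rule); the function names and the fixed bit-width are assumptions.

```python
def approx_mult(a, b, bits=8, keep=4):
    """Approximate a * b by summing only the top `keep` partial-product rows.

    Row i of the exact product contributes (a << i) when bit i of b is set.
    Dropping the low-order rows removes hardware at the cost of a bounded
    under-approximation error of at most a * (2**(bits - keep) - 1).
    """
    result = 0
    for i in range(bits - keep, bits):   # only high-order rows survive
        if (b >> i) & 1:
            result += a << i
    return result
```

The result never exceeds the exact product, and the error bound above follows directly from the dropped rows' maximum contribution.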


Author(s):  
Anna Louise D. Latour ◽  
Behrouz Babaki ◽  
Siegfried Nijssen

A number of data mining problems on probabilistic networks can be modeled as Stochastic Constraint Optimization and Satisfaction Problems, i.e., problems that involve objectives or constraints with a stochastic component. Earlier methods for solving these problems used Ordered Binary Decision Diagrams (OBDDs) to represent constraints on probability distributions, which were decomposed into sets of smaller constraints and solved by Constraint Programming (CP) or Mixed Integer Programming (MIP) solvers. For the specific case of monotonic distributions, we propose an alternative method: a new propagator for a global OBDD-based constraint. We show that this propagator is (sub-)linear in the size of the OBDD, and maintains domain consistency. We experimentally evaluate the effectiveness of this global constraint in comparison to existing decomposition-based approaches, and show how this propagator can be used in combination with another data-mining-specific constraint available in CP systems. As test cases we use problems from the data mining literature.
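The core computation behind such an OBDD-based probability constraint can be sketched as a single pass over the diagram, linear in its size. This is a minimal illustration, not the paper's propagator: the node layout `{node_id: (var_index, low_id, high_id)}` with terminals `'T'`/`'F'` is assumed.

```python
def obdd_probability(nodes, root, p):
    """P[constraint holds] when each x_i is independently true with prob p[i].

    Variables skipped along an edge of a reduced OBDD contribute a factor
    (1 - p_i) + p_i = 1, so they can safely be ignored.
    """
    cache = {}

    def prob(u):
        if u == 'T':
            return 1.0
        if u == 'F':
            return 0.0
        if u not in cache:
            var, lo, hi = nodes[u]
            cache[u] = (1.0 - p[var]) * prob(lo) + p[var] * prob(hi)
        return cache[u]

    return prob(root)
```

A propagator can compare this quantity (and its value under partial assignments) against the constraint's threshold to prune variable domains.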


Author(s):  
Roozbeh Zomorodian ◽  
Hiwa Khaledi ◽  
Mohammad Bagher Ghofrani

In this paper, the application of neural networks to the simulation and optimization of cogeneration systems is presented. The CGAM problem, a benchmark in cogeneration systems, is chosen as a case study. The thermodynamic model includes precise modeling of the whole plant. A static neural network is applied to simulate the steady-state behavior, and the plant is then optimized thermodynamically using a dynamic neural network. A multilayer feed-forward neural network (MFNN) is chosen as the static net and a recurrent neural network as the dynamic net. The steady-state behavior of the CGAM problem is simulated by the MFNN and subsequently optimized by the dynamic net. The static net's results show excellent agreement with the simulator data. The dynamic net shows that, under thermodynamically optimal conditions, σ and the pinch-point temperature difference take their lowest values while the CPR reaches a high value. A sensitivity study shows that turbomachinery efficiencies have the strongest effect on system performance at the optimum condition.
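The static net's forward pass can be sketched as a multilayer feed-forward network with tanh hidden units and a linear output layer. The layer shapes and weights below are placeholders, not the trained CGAM model.

```python
import math

def mfnn_forward(x, layers):
    """Forward pass of a multilayer feed-forward net.

    layers: list of (weight_matrix, bias_vector) pairs applied in order;
    tanh is applied after every layer except the last (linear output).
    """
    for idx, (W, b) in enumerate(layers):
        x = [sum(wij * xj for wij, xj in zip(row, x)) + bi
             for row, bi in zip(W, b)]
        if idx < len(layers) - 1:          # tanh on hidden layers only
            x = [math.tanh(v) for v in x]
    return x
```

In the paper's setting, such a net is trained on simulator data so that it reproduces the plant's steady-state outputs for given operating parameters.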


2001 ◽  
Vol 11 (05) ◽  
pp. 489-496
Author(s):  
AN-PIN CHEN ◽  
CHIEH-YOW CHIANGLIN ◽  
HISU-PEI CHUNG

This paper applies the neural network method to establish an index arbitrage model and compares its arbitrage performance to that of the traditional cost-of-carry arbitrage model. From the empirical results on the Nikkei 225 stock index market, the following conclusions can be drawn: (1) When the basis widens over a period of time, more profit can be obtained from the trend. (2) When the neural network is applied within the index arbitrage model, it earns roughly twice the return of the traditional arbitrage model. (3) When the T_basis trend is volatile, the neural network arbitrage model will ignore the peak; although arbitrageurs then lose the chance to profit, they may reduce market impact risk.
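The traditional benchmark the paper compares against can be sketched as follows: the cost-of-carry fair value F = S * exp((r - d) * tau) and a basis-driven trade signal. The transaction-cost threshold and the sign convention here are illustrative assumptions, not the paper's exact rule.

```python
import math

def fair_futures_price(spot, r, d, tau):
    """Cost-of-carry fair value: spot grown at rate r net of dividend yield d."""
    return spot * math.exp((r - d) * tau)

def basis_signal(futures, spot, r, d, tau, cost):
    """-1: sell rich futures / buy index; +1: the reverse; 0: no trade."""
    mispricing = futures - fair_futures_price(spot, r, d, tau)
    if abs(mispricing) <= cost:
        return 0                  # profit would not cover transaction costs
    return -1 if mispricing > 0 else 1
```

The neural network model in the paper replaces this fixed-threshold rule with a learned mapping from market features to the trade decision.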

