Fault-Tolerant Training of Neural Networks in the Presence of MOS Transistor Mismatches
Date
2001
Authors
Öğrenci, Arif Selçuk
Dündar, Günhan
Balkır, Sina
Publisher
IEEE (Institute of Electrical and Electronics Engineers, Inc.)
Open Access Color
Green Open Access
Yes
Publicly Funded
No
Abstract
Analog techniques are desirable for hardware implementation of neural networks due to their numerous advantages, such as small size, low power, and high speed. However, these advantages are often offset by the difficulty of training analog neural network circuitry. In particular, training the circuitry in software based on hardware models is impaired by statistical variations in the integrated-circuit production process, resulting in performance degradation. In this paper, a new paradigm of noise injection during training for the reduction of this degradation is presented. The variations at the outputs of analog neural network circuitry are modeled based on the transistor-level mismatches occurring between identically designed transistors. Those variations are used as additive noise during training to increase the fault tolerance of the trained neural network. The results of this paradigm are confirmed via numerical experiments and physical measurements and are shown to be superior to the case of adding random noise during training.
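The core idea in the abstract, injecting mismatch-modeled noise at neuron outputs during training, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the network size (2-4-1 tanh MLP on XOR), the learning rate, and the noise standard deviation `MISMATCH_STD` are all illustrative assumptions, and the per-output Gaussian perturbation merely stands in for the paper's transistor-level mismatch model.

```python
import numpy as np

# Hedged sketch of noise-injection training. All numeric choices below
# (network size, MISMATCH_STD, LR, epoch count) are illustrative assumptions,
# not values from the paper.
rng = np.random.default_rng(0)

# Tiny 2-4-1 MLP with tanh activations, trained on XOR.
W1 = rng.normal(0.0, 0.5, (4, 2)); b1 = np.zeros(4)
W2 = rng.normal(0.0, 0.5, (1, 4)); b2 = np.zeros(1)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

MISMATCH_STD = 0.05  # assumed std of the output perturbation caused by mismatch
LR = 0.5

def forward(x, noisy):
    """Forward pass; when noisy=True, add mismatch-modeled noise at each layer output."""
    h = np.tanh(W1 @ x + b1)
    if noisy:
        h = h + rng.normal(0.0, MISMATCH_STD, h.shape)
    o = np.tanh(W2 @ h + b2)
    if noisy:
        o = o + rng.normal(0.0, MISMATCH_STD, o.shape)
    return h, o

for epoch in range(5000):
    for x, y in zip(X, Y):
        # Inject noise during training so the weights learn to tolerate it.
        h, o = forward(x, noisy=True)
        # Plain backprop through the mean (noiseless) path as a simple approximation.
        do = (o - y) * (1 - o**2)
        dh = (W2.T @ do) * (1 - h**2)
        W2 -= LR * np.outer(do, h); b2 -= LR * do
        W1 -= LR * np.outer(dh, x); b1 -= LR * dh
```

After training this way, the network's decisions remain correct even when the forward pass is perturbed, which is the fault-tolerance effect the paper measures on actual analog circuitry.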
Keywords
Backpropagation, Neural network hardware, Neural network training, Transistor mismatch
Fields of Science
0202 electrical engineering, electronic engineering, information engineering, 02 engineering and technology
WoS Q
Q1
Scopus Q
Q1

OpenCitations Citation Count
11
Source
IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing
Volume
48
Issue
3
Start Page
272
End Page
281
PlumX Metrics
Citations
CrossRef : 5
Scopus : 13
Captures
Mendeley Readers : 5