Concurrent monitoring of operational health in neural networks through balanced output partitions

E. Ozen, A. Orailoglu - 2020 25th Asia and South Pacific Design Automation Conference (ASP-DAC), 2020 - ieeexplore.ieee.org
The widespread use of deep neural networks in safety-critical domains such as autonomous driving raises concerns regarding the impact of hardware-level faults on deep neural network computations. As a failure can prove disastrous, low-cost safety mechanisms are needed to check the integrity of the deep neural network computations. We embed safety checksums into deep neural networks by introducing a custom regularization term into the network training. We partition the outputs of each network layer into two groups and guide the network to balance the summations of these groups through an additional penalty term in the cost function. The proposed approach delivers twin benefits: the embedded checksums provide low-cost detection of computation errors whenever the trained equilibrium is violated during inference, while the regularization term helps the network generalize better during training by preventing overfitting, leading to significantly higher network accuracy.
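Below is a minimal sketch of the balanced-output-partition idea described in the abstract, written against a PyTorch-style training loop. The network architecture, the even/odd partition of the hidden units, the penalty weight `lam`, and the helpers `BalancedMLP` and `loss_with_balance` are illustrative assumptions, not details taken from the paper; they only indicate how a group-sum checksum and a balance penalty could be wired into training.

```python
import torch
import torch.nn as nn

# Illustrative sketch (assumptions, not the paper's implementation):
# split a layer's outputs into two fixed groups, compute the difference
# of the group sums as a checksum, and penalize its magnitude during
# training so that a violated balance at inference time flags a fault.

class BalancedMLP(nn.Module):
    def __init__(self, in_dim=784, hidden=256, out_dim=10):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, out_dim)
        # Assumed fixed even/odd partition of the hidden units.
        self.group_a = torch.arange(0, hidden, 2)
        self.group_b = torch.arange(1, hidden, 2)

    def forward(self, x):
        h = torch.relu(self.fc1(x))
        # Checksum: difference between the two group sums, per sample.
        checksum = h[:, self.group_a].sum(dim=1) - h[:, self.group_b].sum(dim=1)
        return self.fc2(h), checksum


def loss_with_balance(logits, checksum, targets, lam=1e-3):
    # Task loss plus a penalty that drives the two group sums toward equality.
    task_loss = nn.functional.cross_entropy(logits, targets)
    balance_penalty = (checksum ** 2).mean()
    return task_loss + lam * balance_penalty


# Tiny usage example with random data.
model = BalancedMLP()
x = torch.randn(32, 784)
y = torch.randint(0, 10, (32,))
logits, checksum = model(x)
loss = loss_with_balance(logits, checksum, y)
loss.backward()

# At inference, |checksum| exceeding a small threshold would signal a
# computation error in that layer's outputs.
```

The threshold for flagging a checksum violation at inference time is left open here; it would depend on how tightly the training drives the two group sums toward equality.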