Efficient search for inputs causing high floating-point errors

WF Chiang, G Gopalakrishnan, Z Rakamaric… - Proceedings of the 19th …, 2014 - dl.acm.org
Tools for floating-point error estimation are fundamental to program understanding and optimization. In this paper, we focus on tools for determining the input settings to a floating-point routine that maximize its result error. Such tools can help support activities such as precision allocation, performance optimization, and auto-tuning. We benchmark current abstraction-based precision analysis methods, and show that they often do not work at scale, or generate highly pessimistic error estimates, often caused by non-linear operators or complex input constraints that define the set of legal inputs. We show that while concrete-testing-based error estimation methods based on maintaining shadow values at higher precision can search out higher error-inducing inputs, suitable heuristic search guidance is key to finding higher errors. We develop a heuristic search algorithm called Binary Guided Random Testing (BGRT). In 45 of the 48 total benchmarks, including many real-world routines, BGRT returns higher guaranteed errors. We also evaluate BGRT against two other heuristic search methods called ILS and PSO, obtaining better results.
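The following is a minimal sketch, not the authors' implementation, of the two ideas the abstract describes: estimating error with a higher-precision shadow value (here float64 against a float32 working precision), and steering random testing with a binary-partition heuristic loosely inspired by BGRT. The routine poly and all parameter names are illustrative assumptions.

    # Sketch: shadow-value error estimation plus a BGRT-like guided random search.
    # Assumptions: `poly` is a stand-in routine; partition/resampling details
    # simplify the actual BGRT algorithm described in the paper.
    import random
    import numpy as np

    def poly(xs, dtype):
        # Numerically sensitive example routine evaluated in the given dtype;
        # the cancellation (a + b) - a makes rounding error easy to observe.
        x = np.array(xs, dtype=dtype)
        return ((x[0] + x[1]) - x[0]) * x[2]

    def observed_error(xs):
        lo = poly(xs, np.float32)          # working-precision result
        hi = poly(xs, np.float64)          # higher-precision shadow value
        return abs(float(hi) - float(lo))  # observed error of the low-precision run

    def bgrt_like_search(n_inputs=3, low=1e-3, high=1e6, rounds=50, samples=100):
        # Split the input variables into two halves; each round keeps one half
        # fixed at the best-known values and randomly resamples the other half,
        # flipping the partition when no improvement is found.
        best_x = [random.uniform(low, high) for _ in range(n_inputs)]
        best_err = observed_error(best_x)
        frozen = set(range(n_inputs // 2))
        for _ in range(rounds):
            improved = False
            for _ in range(samples):
                cand = [best_x[i] if i in frozen else random.uniform(low, high)
                        for i in range(n_inputs)]
                err = observed_error(cand)
                if err > best_err:
                    best_x, best_err, improved = cand, err, True
            if not improved:
                frozen = set(range(n_inputs)) - frozen  # swap which half is fixed
        return best_x, best_err

    if __name__ == "__main__":
        x, err = bgrt_like_search()
        print("high-error input:", x, "observed error:", err)

Because the error is measured on concrete runs, any value this search reports is a guaranteed lower bound on the routine's worst-case error, which is the sense in which the abstract speaks of "higher guaranteed errors".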