Comparing human-centric and robot-centric sampling for robot deep learning from demonstrations

M Laskey, C Chuck, J Lee, J Mahler… - 2017 IEEE International Conference on Robotics and Automation (ICRA), 2017 - ieeexplore.ieee.org
Motivated by recent advances in Deep Learning for robot control, this paper considers two learning algorithms in terms of how they acquire demonstrations from fallible human supervisors. Human-Centric (HC) sampling is a standard supervised learning algorithm, where a human supervisor demonstrates the task by teleoperating the robot to provide trajectories consisting of state-control pairs. Robot-Centric (RC) sampling is an increasingly popular alternative used in algorithms such as DAgger, where a human supervisor observes the robot execute a learned policy and provides corrective control labels for each state visited. We suggest RC sampling can be challenging for human supervisors and prone to mislabeling. RC sampling can also induce error in policy performance because it repeatedly visits areas of the state space that are harder to learn. Although policies learned with RC sampling can be superior to HC sampling for standard learning models such as linear SVMs, policies learned with HC sampling may be comparable to RC when applied to expressive learning models such as deep learning and hyper-parametric decision trees, which can achieve very low training error provided there is enough data. We compare HC and RC using a grid world environment and a physical robot singulation task. In the latter, the input is a binary image of objects on a planar work surface and the policy generates a motion of the gripper to separate one object from the rest. We observe in simulation that for linear SVMs, policies learned with RC outperform those learned with HC, but that with deep models this advantage disappears. We also find that with RC, the corrective control labels provided by humans can be highly inconsistent. We prove there exists a class of examples in which, in the limit, HC is guaranteed to converge to an optimal policy while RC may fail to converge. These results suggest a form of HC sampling may be preferable for highly expressive learning models and human supervisors.
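The abstract contrasts the two data-collection schemes only in prose; the sketch below illustrates the distinction in Python. It is not code from the paper: the env interface (reset() returning a state vector, step(u) returning the next state and a done flag), the expert_policy callable standing in for the human supervisor, and the use of scikit-learn's LinearSVC as the "standard" learner are assumptions made purely for illustration. The only point it encodes is who controls the robot while training data is gathered: the human under HC, the current learned policy under RC.

```python
# Hedged sketch of HC vs. RC (DAgger-style) sampling. All names below
# (env, expert_policy, horizon, etc.) are hypothetical placeholders, not
# APIs from the paper.
import numpy as np
from sklearn.svm import LinearSVC  # one of the "standard" learners named in the abstract


def rollout(env, policy, horizon):
    """Run `policy` in `env` for up to `horizon` steps; return the visited states."""
    states, state = [], env.reset()
    for _ in range(horizon):
        states.append(state)
        state, done = env.step(policy(state))
        if done:
            break
    return states


def hc_sampling(env, expert_policy, horizon, n_demos):
    """HC: the human teleoperates, so states are visited under the expert's own policy."""
    X, y = [], []
    for _ in range(n_demos):
        for s in rollout(env, expert_policy, horizon):
            X.append(s)
            y.append(expert_policy(s))  # label = the control the human actually applied
    return LinearSVC().fit(np.array(X), np.array(y))


def rc_sampling(env, expert_policy, horizon, n_iters, demos_per_iter):
    """RC: the robot executes its current policy; the human only supplies corrective
    labels for the states it visits, and the data set is aggregated across iterations."""
    X, y = [], []
    # Bootstrap with one batch of human-controlled demonstrations so a first policy exists.
    for _ in range(demos_per_iter):
        for s in rollout(env, expert_policy, horizon):
            X.append(s)
            y.append(expert_policy(s))
    learner = LinearSVC().fit(np.array(X), np.array(y))
    for _ in range(n_iters):
        for _ in range(demos_per_iter):
            # States now come from the *robot's* state distribution...
            robot_policy = lambda state: learner.predict([state])[0]
            for s in rollout(env, robot_policy, horizon):
                X.append(s)
                y.append(expert_policy(s))  # ...but labels come from the human supervisor
        learner = LinearSVC().fit(np.array(X), np.array(y))
    return learner
```

Under this framing, the abstract's argument is that the retraining loop in rc_sampling concentrates labeling effort on states the current policy reaches, which can be hard for a human to label consistently, whereas hc_sampling only ever asks the human to label states along their own demonstrations.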