Malcolm Reynolds¹, Jozef Doboš¹, Leto Peel², Tim Weyrich¹, Gabriel Brostow¹
¹ University College London
² Advanced Technology Centre, BAE Systems, Bristol, UK
Time-of-Flight cameras provide high-frame-rate depth measurements within a limited range of distances. These readings can be extremely noisy and display unique errors, for instance, where scenes contain depth discontinuities or materials with low infrared reflectivity. Previous works have treated the amplitude of each Time-of-Flight sample as a measure of confidence. In this paper, we demonstrate the shortcomings of this common lone heuristic, and propose an improved per-pixel confidence measure using a Random Forest regressor trained with real-world data. Using an industrial laser scanner for ground truth acquisition, we evaluate our technique on data from two different Time-of-Flight cameras. We argue that an improved confidence measure leads to superior reconstructions in subsequent steps of traditional scan processing pipelines. At the same time, data with confidence reduces the need for point cloud smoothing and median filtering.
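The abstract outlines the core regression setup: per-pixel features from a Time-of-Flight frame are mapped to a confidence value by a Random Forest trained against laser-scanner ground truth. The abstract does not specify the feature set, so the sketch below is only illustrative: it assumes hypothetical features (amplitude, measured depth, local depth variance) and a confidence target derived from per-pixel depth error, and it uses scikit-learn's RandomForestRegressor rather than the authors' implementation.

# Minimal sketch (not the authors' method): per-pixel ToF confidence regression.
# Feature choice and the error-to-confidence mapping here are assumptions.
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.ensemble import RandomForestRegressor

def local_variance(depth, size=5):
    """Per-pixel variance of depth over a size x size window: E[x^2] - E[x]^2."""
    d = depth.astype(float)
    return uniform_filter(d**2, size) - uniform_filter(d, size)**2

def pixel_features(amplitude, depth):
    """Stack hypothetical per-pixel features into an (N, 3) matrix."""
    return np.stack([amplitude.ravel(),
                     depth.ravel(),
                     local_variance(depth).ravel()], axis=1)

def fit_confidence_model(amplitude, depth, gt_depth):
    """amplitude, depth, gt_depth: H x W arrays; gt_depth from a laser scanner."""
    feats = pixel_features(amplitude, depth)
    # Map per-pixel absolute depth error to a confidence target in (0, 1]:
    # small error -> confidence near 1, large error -> confidence near 0.
    err = np.abs(depth - gt_depth).ravel()
    target = np.exp(-err / (err.mean() + 1e-9))
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(feats, target)
    return model

At test time, predicting on the same features for a new frame (model.predict(pixel_features(amplitude, depth)).reshape(depth.shape)) yields a per-pixel confidence map that can weight or cull samples before the scan-processing steps mentioned above.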
Malcolm Reynolds, Jozef Doboš, Leto Peel, Tim Weyrich, and Gabriel Brostow. Capturing Time-of-Flight Data with Confidence. In Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1–8, Colorado Springs, June 2011.
We are grateful to Jan Boehm and Stuart Robson for sharing their laser scanner and expertise, and to Marc Pollefeys for lending us the SR-3100. Thanks to Maciej Gryka, Oisin Mac Aodha, and Frédéric Besse, who helped during data collection, and to Jim Rehg for valuable discussions. We would also like to thank the reviewers for their feedback and suggestions. The student authors were supported by the UK EPSRC-funded Engineering Doctorate Centre in Virtual Environments, Imaging and Visualisation (EP/G037159/1).