Knowledge of environmental depth is essential for multiple robotics and computer vision tasks, in both terrestrial and underwater scenarios.
Recent works aim to enable depth perception from a single RGB image using deep architectures, such as convolutional neural networks and vision transformers, which are generally unsuitable for real-time inference on low-power embedded hardware. Moreover, such architectures are trained to estimate depth maps mainly in terrestrial scenarios, owing to the scarcity of underwater depth data.
To this end, we present two lightweight architectures based on optimized MobileNetV3 encoders and a specifically designed decoder to achieve fast inference and accurate estimation on embedded devices, along with a feasibility study on predicting depth maps in underwater scenarios.
Precisely, we propose the MobileNetV3_S75 configuration for inference on 32-bit ARM CPUs and the MobileNetV3_LMin configuration for 8-bit Edge TPU hardware.
In underwater settings, the proposed designs achieve estimation accuracy comparable to state-of-the-art methods while offering faster inference.
The proposed architectures can therefore be considered a promising approach to real-time monocular depth estimation, with the aim of improving environment perception for underwater drones, lightweight robots, and Internet-of-Things devices.
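The efficiency of MobileNetV3-style encoders stems largely from depthwise-separable convolutions, which factor a standard convolution into a per-channel spatial filter followed by a 1x1 pointwise projection. A minimal sketch of the resulting parameter-count saving is shown below; the function names and layer sizes are illustrative assumptions, not taken from the paper.

```python
# Illustrative parameter-count comparison (hypothetical layer sizes):
# standard 3x3 convolution vs. the depthwise-separable factorization
# used in MobileNetV3 blocks.

def conv_params(k, c_in, c_out):
    # Standard convolution: one k x k kernel per (input, output) channel pair.
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # Depthwise: one k x k kernel per input channel;
    # pointwise: a 1x1 convolution projecting c_in -> c_out channels.
    return k * k * c_in + c_in * c_out

standard = conv_params(3, 128, 128)                   # 147456 parameters
separable = depthwise_separable_params(3, 128, 128)   # 17536 parameters
print(round(standard / separable, 1))                 # ~8.4x fewer parameters
```

For a 3x3 kernel the saving approaches a factor of nine as the channel count grows, which is why such blocks suit low-power CPUs and Edge TPUs.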
Publication details
2022, IEEE International Workshop on Metrology for the Sea, Pages -
Real-time monocular depth estimation on embedded devices: challenges and performances in terrestrial and underwater scenarios (04b Conference proceedings paper)
Papa Lorenzo, Russo Paolo, Amerini Irene
Research groups: Computer Vision, Computer Graphics, Deep Learning; Theory of Deep Learning