Robotic positioning and orientation based on object detection through image recognition using deep convolutional neural networks

Authors

  • Ana Gabriella Amorim Abreu Pereira, Instituto de Engenharia Nuclear
  • Cláudio Márcio do Nascimento Abreu Pereira, IEN

Abstract

This report presents an approach for robot positioning and orientation based on image recognition using deep neural network (DNN) architectures. In 2012, the power of deep learning (DL) was effectively demonstrated to the world, mainly through DNN architectures. In this context, image recognition has been considerably improved by the use of deep convolutional neural networks (CNN) [1], which were developed specifically for this purpose.

Accuracy in evaluating the robot's position is a critical feature in autonomous and teleoperated robotics and the focus of many investigations. Global positioning systems (GPS) may be used for this task; however, in some inhospitable places or in indoor operations they may not work. To date, efforts have been made to overcome this difficulty using electronic sensors and image recognition [2, 3].

The proposed approach is intended to be used in environments in which some objects (e.g., symbols, plates, doors, stairs, windows, etc.) can be recognized and their dimensions and positions are well known. CNN-based architectures such as R-CNN [4] and its variants may be used to detect these objects. Figure 1 shows a schematic diagram of the R-CNN approach.

Figure 1. R-CNN approach.

The R-CNN architecture first searches for candidate regions (bounding boxes) that may contain objects and then applies a CNN to each bounding box to evaluate the probability that it contains an object, returning the bounding-box positions and dimensions together with the object probabilities.
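As an illustration only, and not the implementation used in this work, the Python sketch below shows how bounding boxes, class labels and scores could be obtained from a pretrained Faster R-CNN [4] through the torchvision library; the input file name and the score threshold are hypothetical choices.

```python
# Illustrative sketch: object detection with a pretrained Faster R-CNN (torchvision).
# The image "scene.jpg" and the 0.8 score threshold are hypothetical.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()  # inference mode

image = to_tensor(Image.open("scene.jpg").convert("RGB"))  # CxHxW tensor in [0, 1]
with torch.no_grad():
    pred = model([image])[0]  # dict with "boxes", "labels" and "scores"

keep = pred["scores"] > 0.8  # keep only confident detections
for box, label, score in zip(pred["boxes"][keep], pred["labels"][keep], pred["scores"][keep]):
    x1, y1, x2, y2 = box.tolist()
    print(f"class {label.item()}: score={score:.2f}, bbox=({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f})")
```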

The distance from an object to the camera is evaluated by estimating the object's apparent dimension in the scene relative to its known physical dimension. With the distances to at least two objects of known position, a triangulation algorithm may be used to calculate the robot's position. Figure 2 illustrates the proposed approach.

Figure 2. Positioning system approach.
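A minimal sketch of this geometry is given below, assuming a pinhole camera with known focal length and landmarks of known width and position; distances are obtained from apparent size, and the robot position is recovered by intersecting the two distance circles (a distance-based variant of the triangulation step). Function names and numerical values are hypothetical and do not correspond to the system's actual code.

```python
# Sketch of distance estimation (pinhole camera model) and position estimation
# from two landmarks of known position; all names and values are hypothetical.
import math

def estimate_distance(real_width_m, pixel_width, focal_length_px):
    """Distance to an object of known physical width from its apparent width in pixels."""
    return real_width_m * focal_length_px / pixel_width

def intersect_circles(p0, r0, p1, r1):
    """Intersect the two circles centred on landmarks p0 and p1 with radii r0 and r1.
    Returns the two candidate robot positions; the ambiguity can be resolved with a
    third landmark or prior knowledge of the environment."""
    (x0, y0), (x1, y1) = p0, p1
    d = math.hypot(x1 - x0, y1 - y0)
    a = (r0**2 - r1**2 + d**2) / (2 * d)      # distance from p0 to the chord midpoint
    h = math.sqrt(max(r0**2 - a**2, 0.0))     # half-length of the chord
    xm, ym = x0 + a * (x1 - x0) / d, y0 + a * (y1 - y0) / d
    return ((xm + h * (y1 - y0) / d, ym - h * (x1 - x0) / d),
            (xm - h * (y1 - y0) / d, ym + h * (x1 - x0) / d))

# Example: a 0.60 m wide door imaged 200 px wide and a 0.40 m wide plate imaged
# 160 px wide, with a camera focal length of 800 px.
d_door = estimate_distance(0.60, 200, 800)    # 2.4 m
d_plate = estimate_distance(0.40, 160, 800)   # 2.0 m
print(intersect_circles((0.0, 0.0), d_door, (3.0, 0.0), d_plate))
```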

This work is at a very early stage and is expected to be applied to improve the positioning system of a recently developed robot for radiological measurements, whose positioning system is, at the moment, quite simple and imprecise [5].

References

[1] Krizhevsky, A., Sutskever, I. and Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. International Conference on Neural Information Processing Systems, Lake Tahoe, 3-6 December 2012, pp. 1097-1105.

[2] Wang, C., Xing, L. and Tu, X. A Novel Position and Orientation Sensor for Indoor Navigation Based on Linear CCDs. Sensors, Vol. 20, 748, pp. 1-15, 2020.

[3] Amin Korayem, H., Niyavarani, E., Nekoo, S.R. and Korayem, M.H. Determining Position and Orientation of 6R Robot Using Image Processing. Int. J. of Advanced Design and Manufacturing Technology, Vol. 11, No. 1, pp. 95-103, 2018.

[4] Ren, S., He, K., Girshick, R. and Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015.

[5] Da Silveira, R.R. Protótipo de sistema robótico utilizando hardware livre para monitoração de ambientes radioativos. Dissertação M.Sc., PPGIEN, 2021.

Published

2021-08-12

Section

Complex Systems Engineering