Design of Smart Human Following on Rail Inspection Using Human Pose Estimation Marker-less Motion Capture Based on BlazePose

Adiratna Ciptaningrum

Abstract


Advances in artificial intelligence (AI) technology today have a significant impact on many aspects of human life. One example is the evolution of robotics, which has achieved the ability to follow human movements. To achieve this, AI technology uses image recognition through computer vision and the Human Pose Estimation method with the help of the BlazePose library, which can recognize 33 keypoints of the human body pose. This research aims to develop an automatic control system for inspection carts, enabling them to follow the movements of a walking person. The results showed an overall detection accuracy of 84.82%, with an optimal detection range of 4 to 8 meters from the camera, over which the average detection accuracy was 89.862%. On the motor control side, the system turns the motor off when the distance between the device and the tracked person is in the range of 1-2 meters and turns it on at a distance of 3-12 meters. However, the achieved accuracy is strongly affected by the software's color segmentation capability, the lighting conditions of the environment, and the resolution of the camera used.
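The distance-based motor rule described in the abstract can be sketched in Python. This is a minimal illustration, not the authors' implementation: the function name `motor_command` and the decision to keep the motor off outside both stated ranges are assumptions; in the full system the distance would be estimated from BlazePose landmarks rather than passed in directly.

```python
def motor_command(distance_m: float) -> bool:
    """Decide whether the cart's drive motor should run, given the
    estimated camera-to-person distance in meters.

    Thresholds follow the control rule stated in the abstract:
      * 1-2 m  -> motor off (person is close enough, stop)
      * 3-12 m -> motor on  (person walking ahead, follow)
    Any other distance is treated here as an unreliable detection and
    the motor stays off; that fallback is an assumption for this sketch.
    """
    if 1.0 <= distance_m <= 2.0:
        return False  # stop zone
    if 3.0 <= distance_m <= 12.0:
        return True   # follow zone
    return False      # out of range: stay stopped (assumed safe default)


if __name__ == "__main__":
    for d in (0.5, 1.5, 5.0, 15.0):
        print(d, "->", "ON" if motor_command(d) else "OFF")
```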


Keywords


inspection train; computer vision; human pose estimation






DOI: https://doi.org/10.52626/joge.v2i2.26



This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.


E-ISSN: 2964-2655