Author ORCID Identifier

https://orcid.org/0009-0009-8841-3478

Date Available

12-10-2025

Year of Publication

2025

Document Type

Master's Thesis

Degree Name

Master of Science in Mining Engineering (MSMIE)

College

Engineering

Department/School/Program

Mining Engineering

Faculty

Pedram Roghanchi

Faculty

Steven Schafrik

Abstract

Underground mining environments present several safety challenges due to low visibility, poor lighting, and the absence of GPS coverage. These factors hinder situational awareness and delay hazard detection, particularly during post-disaster conditions. This research introduces an integrated framework that combines image processing and machine learning techniques applied to 360-degree camera and LiDAR data to enable automated object detection and remote monitoring in underground mines.

The study was divided into two main stages: 2D object detection and 3D object detection. The 2D component employed images captured with a Ricoh Theta Z1 360° camera and utilized YOLO-based deep learning models for object identification.
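A minimal sketch of this 2D detection step, assuming the Ultralytics YOLO Python API; the weight file and image name below are illustrative placeholders, not the thesis's actual trained model or dataset.

```python
# Hypothetical 2D detection sketch using the Ultralytics YOLO API.
from ultralytics import YOLO

# "best.pt" stands in for fine-tuned detection weights; path is illustrative.
model = YOLO("best.pt")

# Run inference on an equirectangular frame from the 360-degree camera.
results = model("theta_z1_frame.jpg")

# Report detected objects with class labels, confidences, and bounding boxes.
for box in results[0].boxes:
    cls_name = model.names[int(box.cls)]
    conf = float(box.conf)
    x1, y1, x2, y2 = box.xyxy[0].tolist()
    print(f"{cls_name}: {conf:.2f} at ({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f})")
```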

The 3D component used LiDAR point clouds from the Ouster OS1-070-64 sensor to perform object recognition, registration, and position-shift analysis. Iterative Closest Point (ICP) alignment, statistical denoising, and surface reconstruction were applied to ensure geometric accuracy, and YOLO-based deep learning models were again used for object identification.
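A minimal sketch of the denoising and ICP registration steps, assuming the Open3D library; the file names, correspondence threshold, and identity initial guess are assumptions for illustration rather than the thesis's actual parameters.

```python
# Hypothetical 3D denoising + ICP registration sketch using Open3D.
import numpy as np
import open3d as o3d

# Load two LiDAR scans of the same scene; file names are illustrative.
source = o3d.io.read_point_cloud("scan_epoch_1.pcd")
target = o3d.io.read_point_cloud("scan_epoch_2.pcd")

# Statistical denoising: drop points far from their local neighborhood mean.
source, _ = source.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
target, _ = target.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Point-to-point ICP alignment; threshold and initial transform are assumed.
threshold = 0.05  # maximum correspondence distance in meters
reg = o3d.pipelines.registration.registration_icp(
    source, target, threshold, np.identity(4),
    o3d.pipelines.registration.TransformationEstimationPointToPoint())

print("Fitness:", reg.fitness, "Inlier RMSE:", reg.inlier_rmse)

# Apply the estimated rigid transform; residual offsets between the aligned
# clouds can then be inspected for position-shift analysis.
source.transform(reg.transformation)
```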

Together, the 2D and 3D frameworks provide a foundation for automated, data-driven monitoring and remote hazard assessment in GPS-denied environments, advancing safety and decision-making in underground mining operations.

Digital Object Identifier (DOI)

https://doi.org/10.13023/etd.2025.532

Funding Information

This study was funded by the National Institute for Occupational Safety and Health under award no. U600H012350 in 2023.
