Author ORCID Identifier

https://orcid.org/0000-0003-3705-3686

Year of Publication

2020

Degree Name

Doctor of Philosophy (PhD)

Document Type

Doctoral Dissertation

College

Engineering

Department

Electrical and Computer Engineering

First Advisor

Dr. Yuming Zhang

Abstract

Welding is an important joining technique that has been widely automated and robotized. In automated/robotic welding applications, however, the parameters are preset and are not adaptively adjusted to overcome unpredicted disturbances, which prevents these applications from meeting welding/manufacturing industry standards for quality, efficiency, and customization. Combining information sensing and processing with traditional welding techniques is a significant step toward revolutionizing the welding industry. In practical welding, the weld penetration, as measured by the back-side bead width, is a critical factor in determining the integrity of the weld produced. However, the back-side bead width is difficult to monitor directly during manufacturing because it forms on the underside of the welded workpiece. Predicting the back-side bead width from conveniently sensed information on the welding process is therefore a fundamental problem in intelligent welding.

Traditional research methods follow an indirect process: key characteristic information is defined and extracted from the sensed data, and a model is then built to predict the target information from those characteristics. Limited feature information, cumulative errors in the extracted information, and the complexity of the sensing process directly degrade prediction accuracy and real-time performance. This dissertation proposes an end-to-end, data-driven prediction system that predicts the weld penetration status from top-side images acquired during welding. In this method, a passive-vision sensing system with two cameras is developed to simultaneously monitor the top-side and back-side bead information. The weld joints are then classified into three classes (i.e., under penetration, desirable penetration, and excessive penetration) according to the back-side bead width. Taking the weld pool-arc images as inputs and the corresponding penetration statuses as labels, an end-to-end convolutional neural network (CNN) is designed and trained so that the features are defined and extracted automatically.
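The end-to-end idea above can be sketched as a small image classifier that maps a top-side weld pool-arc image directly to one of the three penetration classes. This is a minimal illustrative sketch in PyTorch; the layer counts, kernel sizes, and input resolution are assumptions for illustration, not the dissertation's actual architecture.

```python
import torch
import torch.nn as nn

class PenetrationCNN(nn.Module):
    """Hypothetical end-to-end CNN: weld pool-arc image -> 3-class penetration status."""

    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2, padding=2),  # grayscale input
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.classifier = nn.Linear(32 * 4 * 4, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

model = PenetrationCNN()
batch = torch.randn(8, 1, 224, 224)  # a batch of top-side images
logits = model(batch)                # one score per class:
                                     # under / desirable / excessive penetration
print(logits.shape)                  # torch.Size([8, 3])
```

Because the network consumes raw images, no hand-crafted weld pool features need to be defined; the convolutional layers learn them from the labeled data.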

To increase accuracy and training speed, a transfer learning approach based on a residual neural network (ResNet) is developed. The ResNet-based model is pre-trained on the ImageNet dataset so that it possesses strong feature-extraction ability, and its fully connected layers are modified for our own dataset. Our experiments show that this transfer learning approach decreases training time and improves performance. Furthermore, this study proposes fusing the present weld pool-arc image with two previous images acquired 1/6 s and 2/6 s earlier. The fused single image thus reflects the dynamic welding phenomena, and prediction accuracy is significantly improved by fusing this temporal information into the input layer of the CNN (early fusion). Given the critical role of weld penetration and the negligible impact on system implementation, this method represents major progress in the field of weld-penetration monitoring and is expected to yield even greater improvements in welding with pulsed current, where the process becomes highly dynamic.

Digital Object Identifier (DOI)

https://doi.org/10.13023/etd.2020.142
