Date Available

5-8-2020

Year of Publication

2020

Degree Name

Master of Science in Electrical Engineering (MSEE)

Document Type

Master's Thesis

College

Engineering

Department/School/Program

Electrical and Computer Engineering

First Advisor

Dr. Hasan Poonawala

Second Advisor

Dr. Jihye Bae

Abstract

Advances in computing power in recent years have facilitated developments in autonomous robotic systems. These robotic systems can be used in prosthetic limbs, warehouse packaging and sorting, assembly-line production, and many other applications. Designing these autonomous systems typically requires models of the robotic system and its environment (for classical control-based strategies) or time-consuming and computationally expensive training (for learning-based strategies). Often these requirements are difficult to fulfill. There are ways to combine classical control and learning-based strategies that mitigate both requirements. One such approach is to use gravity-compensated torque control with reinforcement learning (RL). We present an analysis of torque control with and without gravity compensation when coupled with RL in a reaching task using a simulated seven-degree-of-freedom robotic arm.
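As a rough illustration of the idea (a sketch, not code from the thesis), gravity-compensated torque control adds the modeled gravity torque G(q) to the torque commanded by the learned policy, so the policy need not learn to hold the arm against gravity. The names policy, gravity_vector, and control_torque below are hypothetical placeholders.

    import numpy as np

    def gravity_vector(q):
        # Placeholder for the modeled gravity torque G(q) of the arm;
        # in practice this comes from the robot's dynamics model.
        return np.zeros_like(q)

    def control_torque(policy, q, obs):
        # The RL policy outputs a torque command; the modeled gravity
        # term is added so the commanded torque compensates for gravity.
        tau_rl = policy(obs)       # learned torque from the RL agent
        tau_g = gravity_vector(q)  # modeled gravity torque G(q)
        return tau_rl + tau_g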

The results of our experiments demonstrate that gravity compensation coupled with RL, while requiring only that the gravity vector be modeled, reduces the training required in some (but not all) reaching tasks. Specifically, the benefits of training with gravity-compensated torque control appear to be contingent on goal location. We show that when the goal location is high, gravity compensation offers a greater advantage, while for a low goal location it offers less advantage or is a disadvantage during training.

Digital Object Identifier (DOI)

https://doi.org/10.13023/etd.2020.190
