Author ORCID Identifier

https://orcid.org/0000-0001-6590-1334

Date Available

4-28-2020

Year of Publication

2020

Degree Name

Doctor of Philosophy (PhD)

Document Type

Doctoral Dissertation

College

Arts and Sciences

Department/School/Program

Mathematics

First Advisor

Dr. Qiang Ye

Abstract

Despite the recent success of various machine learning techniques, there are still numerous obstacles that must be overcome. One obstacle is the vanishing/exploding gradient problem, in which gradients either shrink toward zero or grow without bound; it is a well-known problem that commonly arises in Recurrent Neural Networks (RNNs). In this work we describe how this problem can be mitigated, establish three different architectures designed to avoid the issue, and derive update schemes for each architecture. Another portion of this work focuses on the widely used technique of batch normalization. Although it has been found to decrease training times and help prevent overfitting, it is still not well understood why the technique works. In this work we describe batch normalization and provide a potential alternative, with the end goal of improving our understanding of how batch normalization works.
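
The abstract's first topic is the vanishing/exploding gradient problem in RNNs. As a rough illustration only (not the architectures or update schemes developed in the dissertation), the sketch below tracks how a gradient's norm changes when it is repeatedly multiplied by the transpose of a recurrent weight matrix during backpropagation through time; the function name and parameters are invented for this example.

```python
import numpy as np

def gradient_norm_through_time(recurrent_scale, num_steps=50, hidden_size=32, seed=0):
    """Toy sketch of the vanishing/exploding gradient mechanism.

    In a vanilla RNN with h_t = tanh(W h_{t-1} + b), backpropagation multiplies
    the incoming gradient by W^T diag(tanh'(a_t)) at every time step.  Here we
    track only the repeated multiplication by W^T (the diagonal factor has
    entries at most 1, so it can only shrink the gradient further).
    """
    rng = np.random.default_rng(seed)
    # Random orthogonal matrix rescaled so its spectral radius equals recurrent_scale.
    q, _ = np.linalg.qr(rng.standard_normal((hidden_size, hidden_size)))
    W = recurrent_scale * q

    grad = rng.standard_normal(hidden_size)
    grad /= np.linalg.norm(grad)          # unit-norm gradient arriving from the loss
    for _ in range(num_steps):
        grad = W.T @ grad                 # one step of backpropagation through time
    return np.linalg.norm(grad)

print(gradient_norm_through_time(0.5))   # ~0.5**50 ≈ 9e-16: vanishing
print(gradient_norm_through_time(1.0))   # ≈ 1: an orthogonal recurrence preserves the norm
print(gradient_norm_through_time(1.5))   # ~1.5**50 ≈ 6e8: exploding
```

Keeping the recurrent matrix's spectral radius at 1 (the middle case) is one well-known way such gradient decay or growth can be mitigated, which is the general setting the abstract refers to.

The abstract's second topic is batch normalization. For reference, below is a minimal sketch of the standard training-time transform (per-feature mean and variance over the mini-batch, followed by a learned scale and shift); this is the textbook operation, not the alternative proposed in the dissertation.

```python
def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Standard batch normalization over a mini-batch (axis 0):
    normalize each feature to zero mean and unit variance, then apply
    a learnable scale (gamma) and shift (beta)."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

batch = np.random.default_rng(1).normal(loc=5.0, scale=3.0, size=(64, 10))
normalized = batch_norm(batch)
print(normalized.mean(axis=0).round(6))  # ~0 for every feature
print(normalized.std(axis=0).round(3))   # ~1 for every feature
```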

Digital Object Identifier (DOI)

https://doi.org/10.13023/etd.2020.223
