Degree Name

Doctor of Philosophy (PhD)

Document Type

Doctoral Dissertation


Business and Economics



First Advisor

Dr. Urton Anderson

Second Advisor

Dr. Benjamin Commerford


Abstract

Artificial intelligence (AI) systems’ capabilities are rapidly expanding to perform complex tasks once reserved for humans. With machine learning algorithms, AI can learn and adapt as it encounters more data, enabling these systems to improve the quality of accounting estimates that have traditionally been difficult for humans to develop. Although AI systems’ capability to adapt has potential benefits, these systems have also become increasingly complex, making it difficult for individuals to understand the processes or algorithms they use to produce advice. Practitioners worry that when algorithms behave like “black boxes,” this opacity may lead individuals to rely less on the evidence these systems provide. This study examines whether the degree of measurement uncertainty within a complex estimate influences the weight individuals place on advice provided by an algorithm, and whether that relationship depends on the algorithm’s capability to adapt. I experimentally demonstrate that higher levels of measurement uncertainty increase individuals’ reliance on advice provided by an algorithm, but only when that advice is produced by an algorithm capable of adapting (versus one that is not). I also find that this joint effect of measurement uncertainty and algorithm adaptability on individuals’ advice utilization operates indirectly through individuals’ willingness to trust the algorithm. This study provides important insights for firms planning to deploy AI systems that will assist accounting professionals in developing and evaluating complex estimates.

Included in

Accounting Commons