Author ORCID Identifier

https://orcid.org/0000-0002-2797-2112

Date Available

9-10-2024

Year of Publication

2024

Degree Name

Doctor of Philosophy (PhD)

Document Type

Doctoral Dissertation

College

Engineering

Department/School/Program

Computer Science

First Advisor

Henry G. Dietz

Second Advisor

Simone Silvestri

Abstract

Traditional photographic practice, as dictated by the properties of photochemical emulsion film, mechanical apparatus, and human operators, largely treats the sensitivity (gain) and integration interval as coarsely parameterized constants for the entire scene, set no later than the time of exposure. This frame-at-a-time capture and processing model permeates digital cameras and computer image processing. Emerging imaging technologies, such as time domain continuous imaging (TDCI), quanta image sensors (QIS), event cameras, and conventional sensors augmented with computational processing and control, provide opportunities to break out of the frame-oriented paradigm and instead capture a stream of data describing changes to scene appearance over the capture interval with high temporal precision. Captured scene data can then be computationally post-processed to render images with user control over the time interval being sampled and the gain of integration, not just for each image rendered but for every site in each rendered image, allowing the user to ideally expose each portion of the scene. For example, in a scene that contains a mixture of moving elements, some of which are more brightly lit than others, it becomes possible to render dark and light portions with different gains and potentially overlapping intervals, such that both have good contrast, neither suffers motion blur, and little to no artifacting occurs at the interfaces. This dissertation represents a preliminary exploration of the properties, applications, and tooling required to capture TDCI streams and render images from them in a paradigm that supports functional post-capture manipulation of time and gain.
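The per-site control described in the abstract can be illustrated with a small sketch. This is not code from the dissertation: it assumes a simplified TDCI-style model in which each pixel's stream is a time-sorted list of `(timestamp, brightness)` change events, with brightness piecewise-constant between events, and the function name and interface are hypothetical.

```python
# Hypothetical sketch of post-capture rendering from a TDCI-style change
# stream: each pixel is integrated over its own time interval and scaled
# by its own gain, chosen after capture rather than at exposure time.

def integrate_pixel(events, t_start, t_end, gain=1.0):
    """Integrate a piecewise-constant brightness signal over [t_start, t_end],
    scaled by a per-pixel gain.

    `events` is a time-sorted list of (time, value) change events; the value
    at time t is that of the most recent event at or before t. Events are
    assumed to cover the requested interval (any portion of the interval
    before the first event contributes zero).
    """
    total = 0.0
    for i, (t, value) in enumerate(events):
        # Each event's value holds from its timestamp until the next event
        # (or until t_end for the final event); clip that span to the
        # requested integration interval.
        seg_start = max(t, t_start)
        seg_end = t_end if i + 1 == len(events) else min(events[i + 1][0], t_end)
        if seg_end > seg_start:
            total += value * (seg_end - seg_start)
    # Normalize by interval length so differently sized intervals are comparable.
    return gain * total / (t_end - t_start)
```

Because the interval and gain are per-pixel arguments, a renderer built on this idea could give a dim static region a long interval and high gain while giving a bright moving region a short interval and low gain, within the same output image.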

Digital Object Identifier (DOI)

https://doi.org/10.13023/etd.2024.388

Funding Information

This work was supported by National Science Foundation Grant CSR: Small: Computational Support for Time Domain Continuous Imaging (no. 1422811) in 2017-2018.
