Author ORCID Identifier

https://orcid.org/0000-0001-9758-9744

Date Available

1-1-2025

Year of Publication

2025

Document Type

Doctoral Dissertation

Degree Name

Doctor of Philosophy (PhD)

College

Engineering

Department/School/Program

Computer Science

Advisor

Dr. Hana Khamfroush

Abstract

There are many definitions of Smart Cities. A common thread among them is that smart cities are technologically advanced cities that connect everything in a complex urban environment, including infrastructure, information, and even people, to cope with pressing problems of urban life such as traffic, pollution, crowding, health, and poverty. Central to this vision are the Internet of Things (IoT) and Big Data, where interconnected sensor-equipped devices collect vast amounts of data for informed decision-making. However, the rapid expansion of IoT devices challenges efficient data processing while meeting diverse Quality-of-Service (QoS) requirements; for instance, in a smart home, applications like fire detection require minimal latency, whereas smart plant watering tolerates higher delays. Additionally, many new smart-city applications depend on Machine Learning (ML) models, which are computationally intensive. These applications often exceed the capabilities of IoT devices, and transmitting large datasets to distant servers is resource-intensive. Edge Computing (EC) provides a solution by decentralizing computational resources closer to data sources, creating a three-tier architecture of user devices, edge servers, and cloud servers. This reduces latency and alleviates the burden on the cloud, but it introduces new resource management challenges due to the limited and heterogeneous capacities of edge servers, making efficient resource allocation critical. Edge Intelligence (EI) extends the benefits of EC by integrating ML capabilities at the network edge, enabling real-time data analysis and autonomous decision-making, enhancing service quality, and preserving user privacy. However, constrained edge resources necessitate sophisticated resource management strategies to balance conflicting QoS metrics such as computational latency and ML model performance.
Despite EI's potential, a research gap remains in effectively managing resources within EI systems to balance QoS trade-offs in a three-tier architecture. Existing studies often focus on service placement and task offloading without fully addressing the complexities introduced by ML applications and their unique performance requirements. This project addresses this gap by investigating resource management strategies for EI systems in Smart Cities. It focuses on: (1) intelligent offloading of EI tasks within the architecture; (2) incorporating request prioritization to ensure critical services receive the necessary resources; (3) combining offloading decisions with data compression to optimize network usage; (4) employing distributed learning algorithms to enhance task allocation decisions; and (5) optimizing the trade-off among ML performance metrics, latency, and system stability. By exploring these approaches, the project aims to develop a comprehensive resource management framework that maximizes QoS for end-users while efficiently utilizing limited edge resources.

Digital Object Identifier (DOI)

https://doi.org/10.13023/etd.2024.533

Funding Information

This study was supported by the National Science Foundation (NSF) and Cisco Systems Inc. under grant numbers 1948387 and 1215519250, respectively.
