Author ORCID Identifier

0000-0003-1754-754X

Document Type

Dissertation

Date of Award

5-31-2025

Degree Name

Doctor of Philosophy in Electrical Engineering - (Ph.D.)

Department

Electrical and Computer Engineering

First Advisor

Jay J. Han

Second Advisor

Nirwan Ansari

Third Advisor

Roberto Rojas-Cessa

Fourth Advisor

Qing Gary Liu

Fifth Advisor

Chen Chen

Abstract

Mixed reality (MR) and augmented reality (AR) systems are reshaping digital experiences by seamlessly integrating physical and virtual environments. This dissertation presents a comprehensive framework for next-generation immersive systems, combining advances in real-time data processing, multi-user synchronization, and secure communication. The core contributions are structured around three interconnected systems: MediVerse, TeleAvatar, and MultiAvatarLink, each addressing critical challenges in mobile MR.

MediVerse is a secure and scalable framework for real-time health and performance monitoring that integrates intelligent IoT sensors, wearable technologies, and MR interfaces. It supports multi-camera fusion, adaptive compression, and real-time three-dimensional (3D) point cloud generation, improving data accuracy and responsiveness for applications such as healthcare diagnostics, sports analytics, and interactive training. It also incorporates advanced data security and privacy mechanisms aligned with established principles of healthcare data protection.
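
To make the multi-camera fusion and point cloud generation step concrete, the following is a minimal, illustrative sketch (not the dissertation's implementation): each camera's depth map is back-projected with its intrinsics, transformed to the world frame with its extrinsics, and the merged cloud is voxel-averaged as a stand-in for the adaptive compression stage. Function names, input shapes, and the voxel size are assumptions for illustration.

```python
import numpy as np

def depth_to_points(depth, K):
    """Back-project a depth map (H x W, metres) to camera-frame 3D points."""
    h, w = depth.shape
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.reshape(-1)
    x = (u.reshape(-1) - cx) * z / fx
    y = (v.reshape(-1) - cy) * z / fy
    pts = np.stack([x, y, z], axis=1)
    return pts[z > 0]                              # drop invalid (zero-depth) pixels

def fuse_cameras(frames, voxel=0.01):
    """Fuse per-camera point clouds into one world-frame cloud.

    frames: list of (depth, K, T) tuples, where T is the 4x4 camera-to-world
    extrinsic. Voxel-grid averaging stands in here for adaptive compression.
    """
    clouds = []
    for depth, K, T in frames:
        pts = depth_to_points(depth, K)
        pts_h = np.c_[pts, np.ones(len(pts))]      # homogeneous coordinates
        clouds.append((T @ pts_h.T).T[:, :3])      # transform to world frame
    cloud = np.vstack(clouds)
    # Downsample: average all points that fall into the same voxel cell.
    keys = np.floor(cloud / voxel).astype(np.int64)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    sums = np.zeros((inv.max() + 1, 3))
    np.add.at(sums, inv, cloud)
    counts = np.bincount(inv)
    return sums / counts[:, None]
```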

TeleAvatar enables real-time synchronization between humans and 3D avatars in mixed reality environments. It relies on hybrid inverse kinematics, 3D keypoint mapping, and single-camera tracking to provide precise, real-time avatar control for single-user scenarios. This system is optimized for latency reduction and bandwidth efficiency, making it ideal for training simulations, remote collaboration, and personalized virtual interactions.
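
As a rough illustration of how tracked 3D keypoints can drive avatar joints, the sketch below shows only the closed-form (analytic) half of a hybrid inverse kinematics approach: a two-bone law-of-cosines solve for elbow flexion from shoulder, elbow, and wrist keypoints. The keypoint names and the mapping function are hypothetical; a full hybrid solver would refine this analytic seed iteratively against all tracked keypoints.

```python
import numpy as np

def two_bone_ik(shoulder, wrist_target, upper_len, fore_len):
    """Analytic two-bone IK: elbow flexion angle (radians) that places the
    wrist at the target distance from the shoulder (law of cosines)."""
    d = np.linalg.norm(wrist_target - shoulder)
    d = np.clip(d, abs(upper_len - fore_len) + 1e-6, upper_len + fore_len - 1e-6)
    cos_elbow = (upper_len**2 + fore_len**2 - d**2) / (2 * upper_len * fore_len)
    # Interior elbow angle is pi when the arm is straight, so flexion = pi - angle.
    return np.pi - np.arccos(np.clip(cos_elbow, -1.0, 1.0))

def keypoints_to_pose(kp):
    """Map single-camera 3D keypoints (e.g. from a pose estimator) to avatar
    joint angles. Keypoint names are illustrative, not a fixed schema."""
    return {
        "left_elbow_flex": two_bone_ik(
            kp["left_shoulder"], kp["left_wrist"],
            np.linalg.norm(kp["left_elbow"] - kp["left_shoulder"]),
            np.linalg.norm(kp["left_wrist"] - kp["left_elbow"]),
        )
    }
```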

MultiAvatarLink extends this single-user design to multi-user environments, integrating joint network selection, hybrid inverse kinematics, and deep learning-based multi-user identification. It tracks multiple users simultaneously from a single camera, optimizing bandwidth while preserving real-time responsiveness. This enables seamless, context-aware avatar tracking, a critical requirement for multi-user MR applications.
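
One way the multi-user identification step could be realized, sketched below under assumed inputs, is to match per-frame skeleton detections to enrolled users by comparing appearance or pose embeddings (e.g. from a re-identification network) and solving a minimum-cost assignment. The embedding shapes and function names are assumptions for illustration, not the dissertation's specific model.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_users(detection_embeddings, user_embeddings):
    """Match per-frame skeleton detections to enrolled users.

    Both inputs are L2-normalised embedding matrices (detections x dim and
    users x dim). Returns {detection_index: user_index} for the
    minimum-cost (Hungarian) assignment over cosine distance.
    """
    cost = 1.0 - detection_embeddings @ user_embeddings.T
    rows, cols = linear_sum_assignment(cost)
    return {int(r): int(c) for r, c in zip(rows, cols)}
```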

Together, these systems form a unified foundation for real-time MR, addressing critical challenges in latency, scalability, data security, and multi-user synchronization. They provide a blueprint for the next generation of immersive systems, offering transformative solutions for healthcare, manufacturing, education, and beyond. When fully integrated, these systems create a comprehensive multi-camera, multi-user MR platform, pushing the boundaries of immersive digital experiences and laying the groundwork for future innovations in AI-driven, multi-modal communication.
