Date of Award

12-2025

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department

Automotive Engineering

Committee Chair/Advisor

Venkat Krovi

Committee Member

Rahul Rai

Committee Member

Phanindra Tallapragada

Committee Member

Umesh Vaidya

Abstract

Conventional wheeled ground vehicles have been used for rough-terrain navigation in recent years. They consist of a chassis connected to wheels through passive, semi-active, or active suspension systems. However, their fixed configurations limit mobility and maneuverability, constraining their ability to autonomously navigate diverse and rough terrains. Autonomous Ground Vehicles (AGVs) face significant challenges in this regard, including varying terrain roughness, soil hardness, and obstacle crossing.

To address these limitations, Actively Articulated Wheeled Vehicle (AAWV) architectures have recently emerged, offering real-time geometric adaptability. AAWVs have chassis and wheels connected via articulated serial or parallel linkages. However, increased articulation introduces additional complexity in the coordinated control of multiple actuators, leading to high-dimensional modeling requirements, the consideration of contact constraints, and the need to resolve kinematic and actuation redundancies. This complexity poses a major challenge for purely model-based control approaches, as vehicle and terrain variability demands more generalizable models.

Model-free Deep Reinforcement Learning (DRL) has gained popularity for handling such systems, given its ability to learn control policies without explicit modeling. The wheeled-legged robot community has demonstrated various learning-based approaches for biped-wheeled and quadruped-wheeled robots. However, DRL suffers from sample inefficiency, safety concerns during early training, and susceptibility to overfitting to specific scenarios. Drawing from the Physics-Informed Machine Learning paradigm, combining model-based and learning-based methods can mitigate these shortcomings. Model-based controllers can provide reliable baseline actions that guarantee stability in simplified scenarios, while DRL can adapt to diverse and complex conditions. Together, they can enhance robustness, explainability, and generalizability.
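One common way to instantiate this kind of hybridization is a residual-policy structure, where a model-based controller supplies a baseline action and the learned policy contributes a bounded correction. The sketch below is illustrative only and is not taken from the dissertation: the proportional baseline, the `policy_correction` input (standing in for a DRL policy output), and the blending weight `alpha` are all assumptions made for the example.

```python
import numpy as np

def baseline_controller(state, k_p=1.0):
    """Model-based baseline: a simple proportional law driving the state
    toward the origin (a stand-in for e.g. a kinematic tracking controller)."""
    return -k_p * state

def hybrid_action(state, policy_correction, alpha=0.5):
    """Hybrid action = model-based baseline + bounded learned residual.

    `policy_correction` is a hypothetical DRL policy output; clipping keeps
    the learned term bounded so the baseline dominates early in training.
    """
    residual = np.clip(policy_correction, -1.0, 1.0)
    return baseline_controller(state) + alpha * residual

# Example: blend a baseline action with a small learned correction.
state = np.array([0.4, -0.2])
action = hybrid_action(state, policy_correction=np.array([0.1, 0.05]))
```

In this arrangement the baseline guarantees sensible behavior in nominal conditions, while the residual term gives the learned policy room to compensate for unmodeled terrain and contact effects.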

This dissertation demonstrates the integration of model-based and DRL methods within a unified Hybrid Deep Reinforcement Learning (HDRL) framework that scales effectively to complex systems. Given the extensive training required, a simulation-first approach is adopted. We first illustrate the technological frameworks that enable the development and evaluation of AI-driven AAWVs. Subsequently, we present analysis and results on HDRL architectures applied to three representative cases: path tracking with an Ackermann-steered robot, trajectory tracking with a single biped-wheeled robot, and coordinated payload transport using multiple biped-wheeled robots. The results indicate that, depending on the modeling approach and task complexity, HDRL provides greater robustness, adaptability, sample efficiency, and explainability compared to single-method alternatives.

Author ORCID Identifier

0009-0004-1807-2173
