Date of Award

12-2025

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department

Mechanical Engineering

Committee Chair/Advisor

Dr. Javad Mohammadpour Velni

Committee Member

Dr. Ardalan Vahidi

Committee Member

Dr. Hassan Masoud

Committee Member

Dr. Phanindra Tallapragada

Abstract

This dissertation explores the interplay between Model Predictive Control (MPC), safety-critical control, and Artificial Intelligence (AI)/Machine Learning (ML) methods, with a particular focus on Reinforcement Learning (RL) and Bayesian Optimization (BO). We leverage AI/ML to address several challenges in the control design and safe operation of uncertain dynamical systems. In many applications, ranging from autonomous vehicles and robotics to energy systems and industrial processes, ensuring safety is as essential as satisfying control objectives. The use of Control Barrier Functions (CBFs) within the MPC framework has recently emerged as a powerful tool to guarantee safety by enforcing constraints in the optimal control problem. Despite their efficacy, integrating CBFs into MPC poses challenges, particularly when coping with system uncertainties or external disturbances. To overcome these challenges, in this dissertation, we design adaptive CBFs and robust formulations that enhance the ability of MPC to operate safely in uncertain environments. We also leverage BO and RL methods to refine the MPC scheme in a performance-oriented manner for single-agent systems, as well as a Distributed MPC (DMPC) framework for multi-agent systems, when the underlying prediction models cannot perfectly capture the real systems.
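
For context, an illustrative (not quoted from the dissertation) way a discrete-time CBF enters an MPC problem: given a barrier function \(h\) with safe set \(\mathcal{S} = \{x : h(x) \ge 0\}\), one enforces along the prediction horizon

```latex
h(x_{k+1}) \ge (1 - \gamma)\, h(x_k), \qquad \gamma \in (0, 1], \quad k = 0, \dots, N-1,
```

which renders \(\mathcal{S}\) forward invariant for the predicted trajectory; adaptive variants of the kind discussed above adjust \(\gamma\) (or \(h\) itself) online as uncertainties and disturbances are estimated.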

In the first part of this dissertation, we present two adaptive MPC-CBF frameworks, applied to Autonomous Underwater Vehicles (AUVs) and Unmanned Aerial Vehicles (UAVs), to address safety-critical motion planning under external disturbances. For AUVs, we propose a robust CBF-MPC framework augmented with Moving Horizon Estimation (MHE) to cope with model mismatch and disturbance inputs. For UAVs, we formulate a robust CBF within the MPC framework for safe net-recovery landings in the presence of wind disturbances. Both frameworks demonstrate the importance of integrating adaptive safety constraints to enhance motion planning performance while ensuring safety in unknown (or partially known) environments.
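
To illustrate the estimation ingredient, here is a minimal sketch of MHE posed as a batch least-squares problem over a finite window; the linear model, noise levels, and weight are illustrative assumptions, not the dissertation's AUV dynamics.

```python
import numpy as np

# Illustrative linear model (an assumption for this sketch):
# x_{k+1} = A x_k + w_k (process noise), y_k = C x_k + v_k (measurement noise)
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
n, N = 2, 10  # state dimension, estimation horizon

rng = np.random.default_rng(1)
x = np.array([0.0, 1.0])
xs, ys = [], []
for _ in range(N + 1):
    xs.append(x.copy())
    ys.append(C @ x + 0.01 * rng.standard_normal(1))
    x = A @ x + 0.01 * rng.standard_normal(n)

# MHE over the window as one weighted least-squares problem:
#   min over x_0..x_N:  sum_k ||y_k - C x_k||^2 + lam * sum_k ||x_{k+1} - A x_k||^2
lam = 10.0
rows, rhs = [], []
for k in range(N + 1):                      # measurement residuals
    r = np.zeros((1, (N + 1) * n))
    r[0, k * n:(k + 1) * n] = C
    rows.append(r)
    rhs.append(ys[k])
for k in range(N):                          # weighted model residuals
    r = np.zeros((n, (N + 1) * n))
    r[:, (k + 1) * n:(k + 2) * n] = np.sqrt(lam) * np.eye(n)
    r[:, k * n:(k + 1) * n] = -np.sqrt(lam) * A
    rows.append(r)
    rhs.append(np.zeros(n))

H, b = np.vstack(rows), np.concatenate(rhs)
xhat, *_ = np.linalg.lstsq(H, b, rcond=None)
x_now = xhat[-n:]                           # estimate of the current state
print(np.linalg.norm(x_now - xs[-1]))       # small estimation error
```

In a full MHE implementation the window slides forward at each step and an arrival cost summarizes discarded data; the batch form above shows only the core trade-off between fitting measurements and trusting the model.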

In the second part, we integrate the MPC framework with RL to enhance robustness and closed-loop performance. We investigate the combination of safety-critical stochastic MPC and RL to generate optimal closed-loop policies in the presence of model mismatch and unknown disturbances. Furthermore, we devise a Multi-Agent RL (MARL) framework based on DMPC to enable cooperative control in interconnected and networked systems. A key contribution of this work is the introduction of a parameterized DMPC-based MARL method that adjusts DMPC parameters to achieve optimal closed-loop control performance in complex and uncertain environments. We further explore data-driven control methods using Koopman Operator (KO) theory to design nonlinear MPC schemes combined with RL, addressing Koopman model inaccuracies through performance-oriented learning.
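
To make the Koopman-based ingredient concrete, here is a minimal sketch of Extended Dynamic Mode Decomposition (EDMD), the standard data-driven Koopman approximation: lift states through observables, fit a linear operator by least squares, and predict through the lifted linear model. The test system and observables are illustrative assumptions chosen so that an exact finite-dimensional lifting exists.

```python
import numpy as np

def lift(x):
    # Hand-picked observables [x1, x2, x1^2]; in practice these are chosen or learned.
    return np.array([x[0], x[1], x[0] ** 2])

a, b = 0.9, 0.5
def step(x):
    # Nonlinear system known to admit an exact Koopman lifting with the observables above.
    return np.array([a * x[0], b * x[1] + (a ** 2 - b) * x[0] ** 2])

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
Z = np.array([lift(x) for x in X])           # lifted states
Zp = np.array([lift(step(x)) for x in X])    # lifted successors

# EDMD: least-squares fit of a linear operator K with lift(x') ≈ K lift(x)
Kmat, *_ = np.linalg.lstsq(Z, Zp, rcond=None)
K = Kmat.T

# Multi-step prediction through the lifted linear model vs. the true system
x = np.array([0.7, -0.3])
z = lift(x)
for _ in range(10):
    z = K @ z
    x = step(x)
err = np.linalg.norm(z[:2] - x)
print(err)  # near machine precision here, since the lifting is exact
```

For general systems the lifting is only approximate, which is exactly the model inaccuracy the performance-oriented RL layer described above is meant to compensate for.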

In the third part of this dissertation, we investigate the use of BO and Multi-Objective Bayesian Optimization (MOBO) for refining parameterized MPC and DMPC schemes. We introduce a novel approach that leverages Multi-agent Bayesian Optimization (MABO) to design DMPC schemes for multi-agent systems. The primary objective is to learn optimal DMPC schemes even when local model predictive controllers rely on imperfect local models. As a second contribution in this part, we propose a framework that integrates MPC-RL with MOBO. The proposed MPC-RL-MOBO utilizes noisy evaluations of the RL stage cost and its gradient, estimated via a Compatible Deterministic Policy Gradient (CDPG) approach, and incorporates them into a MOBO algorithm using the Expected Hypervolume Improvement (EHVI) acquisition function. This fusion enables efficient and safe tuning of the MPC parameters to achieve improved closed-loop performance, even under model imperfections. Finally, to extend the use of MOBO to a combined estimator-controller structure based on MHE and MPC, we further propose leveraging MOBO to update the MHE parameters, while employing a stability-aware BO approach for refining the MPC parameters. This strategy enables sample-efficient, stable, and high-performance learning for combined estimation and control design purposes.
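
As background for the EHVI acquisition: EHVI scores a candidate by the expected increase in the hypervolume dominated by the Pareto front under the surrogate's posterior. A minimal sketch of the underlying hypervolume indicator for two minimization objectives follows; the front and reference point are illustrative values, not results from the dissertation.

```python
import numpy as np

def hypervolume_2d(front, ref):
    """Dominated hypervolume of a 2-D Pareto front under minimization.

    front: (n, 2) array of mutually non-dominated objective vectors.
    ref:   reference point dominated by every point on the front.
    """
    # Sort by the first objective; the second then decreases along the front.
    pts = front[np.argsort(front[:, 0])]
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)  # area of the new horizontal slab
        prev_f2 = f2
    return hv

front = np.array([[1.0, 3.0], [2.0, 2.0], [3.0, 1.0]])
ref = np.array([4.0, 4.0])
print(hypervolume_2d(front, ref))  # 6.0
```

A candidate controller parameterization whose (noisy) objective vector is non-dominated enlarges this volume; EHVI is the expectation of that enlargement under the Gaussian-process posterior over the objectives, which is what steers the sample-efficient tuning described above.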
