Statler College of Engineering and Mineral Resources


Lane Department of Computer Science and Electrical Engineering

Committee Chair

Yaser P. Fallah

Committee Co-Chair

Parviz Famouri

Committee Member

Muhammad A. Choudhry

Committee Member

Vinod K. Kulathumani

Committee Member

Scott W. Wayne


Information obtainable from Intelligent Transportation Systems (ITS) provides the possibility of improving the safety and efficiency of vehicles at different levels. In particular, such information can be utilized to predict driving conditions and traffic flow, which in turn improves the performance of control systems in vehicular applications such as Hybrid Electric Vehicle (HEV) powertrain control and Cooperative Adaptive Cruise Control (CACC). In the first part of this work, we study the design of a Model Predictive Control (MPC) controller for a CACC system, an automated application that provides drivers with benefits such as traffic throughput maximization and collision avoidance. CACC systems must be designed to be sufficiently robust against special maneuvers such as interfering vehicles cutting into the CACC platoon or hard braking by leading cars. To address this problem, we first propose a Neural-Network (NN)-based cut-in detection and trajectory prediction scheme. The predicted trajectory of each vehicle in the adjacent lanes is then used to estimate the probability of that vehicle cutting into the CACC platoon. To account for this probability in the control decisions, a Stochastic Model Predictive Controller (SMPC) could be designed that incorporates the cut-in probability and enhances the reaction to a detected dangerous cut-in maneuver. In this work, however, we propose an alternative way of solving this problem: we convert the SMPC problem into modeling the CACC platoon as a Stochastic Hybrid System (SHS), while still using a deterministic MPC controller running in the only state of the SHS model. Finally, we find the conditions under which the designed deterministic controller is stable and feasible for the proposed SHS model of the CACC platoon.
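The cut-in handling described above can be illustrated with a minimal sketch: a heuristic stand-in estimates the cut-in probability from a predicted lateral trajectory, and that probability inflates the desired inter-vehicle gap of a constant-time-headway spacing policy. All function names, thresholds, and the fraction-in-lane heuristic here are hypothetical simplifications, not the NN-based predictor or the SHS-based controller developed in the thesis.

```python
import numpy as np

def cut_in_probability(lateral_offsets, lane_width=3.5):
    """Hypothetical heuristic: probability that a neighboring vehicle
    cuts in, taken as the fraction of its predicted lateral positions
    (meters from the ego-lane center) that fall inside the ego lane.
    The thesis uses an NN-based trajectory predictor instead."""
    inside = np.abs(np.asarray(lateral_offsets)) < lane_width / 2.0
    return float(inside.mean())

def desired_gap(v_ego, p_cut_in, h=0.6, d0=2.0, d_extra=10.0):
    """Constant-time-headway spacing policy, inflated by the cut-in
    probability so the platoon opens a safety gap pre-emptively.
    h: headway time [s], d0: standstill gap [m], d_extra: maximum
    additional gap reserved for a likely cut-in [m] (all assumed)."""
    return d0 + h * v_ego + p_cut_in * d_extra
```

In a full MPC formulation this gap would enter the spacing-error term of the cost function over the prediction horizon; the sketch only shows how a scalar cut-in probability can bias the reference.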
In the second part of this work, we propose to improve the performance of one of the most promising real-time powertrain control strategies, the Adaptive Equivalent Consumption Minimization Strategy (AECMS), using predicted driving conditions. In this part, two different real-time powertrain control strategies are proposed for HEVs. The first proposed method, with three variations, introduces an adjustment factor for the cost of using electrical energy (the equivalence factor) in AECMS. The factor is proportional to the predicted energy requirements of the vehicle, the regenerative braking energy, and the cost of battery charging and discharging in a finite time window. Simulation results using detailed vehicle powertrain models show that the proposed control strategies improve the fuel economy of AECMS by 4%. Finally, we integrate recent developments in reinforcement learning to design a novel multi-level power distribution control. The proposed controller operates at two levels, namely high-level and low-level control. The high-level decision estimates the driving profile that best matches the current (and near-future) state of the vehicle. The corresponding low-level controller for the selected profile is then used to distribute the requested power between the Electric Motor (EM) and the Internal Combustion Engine (ICE). This is important because no prior work addresses this problem with a controller that can adjust its decisions to the driving pattern. We propose to use two reinforcement learning agents at two levels of abstraction. The first agent selects the best-suited low-level controller (the second agent) based on the overall pattern of the drive cycle in the near past and future, i.e., urban, highway, or harsh. The agent selected by the high-level controller (the first agent) then decides how to distribute the demanded power between the EM and the ICE.
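The equivalence-factor adjustment can be sketched as follows. In ECMS-type strategies, the instantaneous cost is fuel power plus battery power weighted by the equivalence factor; the adjustment below scales that factor by the predicted net electrical energy demand over a finite window. The specific linear form, the gain `k`, and all argument names are assumptions for illustration; the thesis proposes three variations of such an adjustment.

```python
def ecms_cost(p_fuel, p_batt, s):
    """Instantaneous equivalent consumption: fuel power [W] plus
    battery power [W] weighted by the equivalence factor s."""
    return p_fuel + s * p_batt

def adjusted_equivalence_factor(s_nominal, e_demand_pred, e_regen_pred,
                                e_ref, k=0.5):
    """Scale the nominal AECMS equivalence factor by the predicted
    net electrical energy over a finite window (hypothetical form):
    when predicted demand exceeds predicted regenerative energy,
    electrical energy becomes more expensive, discouraging EM use.
    e_ref is an assumed normalization energy; k an assumed gain."""
    net = (e_demand_pred - e_regen_pred) / e_ref
    return s_nominal * (1.0 + k * net)
```

A window with little expected regenerative braking thus raises the equivalence factor and shifts the power split toward the ICE, which is the qualitative behavior the predicted-conditions adjustment aims for.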
We found that, by carefully designing the training scheme, the performance of this data-driven controller can be improved effectively. Simulation results show up to a 6% improvement in fuel economy compared to AECMS.
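The two-level structure described above can be sketched as a high-level decision that classifies the recent drive-cycle pattern and a low-level policy, one per profile, that splits the demanded power. The threshold-based classifier and the fixed EM shares below are hypothetical stand-ins for the two trained reinforcement learning agents.

```python
PROFILES = ("urban", "highway", "harsh")

def select_profile(mean_speed, speed_std):
    """High-level decision (hypothetical thresholds): classify the
    recent drive-cycle pattern from mean speed [m/s] and its standard
    deviation; the thesis trains an RL agent for this selection."""
    if speed_std > 8.0:
        return "harsh"
    return "highway" if mean_speed > 18.0 else "urban"

# Assumed low-level policies: fraction of demanded power sent to the EM.
# In the thesis each profile has its own trained RL agent instead.
EM_SHARE = {"urban": 0.7, "highway": 0.3, "harsh": 0.5}

def split_power(p_demand, profile):
    """Low-level decision: distribute demanded power [kW] between the
    Electric Motor and the Internal Combustion Engine."""
    p_em = EM_SHARE[profile] * p_demand
    return p_em, p_demand - p_em
```

The point of the hierarchy is that the low-level split is conditioned on the detected pattern, so the same power request is served differently in urban stop-and-go traffic than on a highway cruise.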