Intelligent Adaptive Control

The focus of this theoretical research is on developing novel control methodologies that robustly stabilize, and smoothly transition between, versatile dynamic behaviors of robotic and cyber-physical systems. Existing formal control methods built on rigorous mathematical theories provide an accurate means of designing feedback controllers and analyzing the dynamical properties of a system. However, classical controllers struggle with the increasing complexity of modern robotic systems: they typically stabilize the system around a pre-specified equilibrium point or trajectory and cannot adjust its behavior based on the available feedback information. This limitation is particularly acute in robotics, where tasks require dynamic adaptation to the environment during operation. To bridge this gap, we develop algorithmic approaches that combine high-level feedback planning with low-level controllers. Specifically, we harness modern optimization techniques and machine learning to design adaptive feedback control policies that improve system performance and robustness. We rigorously develop the mathematical foundations of this control methodology and experimentally demonstrate dynamic, versatile motions on a variety of novel robotic platforms, achieving performance beyond what classical methods alone permit. The theoretical advances gained through these goals will be deployed across robotic, nonlinear, cyber-physical, and autonomous systems.
Because of the strong connection with hardware implementations, the results have the potential to profoundly impact the next generation of robotic technology, including humanoid robots for space exploration and search and rescue missions, increased autonomy in robotics, and wearable robotic devices that improve the quality of life of a growing mobility-impaired population.
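The hierarchy described above, a learned high-level policy that replans setpoints from feedback while a classical low-level controller tracks them, can be sketched in a minimal form. This is an illustrative assumption only: the class and function names below are hypothetical, the "policy" is a random linear map standing in for a trained network, and the low-level loop is a plain PD controller rather than the methods developed in the publications that follow.

```python
import numpy as np

class HighLevelPolicy:
    """Hypothetical learned policy: maps a reduced-order robot state
    to desired joint setpoints. A random linear map stands in here
    for a trained neural-network policy (an illustrative assumption)."""
    def __init__(self, state_dim, num_joints, seed=0):
        rng = np.random.default_rng(seed)
        self.W = 0.1 * rng.standard_normal((num_joints, state_dim))

    def __call__(self, state):
        # Replan desired joint positions from the current feedback state.
        return self.W @ state

def pd_torque(q, dq, q_des, kp=50.0, kd=2.0):
    """Low-level PD tracking controller: tau = Kp*(q_des - q) - Kd*dq."""
    return kp * (q_des - q) - kd * dq

# One control step of the hierarchy: the high-level policy produces
# setpoints from feedback; the low-level loop turns them into torques.
state = np.array([0.1, -0.05, 0.02, 0.0])  # e.g., torso pose/velocity
q = np.zeros(3)                            # current joint angles
dq = np.zeros(3)                           # current joint velocities

policy = HighLevelPolicy(state_dim=4, num_joints=3)
q_des = policy(state)
tau = pd_torque(q, dq, q_des)
```

In a real deployment the outer policy would run at a lower rate than the inner tracking loop, which is what lets a learned planner adapt behavior online while a model-based controller guarantees tracking between replanning steps.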

Feedback Control Policy Design for 3D Bipedal Locomotion using Reinforcement Learning

Related Publications

Castillo, G., Weng, B., Hereid, A. and Zhang, W.
Hybrid Zero Dynamics Inspired Feedback Control Policy Design for 3D Bipedal Locomotion using Reinforcement Learning
Submitted to IEEE International Conference on Robotics and Automation (ICRA), 2020

[arXiv] [Video]

Castillo, G., Weng, B., Hereid, A. and Zhang, W.
Reinforcement Learning Meets Hybrid Zero Dynamics: A Case Study for RABBIT
IEEE International Conference on Robotics and Automation (ICRA), 2019

[arXiv] [Video]

Chen, Y., Hereid, A., Peng, H. and Grizzle, J.
Enhancing the Performance of a Safe Controller via Supervised Learning for Truck Lateral Control
ASME Journal of Dynamic Systems, Measurement, and Control, 2019


Gong, Y., Hartley, R., Da, X., Hereid, A., Harib, O., Huang, J.-K. and Grizzle, J.
Feedback Control of a Cassie Bipedal Robot: Walking, Standing, and Riding a Segway
Annual American Control Conference (ACC), 2019

[arXiv] [Video]

Harib, O., Hereid, A., Agrawal, A., Gurriet, T., Finet, S., Boeris, G., Duburcq, A., Mungai, M. E., Masselin, M., Ames, A. D., Sreenath, K. and Grizzle, J.
Feedback Control of an Exoskeleton for Paraplegics: Toward Robustly Stable Hands-free Dynamic Walking
IEEE Control Systems Magazine (CSM), 2018, Vol. 38(6), pp. 61-87

[DOI] [Video]