Plenary talks

Anuradha Annaswamy
Active-adaptive Control Laboratory
Department of Mechanical Engineering
Massachusetts Institute of Technology

Title: Lessons from Adaptive Control: Towards Real-time Machine Learning.
Abstract: The fields of adaptive control and machine learning have evolved in parallel over the past few decades, with significant overlap in goals, problem statements, and tools. Machine learning as a field has focused on computer-based systems that improve through experience. Often the process of learning is encapsulated in a parameterized model such as a neural network, whose weights are trained in order to approximate a function. The field of adaptive control, on the other hand, has focused on the process of controlling engineering systems in order to accomplish regulation and tracking of critical variables of interest. Learning is embedded in this process via online estimation of the underlying parameters. Whether in machine learning or adaptive control, this learning occurs through the use of input-output data. In both cases, the main algorithm used for updating the parameters is a gradient descent-like algorithm. The related tools of analysis, convergence, and robustness in the two fields share a tremendous amount of similarity. As the scope of problems in both fields grows, so do the associated complexity and challenges. In order to address learning and decision-making in real time, it is essential to understand these similarities and connections so as to develop new methods, tools, and algorithms.
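The gradient descent-like update common to both fields can be illustrated with a minimal sketch (not from the talk; all names and gains here are illustrative): for a linearly parameterized model y = thetaᵀ·phi(x), the same rule acts as a gradient adaptive law in control and as stochastic gradient descent on a squared loss in machine learning.

```python
import numpy as np

# Illustrative online gradient update for a linearly parameterized model
# y = theta^T phi(x). The rule theta_hat -= gamma * e * phi is both the
# classic gradient adaptive law and SGD on the squared prediction error.

rng = np.random.default_rng(0)
theta_true = np.array([2.0, -1.0])   # unknown parameters to be learned
theta_hat = np.zeros(2)              # online estimate
gamma = 0.1                          # adaptation gain / learning rate

for _ in range(2000):
    x = rng.uniform(-1.0, 1.0)
    phi = np.array([x, x**2])        # regressor (features)
    y = theta_true @ phi             # measured output (noise-free here)
    e = theta_hat @ phi - y          # prediction error
    theta_hat -= gamma * e * phi     # gradient step on 0.5 * e**2

print(np.round(theta_hat, 3))        # approaches theta_true
```

Because the regressor is persistently exciting, the estimate converges to the true parameters; this excitation condition is one of the shared analysis tools the abstract alludes to.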
BIO: Anuradha Annaswamy is the Founder and Director of the Active-Adaptive Control Laboratory in the Department of Mechanical Engineering at MIT. Her research interests span adaptive control theory and its applications to aerospace, automotive, propulsion, and energy systems, as well as cyber-physical systems such as smart grids, smart cities, and smart infrastructures. She has received best-paper awards (Axelby; CSM), Distinguished Member and Distinguished Lecturer awards from the IEEE Control Systems Society (CSS), and a Presidential Young Investigator award from the NSF. She is a Fellow of the IEEE and of the International Federation of Automatic Control (IFAC), and the recipient of the Distinguished Alumni award from the Indian Institute of Science for 2021. Anu Annaswamy is the author of a graduate textbook on adaptive control, co-editor of two vision documents on smart grids as well as two editions of the Impact of Control Technology report, and a co-author of a 2021 National Academy of Sciences committee report on the Future of Electric Power in the United States. She served as President of the CSS in 2020 and has been serving as a Faculty Lead in the Electric Power Systems workstream of the MIT Future Energy Systems Center since September 2021.


Miroslav Krstic
Alspach Endowed Chair in Dynamic Systems and Control
Distinguished Professor of Mechanical and Aerospace Engineering
University of California San Diego (UCSD), CA, USA

Title: Prescribed-Time Extremum Seeking.
Abstract: This year marks the centennial of the 1922 invention of extremum seeking (ES), currently one of the most active areas of learning-based control, or model-free adaptive control. It has also been exactly a quarter century since the resurrection of this method through its proof of convergence in 1997. In this lecture I will present new results on accelerating the convergence of ES algorithms from exponential convergence to convergence in user-prescribed finite time. The subject of stabilization in prescribed time emerged in 2017 as an interesting alternative to sliding mode control (SMC) for achieving convergence in a time that is independent of the initial condition, using a time-varying feedback gain that grows to infinity as time approaches the terminal (prescribed) time. Such unbounded gains, which multiply a state that goes to zero and thus keep the control input bounded, are common in optimal control with a hard terminal constraint, such as the classical proportional navigation control law in aerospace applications like target intercept. I will present results, achieved over the past year, 2021, by two of my students, Cemal Tugrul Yilmaz and Velimir Todorovski, on extending prescribed-time stabilization to prescribed-time extremum seeking. Todorovski solves the problem of source seeking for mobile robots in GPS-denied environments. Yilmaz solves the problem of real-time optimization under large input delays and in the presence of PDE (partial differential equation) dynamics. Their designs are model-free and, most importantly, achieve convergence/optimality in a user-prescribed interval of time, independent of initial conditions.
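For readers unfamiliar with extremum seeking, the classical (exponential-rate) scheme that the prescribed-time results accelerate can be sketched as follows. This is an illustrative toy, not the designs from the talk: the cost map, gains, and dither parameters are all invented for the example.

```python
import numpy as np

# Minimal classical extremum-seeking sketch for a static unknown map
# J(theta) = (theta - 2)^2 with optimum theta* = 2. A sinusoidal dither
# perturbs the input, and demodulating the measured cost with the same
# sinusoid yields a gradient estimate, since sin(w*t)*J has average
# value (a/2) * dJ/dtheta over a dither period.

dt = 0.01
a, omega, k = 0.2, 10.0, 2.0       # dither amplitude, frequency, gain
theta_hat = 0.0                    # initial estimate of the optimizer

for i in range(20000):
    t = i * dt
    theta = theta_hat + a * np.sin(omega * t)   # perturbed input
    J = (theta - 2.0) ** 2                      # measured cost only
    theta_hat += -k * np.sin(omega * t) * J * dt  # demodulated gradient step

print(round(theta_hat, 2))   # near 2.0
```

The averaged dynamics contract at an exponential rate proportional to k*a; the prescribed-time designs in the lecture replace such constant gains with time-varying gains so that convergence completes in a user-chosen terminal time.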
BIO: Miroslav Krstic studies adaptive control, extremum seeking, control of PDE systems including flows, input-delay compensation, and stochastic nonlinear control. In addition to about 400 journal papers, he has co-authored 16 books, including "Nonlinear and Adaptive Control Design" (Wiley, 1995), "Real-Time Optimization by Extremum-Seeking Control" (Wiley, 2003), "Adaptive Control of Parabolic PDEs" (Princeton University Press, 2010), "Model-Free Stabilization by Extremum Seeking" (Springer, 2017), and, most recently, "Delay-Adaptive Linear Control" (Princeton, 2020). Since 2012 he has divided his time between his research and serving as Senior Associate Vice Chancellor for Research at UC San Diego. Krstic is a recipient of the Bellman Award, the SIAM Reid Prize, the ASME Oldenburger Medal, and a dozen other awards. He is a foreign member of the Serbian Academy of Sciences and Arts and a Fellow of IEEE, IFAC, ASME, SIAM, AAAS, IET (UK), and AIAA (Associate Fellow). He is the Editor-in-Chief of Systems & Control Letters, and in Automatica he oversees the editorial areas of adaptive systems and distributed parameter systems.

Claire J. Tomlin
Charles A. Desoer Chair in the College of Engineering
Professor, Electrical Engineering and Computer Sciences
UC Berkeley, Berkeley CA, USA

Title: Safe Learning in Control.
Abstract: In many applications of autonomy in robotics, guarantees that constraints are satisfied throughout the learning process are paramount. We present a controller-synthesis technique based on the computation of reachable sets, using optimal control and game theory. We then present methods for combining reachability with learning-based methods, to enable performance improvement while maintaining safety and to move towards safe robot control with learned models of the dynamics and the environment. We will discuss different models of interaction with other agents. Finally, we will illustrate these safe-learning methods on robotic platforms at Berkeley, including demonstrations of motion planning around people and navigation in a priori unknown environments.
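The reachable-set computation at the core of such safety guarantees can be illustrated on a toy one-dimensional example (this is a crude grid-based stand-in for the Hamilton-Jacobi reachability solvers used in practice; the system and numbers are invented for illustration):

```python
import numpy as np

# Toy backward-reachability computation for a 1-D integrator dx/dt = u,
# |u| <= 1: which states can reach the target set |x| <= 0.5 within
# time T = 1? Analytically, the answer is |x| <= 1.5. We propagate a
# signed-distance value function backward by dynamic programming; the
# reachable set is where the value is non-positive.

xs = np.linspace(-3, 3, 601)          # state grid (spacing 0.01)
dt, T = 0.01, 1.0
V = np.abs(xs) - 0.5                  # signed distance to target set

for _ in range(int(T / dt)):
    # best one-step move under |u| <= 1 (evaluate u in {-1, 0, +1})
    shifted = [np.interp(xs + u * dt, xs, V) for u in (-1.0, 0.0, 1.0)]
    V = np.minimum(V, np.min(shifted, axis=0))

reach = xs[V <= 0]
print(round(reach.min(), 2), round(reach.max(), 2))   # about -1.5, 1.5
```

In the game-theoretic setting mentioned in the abstract, a second, adversarial input (a disturbance or another agent) maximizes where the control minimizes, which turns this dynamic program into a two-player differential game.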
BIO: Claire Tomlin is a Professor and Chair of the Department of Electrical Engineering and Computer Sciences at the University of California at Berkeley. Claire received her BASc in EE from the University of Waterloo in 1992, her MSc in EE from Imperial College London in 1993, and her PhD in EECS from Berkeley in 1998. She held the positions of Assistant, Associate, and Full Professor at Stanford from 1998 to 2007, and in 2005 joined Berkeley. Claire works in hybrid systems and control, and integrates machine learning methods with control-theoretic methods in the field of safe learning. She works on applications in air traffic and unmanned aerial vehicle systems. Claire is a MacArthur Foundation Fellow and an IFAC, IEEE, and AIMBE Fellow. She was awarded the Donald P. Eckman Award of the American Automatic Control Council in 2003 and an Honorary Doctorate from KTH in 2016, and in 2017 she won the IEEE Transportation Technologies Award. In 2019, she was elected to the National Academy of Engineering and the American Academy of Arts and Sciences.