Title: Lessons from Adaptive Control: Towards Real-time Machine Learning

Abstract: The fields of adaptive control and machine learning have evolved in parallel over the past few decades, with significant overlap in goals, problem statements, and tools. Machine learning as a field has focused on computer-based systems that improve through experience. Often, the process of learning is encapsulated in a parameterized model, such as a neural network, whose weights are trained in order to approximate a function. The field of adaptive control, on the other hand, has focused on controlling engineering systems so as to accomplish regulation and tracking of critical variables of interest. Learning is embedded in this process via online estimation of the underlying parameters. Whether in machine learning or adaptive control, this learning occurs through the use of input-output data, and in both cases the main algorithm used for updating the parameters is a gradient descent-like algorithm. The related tools of analysis, convergence, and robustness in the two fields share a tremendous amount of similarity. As the scope of problems in both fields grows, the associated complexity and challenges grow as well. In order to address learning and decision-making in real time, it is essential to understand these similarities and connections so as to develop new methods, tools, and algorithms.

BIO: Anuradha (Anu) Annaswamy is the Founder and Director of the Active-Adaptive Control Laboratory in the Department of Mechanical Engineering at MIT. Her research interests span adaptive control theory and its applications to aerospace, automotive, propulsion, and energy systems, as well as cyber-physical systems such as Smart Grids, Smart Cities, and Smart Infrastructures. She has received best paper awards (Axelby; CSM), Distinguished Member and Distinguished Lecturer awards from the IEEE Control Systems Society (CSS), and a Presidential Young Investigator award from NSF. She is a Fellow of the IEEE and of the International Federation of Automatic Control (IFAC), and the recipient of the Distinguished Alumni award from the Indian Institute of Science for 2021. She is the author of a graduate textbook on adaptive control, co-editor of two vision documents on smart grids as well as two editions of the Impact of Control Technology report, and a coauthor of a 2021 National Academy of Sciences committee report on the Future of Electric Power in the United States. She served as President of CSS in 2020 and has been serving as a Faculty Lead in the Electric Power Systems workstream of the MIT Future Energy Systems Center since September 2021.
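As a concrete illustration of the parallel drawn in the abstract, the sketch below compares a stochastic-gradient-descent update on a squared prediction error (the machine-learning view) with a discretized gradient-type adaptive law driven by the same error (the adaptive-control view). The scalar plant y = theta_true * u, the gains, and the function names are illustrative assumptions, not material from the talk.

```python
import numpy as np

theta_true = 2.0  # unknown parameter to be learned / estimated

def sgd_step(theta_hat, u, y, lr=0.1):
    """One gradient-descent step on the squared prediction error 0.5*(theta_hat*u - y)**2."""
    error = theta_hat * u - y
    return theta_hat - lr * error * u

def adaptive_law_step(theta_hat, u, y, gamma=1.0, dt=0.01):
    """One Euler step of the continuous-time gradient adaptation law
    d(theta_hat)/dt = -gamma * e * u, with prediction error e = theta_hat*u - y."""
    error = theta_hat * u - y
    return theta_hat - dt * gamma * error * u

theta_ml, theta_ac = 0.0, 0.0
rng = np.random.default_rng(0)
for _ in range(5000):
    u = rng.standard_normal()        # persistently exciting input-output data
    y = theta_true * u
    theta_ml = sgd_step(theta_ml, u, y)
    theta_ac = adaptive_law_step(theta_ac, u, y)

print(theta_ml, theta_ac)            # both estimates approach theta_true = 2.0
```

In both updates the parameter moves along the negative gradient of the instantaneous error, i.e., error times regressor; the difference is largely one of time scale and interpretation, which is the structural similarity the abstract refers to.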
Title: Prescribed-Time Extremum Seeking

Abstract: This year is the centennial of the 1922 invention of Extremum Seeking (ES), one of the currently most active areas of learning-based control, or model-free adaptive control. It has also been exactly a quarter century since the resurrection of this method through its proof of convergence in 1997. In this lecture I will present new results on accelerating ES algorithms from exponential convergence to convergence in user-prescribed finite time. The subject of stabilization in prescribed time emerged in 2017 as an interesting alternative to sliding mode control (SMC) for achieving convergence in a time that is independent of the initial condition, using a time-varying feedback gain which grows to infinity as time approaches the terminal (prescribed) time. Such unbounded gains, which multiply a state that goes to zero and thereby keep the control input bounded, are common in optimal control with a hard terminal constraint, such as the classical Proportional Navigation law used in aerospace applications like target intercept. I will present results, achieved over the past year (2021) by two of my students, Cemal Tugrul Yilmaz and Velimir Todorovski, on extending prescribed-time stabilization to prescribed-time extremum seeking. Todorovski solves the problem of source seeking for mobile robots in GPS-denied environments. Yilmaz solves the problem of real-time optimization under large input delays and in the presence of PDE (partial differential equation) dynamics. Their designs are model-free and, most importantly, achieve convergence/optimality in a user-prescribed interval of time, independent of initial conditions.

BIO: Miroslav Krstic studies adaptive control, extremum seeking, control of PDE systems including flows, input delay compensation, and stochastic nonlinear control. In addition to about 400 journal papers, he has co-authored 16 books, including “Nonlinear and Adaptive Control Design” (Wiley, 1995), “Real-Time Optimization by Extremum-Seeking Control” (Wiley, 2003), “Adaptive Control of Parabolic PDEs” (Princeton University Press, 2010), “Model-Free Stabilization by Extremum Seeking” (Springer, 2017), and, most recently, “Delay-Adaptive Linear Control” (Princeton, 2020). Since 2012 he has divided his time between his research and serving as Senior Associate Vice Chancellor for Research at UC San Diego. Krstic is a recipient of the Bellman Award, the SIAM Reid Prize, the ASME Oldenburger Medal, and a dozen other awards. He is a foreign member of the Serbian Academy of Sciences and Arts, a Fellow of IEEE, IFAC, ASME, SIAM, AAAS, and IET (UK), and an Associate Fellow of AIAA. He is the Editor-in-Chief of Systems & Control Letters. In Automatica, he oversees the editorial areas of adaptive systems and distributed parameter systems.
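The abstract's central mechanism, a time-varying gain that grows unbounded as the prescribed terminal time T is approached, can be illustrated with classical perturbation-based extremum seeking run on a dilated time scale. The sketch below is a minimal illustration under stated assumptions, not the designs of Yilmaz or Todorovski: standard ES is simulated in a dilated time tau, which maps back to original time via t = T*(1 - exp(-tau/T)), so exponential convergence in tau corresponds to convergence before the prescribed time T, at the price of an effective gain dtau/dt = T/(T - t) that blows up as t approaches T. The map J, the optimizer location, and all numerical values are assumptions chosen for illustration.

```python
import numpy as np

def J(theta):
    """Unknown static map to be maximized (assumed here); maximum 0 at theta = 3.
    The zero peak value keeps the DC content of the measurement small, so this
    sketch can omit the washout (high-pass) filter of standard ES schemes."""
    return -(theta - 3.0) ** 2

T = 2.0                        # user-prescribed terminal time (original time t)
a, omega, k = 0.1, 100.0, 0.4  # dither amplitude, dither frequency, adaptation gain
dtau = 1e-3                    # integration step in dilated time tau
theta_hat = 0.0

for step in range(40_000):     # tau runs from 0 to 40
    tau = step * dtau
    dither = a * np.sin(omega * tau)
    y = J(theta_hat + dither)                             # model-free measurement of J
    theta_hat += dtau * k * y * np.sin(omega * tau) / a   # gradient-estimate update

tau_final = 40_000 * dtau
t_final = T * (1.0 - np.exp(-tau_final / T))   # elapsed time on the original clock
gain_final = T / (T - t_final)                 # effective gain, unbounded as t -> T
print(theta_hat, t_final, gain_final)          # theta_hat near 3, t_final just below T = 2
```

Running the same loop directly in original time would use the growing gain T/(T - t) together with a chirped dither, which is closer in spirit to the prescribed-time designs discussed in the talk; the dilated-time form above is used only because it integrates cleanly with a fixed step.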
Title: Safe Learning in Control.