Stabilization Results for Critical Cases

Roger Brockett

In this talk I will touch on the following questions and provide a few answers, insofar as they pertain to critical cases for which linearization, in any form, is indecisive.

1. Is there an algorithmic approach to finding a stabilizing feedback control?
2. Can optimal control theory help in the design of feedback stabilization?
3. Does there exist a useful classification of such problems in terms of normal forms?
4. For a given system, what is the best rate of convergence that can be achieved with feedback?
5. Assuming there exists a feedback control giving asymptotic stability, do polynomial Liapunov functions exist and, if so, what is the lowest degree possible?
6. Is there a way to make simulation answer decisively whether or not the trajectory from a given initial condition goes to zero?

The results will include what I believe to be a new method for showing stability that does not involve Liapunov theory.
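To make the notion of a critical case concrete, here is one standard example, added for illustration rather than taken from the talk itself: the nonholonomic integrator,

\[
\dot{x} = u, \qquad \dot{y} = v, \qquad \dot{z} = x\,v - y\,u .
\]

Its linearization at the origin leaves the z-direction uncontrollable, so linearization is indecisive; moreover, Brockett's necessary condition rules out any continuous, time-invariant state feedback u(x, y, z), v(x, y, z) that makes the origin asymptotically stable, which is one reason such critical cases call for tools beyond linearization.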