1. Scalable neural network verification
Neural network (NN) verification considers the problem of certifying whether the output of an NN satisfies certain properties for a given input set. This is a fundamental problem in certifying the robustness of NNs and the safety of NN-controlled systems.
In [J2], we use the Alternating Direction Method of Multipliers (ADMM) to derive an NN verification method, DeepSplit, that is scalable, modular, amenable to GPU acceleration, and equipped with fast theoretical convergence guarantees.
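The operator-splitting pattern behind ADMM can be illustrated on a toy problem. The sketch below is not DeepSplit itself: it applies ADMM to projecting a point onto a box, with the target vector, box bounds, penalty parameter, and iteration count all chosen for illustration.

```python
import numpy as np

# Toy ADMM illustration (not DeepSplit): minimize ||x - a||^2 subject to
# x in the box [lo, hi], split as f(x) = ||x - a||^2 plus the box
# indicator g(z), with the consensus constraint x = z.
a = np.array([2.0, -3.0, 0.5])   # illustrative target point
lo, hi = -1.0, 1.0               # illustrative box bounds
rho = 1.0                        # ADMM penalty parameter

x = np.zeros_like(a)
z = np.zeros_like(a)
u = np.zeros_like(a)             # scaled dual variable
for _ in range(300):
    # x-update: minimize ||x - a||^2 + (rho/2)||x - z + u||^2 in closed form
    x = (2 * a + rho * (z - u)) / (2 + rho)
    # z-update: project x + u onto the box (prox of the indicator function)
    z = np.clip(x + u, lo, hi)
    # dual update on the consensus residual x - z
    u += x - z

print(z)  # → approximately [1.0, -1.0, 0.5], the projection of a onto the box
```

Each sub-update is cheap and separable, which is the property that makes ADMM-based splittings attractive for large networks and GPU execution.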
2. Stability analysis of neural network-enabled systems
Synthesizing control certificate functions such as Lyapunov functions for NN-controlled systems is more involved than NN verification; it calls for jointly designing the stability or reachability analysis framework and the underlying NN verification method.
In [C6], we propose a counterexample-guided Lyapunov function synthesis framework for linear hybrid systems with finite-step termination guarantees. In [C7], this method is extended to automate stability analysis of general nonlinear systems by approximating their dynamics with NNs and quantifying the approximation error.
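The counterexample-guided loop alternates between a learner that proposes a candidate certificate and a verifier that searches for states where the certificate fails. The sketch below shows this pattern on a toy problem, not the method of [C6]/[C7]: a diagonal quadratic Lyapunov function for a hand-picked stable linear system, with a grid-based verifier and an LP learner; the system matrix, template, and tolerances are all illustrative.

```python
import numpy as np
from scipy.optimize import linprog

# Toy CEGIS-style search for a diagonal Lyapunov function
# V(x) = p1*x1^2 + p2*x2^2 certifying stability of x+ = A x.
A = np.array([[0.9, 0.5], [0.0, 0.4]])   # illustrative stable system
eps = 1e-3                                # required decrease margin

def violation(p, x):
    """V(Ax) - V(x) + eps*||x||^2; nonpositive means V decreases at x."""
    Ax = A @ x
    return p @ (Ax**2 - x**2) + eps * (x @ x)

# Verifier: check the candidate on a fine grid of unit directions
# (half circle suffices since all quantities are quadratic in x).
thetas = np.linspace(0.0, np.pi, 720, endpoint=False)
grid = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)

samples = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]  # initial counterexamples
p = None
for _ in range(800):
    # Learner: LP in p enforcing the decrease condition on all counterexamples.
    A_ub = np.array([(A @ x)**2 - x**2 for x in samples])
    b_ub = np.array([-eps * (x @ x) for x in samples])
    res = linprog(c=[1.0, 1.0], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(1.0, None), (1.0, None)])
    assert res.success, "learner LP infeasible"
    p = res.x
    # Verifier: worst violation over the grid; terminate if there is none.
    viols = np.array([violation(p, x) for x in grid])
    worst = int(np.argmax(viols))
    if viols[worst] <= 0:
        break
    samples.append(grid[worst])   # add the counterexample and refine

print(p)
```

The finite-step termination guarantee in [C6] is established for the actual method; here termination simply follows from the finite verification grid, since each added counterexample is satisfied by every subsequent candidate.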
3. Robust model predictive control through system level synthesis
Robust model predictive control (MPC) is challenging when both model uncertainty and additive disturbances are considered, as is the case in many real-world applications, and it is hard to obtain a good trade-off between conservatism and computational complexity.
We propose a novel robust MPC method based on System Level Synthesis (SLS) that searches for robust feedback controllers in the space of closed-loop system responses. This parameterization reveals additional structural properties of the robust control problem that can be exploited for high-performance controller design.
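The closed-loop response parameterization can be checked numerically on a toy example: for linear state feedback u = K x over a finite horizon, the system responses mapping the stacked disturbances to states and inputs satisfy the standard SLS achievability constraint. The double-integrator model, gain K, and horizon below are illustrative and not taken from [P1].

```python
import numpy as np

# Toy double-integrator with an illustrative stabilizing feedback gain K.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
K = np.array([[-5.0, -3.0]])
T = 5                               # horizon length
n, m = A.shape[0], B.shape[1]
Acl = A + B @ K                     # closed-loop dynamics

# Closed-loop system responses: x = Phi_x w and u = Phi_u w, where
# w = (x_0, w_0, ..., w_{T-1}) stacks the initial state and disturbances.
Phi_x = np.zeros((n * (T + 1), n * (T + 1)))
Phi_u = np.zeros((m * (T + 1), n * (T + 1)))
for k in range(T + 1):
    for j in range(k + 1):
        blk = np.linalg.matrix_power(Acl, k - j)
        Phi_x[k*n:(k+1)*n, j*n:(j+1)*n] = blk
        Phi_u[k*m:(k+1)*m, j*n:(j+1)*n] = K @ blk

# Block-downshift operator Z and block-diagonal lifted dynamics.
Z = np.kron(np.eye(T + 1, k=-1), np.eye(n))
Abar = np.kron(np.eye(T + 1), A)
Bbar = np.kron(np.eye(T + 1), B)

# SLS achievability constraint: (I - Z Abar) Phi_x - Z Bbar Phi_u = I.
lhs = (np.eye(n * (T + 1)) - Z @ Abar) @ Phi_x - Z @ Bbar @ Phi_u
print(np.allclose(lhs, np.eye(n * (T + 1))))  # → True
```

In SLS-based MPC, the optimization is performed directly over (Phi_x, Phi_u) subject to this affine constraint, rather than over controller gains, which is what exposes the additional structure mentioned above.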
In [P1], we compare the proposed method, SLS MPC, with six other representative robust MPC methods through extensive simulations. Numerical examples show that SLS MPC significantly reduces conservatism compared with all baselines.