Imagine a system with a property that we can measure, and now imagine that this property evolves with time. We call such a system a dynamical system. Many things can be studied as dynamical systems: a car (position and velocity), a robot (joint angles and velocities), the power network (voltage and frequency), the metabolic network (metabolite concentrations), etc. In control theory, we are interested in understanding how to analyze and control the evolution of these systems so that they have desirable properties: the car does not crash, the voltage is kept within limits, the robot moves to the desired position, etc. During my Ph.D., I studied how to effectively design control strategies for large sparse networks. Lately, I have revived my fascination for natural language, and I am very interested in language technologies. In particular, I study (i) how to use natural language to design standard control strategies (such as those for robots or cars), and (ii) how language algorithms can be viewed as dynamical systems, so that control theory can help us improve their design.
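To make the idea concrete, here is a minimal sketch of a dynamical system in the sense above: a car on a line, with position and velocity as the measured properties and acceleration as the input that shapes how they evolve. The matrices and time step are illustrative, not from any specific system I study.

```python
import numpy as np

# Toy dynamical system: a car moving along a line.
# State x = [position, velocity]; input u = acceleration.
# Discrete-time dynamics: x_{k+1} = A x_k + B u_k, with time step dt.
dt = 0.1
A = np.array([[1.0, dt],
              [0.0, 1.0]])
B = np.array([[0.0],
              [dt]])

def step(x, u):
    """Advance the state one time step."""
    return A @ x + B @ u

x = np.array([[0.0], [0.0]])        # start at rest at the origin
for _ in range(10):
    x = step(x, np.array([[1.0]]))  # constant acceleration of 1 m/s^2

print(x.ravel())  # position and velocity after 1 second
```

Control theory then asks how to choose the input `u` over time so that the evolving state behaves as desired.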

Language to Control

Current principled strategies for controlling dynamical systems require describing the problem precisely in mathematical terms. This demands trained and skilled engineers to design the appropriate commands, which creates a communication gap between laypeople and technology. Ideally, we would speak to our machines to control their behavior in the same way that we talk to each other and influence one another. However, relying solely on language algorithms to do this could be a dangerous route given their current limitations. My goal is to interface principled control strategies with language algorithms, allowing us to talk to our machines with the flexibility and accessibility of natural language while still benefiting from the theoretical guarantees and explainable behavior of classical control techniques. We are exploring these ideas for robotic applications (check out NARRATE!) among others (coming soon)!
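As a cartoon of this interface, the sketch below lets a (hypothetical) language front end extract a setpoint from a free-form command, while a classical proportional controller, with its well-understood behavior, does the actual tracking. The command format, parser, and gain are illustrative assumptions, not the NARRATE pipeline.

```python
import re

def parse_command(text):
    """Hypothetical language front end: extract a target position (meters)
    from a simple English command. A real system would use an LLM here."""
    match = re.search(r"move to (-?\d+(?:\.\d+)?)", text.lower())
    if match is None:
        raise ValueError("command not understood; fall back to a safe default")
    return float(match.group(1))

def p_controller(x, setpoint, k_p=0.5):
    """Classical proportional control: input proportional to tracking error."""
    return k_p * (setpoint - x)

target = parse_command("Please move to 2.0 meters")
x = 0.0
for _ in range(50):
    x = x + p_controller(x, target)  # simple first-order closed loop
print(round(x, 3))  # the state converges to the commanded setpoint
```

The point of the split is that the language layer only chooses *what* to do; *how* it is done remains governed by a controller we can analyze.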

Control for Language

Language algorithms are currently treated as black boxes, leading to a significant lack of understanding of their behavior. Yet a language algorithm is also a dynamical system: each token (word) it produces can be seen as a property that changes with time. Can we use control-theoretic tools to enhance our understanding of these systems? Can we study them within a principled mathematical framework? Ultimately, can we gain better control over their outputs? I aim to answer these questions in two ways. On one hand, I work with novel foundation model architectures such as state-space models (a native framework for control theorists) to provide insights into their overall behavior and relevant components (check out some of our work!). On the other hand, I apply classical control techniques to standard architectures to steer their behavior toward a more desirable one, such as reducing toxicity (check out our latest work!).
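A toy illustration of the "generation as a dynamical system" view: sampling the next token is the state update, and an additive intervention on the logits acts as a control input that steers generation away from an undesired token. The vocabulary, logits, and intervention below are purely illustrative, not taken from any of our papers.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["good", "fine", "bad"]
base_logits = np.array([1.0, 1.0, 2.0])  # uncontrolled model favors "bad"

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def generate(logits, n=1000):
    """Sample n tokens from the distribution induced by the logits."""
    p = softmax(logits)
    return rng.choice(len(vocab), size=n, p=p)

control = np.array([0.0, 0.0, -5.0])  # control input: penalize the undesired token
steered = generate(base_logits + control)
frac_bad = np.mean(steered == 2)
print(frac_bad)  # the undesired token becomes rare after the intervention
```

Framed this way, questions like "how large must the intervention be?" or "what can it reach?" become control questions about a stochastic dynamical system.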

Distributed Optimal Control

The internet, the power network, a fleet of autonomous cars, or bacterial metabolism: large-scale networks are everywhere! We would like to design strategies to control their behavior in the same way that we do for centralized systems (a single car, an aircraft, etc.). However, these networks are composed of many agents (subsystems) that can only communicate with a few neighbors. This sparsity has traditionally been seen as a major challenge, since it hinders global communication exchanges. Contrary to prior work, my Ph.D. research capitalizes on this sparsity: we restrict each agent in the network to communicate only with a small neighborhood of agents, as opposed to the entire network. By integrating ideas from control theory, optimization, and learning into this framing, we developed a new set of algorithms and theoretical results to optimally control distributed bio- and cyber-physical systems for safety-critical applications.

Specifically, we created a Distributed and Localized Model Predictive Control (DLMPC) framework, which offers numerous advantages! Notably, both the controller synthesis and implementation are fully distributed and scalable to arbitrary network sizes (see details here). We also provide theoretical guarantees for asymptotic stability and recursive feasibility without strong assumptions, ensuring minimal conservativeness, no added computational burden, and scalable computations (see details here). Our findings indicate that restricting communication to a small neighborhood can yield significant computational benefits. This leads us to an important question: what are the trade-offs? Are the resulting policies less optimal than those that allow communication across the entire network? We demonstrate that for sparse systems, including local communication constraints can be as optimal as centralized MPC while enhancing scalability (see details here). This result paves the way for broader applications of DLMPC.
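The sketch below illustrates the communication restriction at the heart of this approach (not the DLMPC algorithm itself): agents sit on a chain, and each agent's feedback uses only its own state and its immediate neighbors', so the feedback gain matrix is banded. The dynamics and gains are illustrative assumptions.

```python
import numpy as np

n = 6  # agents on a chain, each coupled to its immediate neighbors
A = np.eye(n) + 0.1 * (np.eye(n, k=1) + np.eye(n, k=-1))             # network dynamics
K = -(0.3 * np.eye(n) + 0.1 * (np.eye(n, k=1) + np.eye(n, k=-1)))    # localized feedback

# Locality: each row of K has at most 3 nonzeros -- agent i and neighbors i-1, i+1 --
# so computing u = K x never requires communication beyond the local neighborhood.
assert all(np.count_nonzero(K[i]) <= 3 for i in range(n))

x = np.ones(n)           # initial disturbance on every agent
for _ in range(100):
    x = (A + K) @ x      # closed loop under the localized controller
print(np.linalg.norm(x))  # the disturbance decays toward zero
```

The question my Ph.D. work answers is when such locally-informed policies lose nothing relative to a controller with access to the whole network.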
Besides distributed systems, DLMPC can replace standard MPC in centralized systems to accelerate computations. Since current GPUs also operate with local communication exchanges, DLMPC computations can be naturally parallelized on GPUs (see details here). Lastly, we showed that our framework is applicable even when no model of the system is available and only past trajectories can be used (just like language!). In this scenario, each agent only needs to gather data from its local neighborhood, making the approach suitable for large-scale networks (see details here). Together, these different frontiers can be layered to create effective and scalable control architectures for real-world large-scale systems. My Ph.D. work shows how all of this can be achieved by leveraging the intrinsic and ubiquitous sparsity of large-scale distributed systems in technology and biology.
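A common building block in this model-free setting is to organize past trajectories into Hankel matrices, whose columns serve as a data-driven stand-in for the unknown model. The scalar system and dimensions below are an illustrative sketch of this construction, not our data-driven DLMPC method.

```python
import numpy as np

def hankel(w, L):
    """Stack a trajectory w of length T into a depth-L Hankel matrix:
    each column is a sliding window of L consecutive samples."""
    T = len(w)
    return np.array([w[i:i + L] for i in range(T - L + 1)]).T

# Collect data from an "unknown" scalar system x_{k+1} = 0.9 x_k + u_k
# (the controller never sees the 0.9 -- only the recorded trajectories).
rng = np.random.default_rng(1)
u = rng.standard_normal(30)
x = np.zeros(31)
for k in range(30):
    x[k + 1] = 0.9 * x[k] + u[k]

H_u = hankel(u, 4)        # each column: a length-4 input window
H_x = hankel(x[:30], 4)   # the matching state windows
print(H_u.shape, H_x.shape)
```

In the localized version, each agent builds such matrices only from its own neighborhood's data, which is what keeps the approach scalable.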