Computer systems today are highly complex. Most are intrinsically distributed and dynamically reconfigurable (Big Data, Internet of Things, Cloud Computing, Smart Cities), concurrent at both the physical level of CPU cores and the logical level of threads, and engage in complex interactions not only between their own hardware and software parts, but also with their networking environment. Moreover, we expect the size of such interdependent systems to grow by several orders of magnitude, which raises technical problems that are not yet fully understood.
The research of this group focuses on the construction of very large heterogeneous systems, consisting of a myriad of components of a different nature, developed in different ways by different producers. This level of complexity raises an important technical problem: how do we ensure that a system consisting of a huge multitude of heterogeneous components actually provides the expected services? Because such complex systems control many aspects of human life (e.g. airline traffic, power grids, banking and social networks), it is important to ensure their correctness both a priori, by design, and a posteriori, by verification.
Understanding large component-based systems is possible only thanks to their modularity: a system is hierarchically organized as an architecture of components whose internal details are encapsulated behind simple, well-defined interfaces. The modularity of a component-based computer system is instrumental in performing updates (replacing one or more components with newer versions having the same interfaces) and reconfigurations (changing the coordinating architecture by adding new components, removing obsolete versions or even changing the topology of interactions).
We view components as units that combine discrete with continuous behavior and can be described using two types of extensions of finite automata: discrete models with a countably infinite number of states (counter automata and automata over infinite alphabets) and hybrid models (timed and hybrid automata). The theory of timed automata, finite automata augmented with clock variables that measure the time the automaton spends in certain states, has proved very effective in modeling a wide range of phenomena in domains as diverse as real-time programs, scheduling in manufacturing systems and digital circuit timing analysis. Hybrid automata take us further from discrete systems into the domain of continuous dynamical systems: they combine discrete state changes with continuous variables whose evolutions are described by differential equations. Much of our work in this domain is concerned with extending verification methodology to such continuous and hybrid settings, a task far from trivial due to the completely different nature of the state space and trajectories of continuous systems.
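To make the timed-automaton model concrete, here is a minimal, illustrative sketch in Python of an automaton with one clock, where discrete transitions are guarded by predicates on the clock value and may reset it. The class and the lamp example are hypothetical names chosen for illustration; this is not the API of any particular tool.

```python
class TimedAutomaton:
    """A timed automaton with a single clock x.

    Transitions are tuples (source, action, guard, reset, target), where
    guard is a predicate on the current clock value and reset indicates
    whether x is set back to 0 when the transition fires.
    """

    def __init__(self, initial, transitions):
        self.location = initial
        self.clock = 0.0
        self.transitions = transitions

    def delay(self, d):
        """Let d time units elapse: the clock grows, the location stays."""
        self.clock += d

    def step(self, action):
        """Take a discrete transition whose guard holds on the current clock."""
        for (src, act, guard, reset, tgt) in self.transitions:
            if src == self.location and act == action and guard(self.clock):
                self.location = tgt
                if reset:
                    self.clock = 0.0
                return True
        return False  # no enabled transition for this action


# A classic illustration: a lamp that becomes "bright" only if the switch
# is pressed a second time within 3 time units, and "dim" otherwise.
lamp = TimedAutomaton("off", [
    ("off",    "press", lambda x: True,     True,  "low"),
    ("low",    "press", lambda x: x < 3.0,  False, "bright"),
    ("low",    "press", lambda x: x >= 3.0, False, "dim"),
    ("bright", "press", lambda x: True,     False, "off"),
    ("dim",    "press", lambda x: True,     False, "off"),
])

lamp.step("press")   # off -> low, clock reset to 0
lamp.delay(1.5)      # 1.5 time units pass
lamp.step("press")   # clock is 1.5 < 3, so low -> bright
print(lamp.location)  # -> bright
```

The example shows why timed automata are more expressive than plain finite automata: the same action ("press" in location "low") leads to different target states depending on how much time has elapsed, which a purely discrete model cannot express.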
Designing and verifying such complex systems requires compositional methods (local and assume/rely-guarantee reasoning), push-button techniques (automated reasoning, model checking, etc.), quantitative behavior-based methods (monitoring, testing, diagnostics) and rigorous implementation (code generation). The research conducted in this group relies on formal methods developed at VERIMAG over the last 30 years through a number of seminal contributions, such as model checking (Turing Award 2007), component-based design (the BIP framework), and the monitoring and verification of systems modeled by timed and hybrid automata. This group is a merger of the former RSD and TEMPO teams of VERIMAG.