Scalable model-checking of Global and Local properties of Collective Systems
Michele Loreti, University of Camerino
Collective Systems (CS) are composed of heterogeneous entities that interact with each other to reach local and global goals. These entities operate without any centralized control and must adapt their behaviour to changes in their environment. Modelling such systems is challenging and calls for appropriate, scalable analysis tools that can verify the properties of a proposed model before the system is implemented.
We present a novel logical framework, named GLoTL, that permits specifying properties of CS at both the global and the local level. The former refers to properties concerning the system as a whole; the latter concern the behaviour of a single agent operating in a specific context.
Algorithms are introduced to verify whether a given population of agents satisfies a GLoTL formula. In particular, we focus on a scalable approach that permits verifying these properties as the number of agents in the system increases. A simple scenario is used to evaluate the advantages of the proposed approach.
Clustering Spatio-Temporal Data using Temporal Logic Formulations
Laura Nenzi, University of Trieste
Spatially distributed systems, such as the Internet of Things and multi-agent cyber-physical setups, produce vast spatio-temporal datasets. While much focus has been on extracting logical properties from temporal data using Signal Temporal Logic (STL), the spatio-temporal dimension remains underexplored. We present innovative algorithms for unsupervised learning tailored for spatio-temporal data. Our approach automatically maps data to a parametric spatio-temporal logic, PSTREL. Utilizing an agglomerative clustering method, we align each cluster with a unique STREL formula. We ensure bounded STREL formulae generation with a novel decision-tree technique, expanding upon existing STL methodologies. Our approach's versatility is showcased across domains like urban transit and air quality monitoring.
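As a rough illustration of the agglomerative step, the sketch below runs plain single-linkage agglomerative clustering on one-dimensional feature values; this is an assumption-laden stand-in for the distance the talk uses over PSTREL parameter valuations, not the actual algorithm:

```python
def agglomerative(points, k):
    """Single-linkage agglomerative clustering on 1-D feature values.
    In 1-D the closest pair of clusters is always adjacent after
    sorting, so it suffices to repeatedly merge across the smallest
    gap until only k clusters remain."""
    clusters = [[p] for p in sorted(points)]
    while len(clusters) > k:
        # gap between each pair of adjacent clusters
        gaps = [(clusters[i + 1][0] - clusters[i][-1], i)
                for i in range(len(clusters) - 1)]
        _, i = min(gaps)                       # smallest gap wins
        clusters[i:i + 2] = [clusters[i] + clusters[i + 1]]
    return clusters

# example: two well-separated groups of values
print(agglomerative([0.1, 0.2, 5.0, 5.1, 0.15], 2))
```

In the actual method, each resulting cluster is then summarized by a distinct STREL formula rather than by its raw members.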
The VoxLogicA project: an overview
Vincenzo Ciancia, ISTI-CNR Pisa
Spatial and spatio-temporal model checking have been explored mostly in the last decade, and some tools have recently emerged from the EU project QUANTICOL. This talk will focus on the follow-up at ISTI-CNR, specifically describing the spatial model checker VoxLogicA. The talk will cover its application to medical imaging case studies and its extension to polyhedral logics, with applications to 3D meshes. The talk will also discuss more recent results on the minimization of models up to logical equivalence, and the ongoing efforts in UI design, exploiting human-computer interaction methodologies, in on-GPU optimization, and in "hybridization" with some machine-learning methods.
Team Automata: an overview of recent results
Maurice Henri ter Beek, ISTI-CNR Pisa
We provide an overview of recent work on Team Automata, which describe networks of automata with input and output actions, extended with synchronization policies guiding how many interacting components can synchronize on a shared input/output action. First, we briefly revisit this notion of synchronization in other well-known concurrency models, such as Reo, BIP, Choreography Automata, and Multiparty Session Types. Then we introduce Featured Team Automata, which support variability by describing families of concrete product models for specific configurations determined by feature selection. Given a (featured) team automaton one can reason over communication-safety properties, like receptiveness (sent messages must be received) and responsiveness (pending receives must be satisfied), but doing so product-wise quickly becomes impractical. Therefore, we lift these notions to the level of family models and show how to identify such communication properties. However, the purely semantic nature of communication properties is a serious burden in practice, making it challenging to automatically verify such properties in concrete cases: one has to go through all reachable states of networks of interacting automata with large state spaces and check compliance for all requirements at each state. We present a solution by providing the first logical characterization of communication properties for team automata (and subsumed models) using test-free propositional dynamic logic and, subsequently, using this characterization to actually verify communication properties by using available model-checking tools for dynamic logic. An open-source prototypical tool supports the developed theory, using a transformation to interact with the mCRL2 toolset for model checking. 
Finally, we address realizability of team automata, i.e., how to infer a network of interacting automata from a global specification, taking into account that this realization should satisfy exactly the same properties as the global specification. We identify and compare two variants to realize such global models, both relying on bisimulation equivalence. Then we investigate, for both variants, realizability conditions to be checked on global models. We propose a synthesis method for the construction of realizations by grouping locally indistinguishable states. The theory is accompanied by an open-source prototypical tool that implements realizability checks and synthesizes realizations.
Statistical model checking meets Process mining: white-box validation of quantitative product lines and attack trees
Andrea Vandin, Sant’Anna Pisa
We propose a novel methodology integrating Statistical Model Checking (SMC) with Process Mining (PM). We apply it to the QFLan family of languages, which allows describing software product line (PL) and attack tree (AT) models. SMC is a family of analysis techniques based on the generation of samples of the dynamics of a system. SMC aims at estimating properties of a system such as the probability of a given event (e.g., installing a feature), or the expected value of quantities in it (e.g., the average price of products from the studied family). PM, instead, is a family of data-driven techniques that use logs collected during the execution of an information system to identify and reason about its underlying execution process. This often involves identifying and reasoning about process patterns, bottlenecks, and possibilities for improvement. Typically, if SMC gives unexpected results, the modeler has to discover whether these come from actual characteristics of the system or from bugs in the model. This is done in a black-box manner, based only on the obtained numerical values. We improve on this by using PM to get a white-box perspective on the dynamics of the system observed by SMC. Roughly speaking, we feed the samples generated by SMC to PM tools, obtaining a compact graphical representation of the observed dynamics. This mined PM model is then transformed into a mined QFLan model, making it accessible to PL engineers. Using two well-known PL models, we show that our methodology is effective (it helps in pinpointing issues in models and in suggesting fixes) and that it scales to complex models. We also show that it is general, by applying it to the security domain.
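The core SMC estimation step described above can be sketched in a few lines: draw independent sample runs of a stochastic model and estimate the probability of an event with a confidence interval. The simulator name and the toy model are illustrative assumptions, not the QFLan tooling:

```python
import math
import random

def smc_estimate(run_once, n=10_000, seed=1):
    """Basic statistical model checking loop: run `run_once` (a
    hypothetical model simulator returning True when the event of
    interest occurred) n times, and return the estimated event
    probability with a 95% normal-approximation half-width."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n) if run_once(rng))
    p = hits / n
    half = 1.96 * math.sqrt(p * (1 - p) / n)
    return p, half

# toy model: a feature ends up installed with probability 0.3
p, half = smc_estimate(lambda rng: rng.random() < 0.3)
```

In the white-box methodology, each of these sample runs would additionally be logged as an event trace and handed to the PM tools.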
Stability in multiserver job queueing systems
Andrea Marin, University of Venice
We consider a multiserver queue where jobs request a varying number of servers for a random service time. The requested number of servers is assigned to each job following a First-In First-Out (FIFO) order.
When the number of free servers is not sufficient to accommodate the next job in line, that job and any subsequent jobs in the queue are forced to wait. As a result, not all available servers are allocated to jobs if the next job requires more servers than are currently free. This queuing system is often called a Multiserver Job Queuing Model (MJQM). We give the stability condition for the case of two classes of jobs: the first consists of jobs that require only one server, while the second includes jobs that require all servers. Under exponential assumptions, the model can be easily described with well-known formalisms, e.g., Markovian process algebras or stochastic Petri nets.
Finally, we discuss some open problems that could be studied with mean-field approaches.
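The head-of-line blocking dynamics described above can be sketched with a small event-driven simulation of the two-class case; the parameter names and rates are illustrative assumptions, not values from the talk:

```python
import heapq
import random

def simulate_mjqm(k=4, lam1=1.0, lam2=0.1, mu1=2.0, mu2=1.0,
                  horizon=10_000, seed=42):
    """Event-driven simulation of a two-class MJQM with k servers:
    class-1 jobs need one server, class-2 jobs need all k servers.
    The FIFO queue exhibits head-of-line blocking: when the head job
    does not fit, every job behind it also waits.  Returns the
    maximum queue length observed over the horizon."""
    rng = random.Random(seed)
    t, free = 0.0, k
    queue = []            # FIFO queue of pending jobs (class ids)
    departures = []       # min-heap of (time, servers_to_release)
    next1 = rng.expovariate(lam1)
    next2 = rng.expovariate(lam2)
    max_queue = 0
    while t < horizon:
        t_dep = departures[0][0] if departures else float('inf')
        t = min(next1, next2, t_dep)          # next event time
        if t == t_dep:
            _, servers = heapq.heappop(departures)
            free += servers                   # job finishes
        elif t == next1:
            queue.append(1)
            next1 = t + rng.expovariate(lam1)
        else:
            queue.append(2)
            next2 = t + rng.expovariate(lam2)
        # FIFO admission: start jobs from the head while they fit
        while queue:
            need = 1 if queue[0] == 1 else k
            if need > free:
                break                         # head-of-line blocking
            queue.pop(0)
            free -= need
            mu = mu1 if need == 1 else mu2
            heapq.heappush(departures, (t + rng.expovariate(mu), need))
        max_queue = max(max_queue, len(queue))
    return max_queue
```

With a lightly loaded parameterization such as the defaults above, the queue length stays bounded over the horizon; pushing the arrival rates toward the stability boundary makes the observed maximum grow.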
About pros and cons of non-Markovian stochastic models in the development of quantitative analytics
Enrico Vicario, University of Florence
Quantitative evaluation of stochastic models plays a crucial role in the development and operation of a wide range of systems, applications, and processes, with notable applications in mastering the intertwined effects of concurrency and timing.
Validity and ease of generation of these models can now benefit from the growing spread of data technologies and data-driven approaches, enabling the combination of a-priori knowledge, acquired by-design or by-contract, with parameters that are better learnt from observations.
To this end, representing timings beyond the limit of memoryless Markovian behaviour largely improves the expressivity of models and their ability to retain observed or predefined characteristics. However, it also poses a hard challenge: making the same models amenable to efficient and tool-supported solution.
This talk will address recent results of our research on quantitative evaluation of non-Markovian models, and take this as an occasion for more general reflections about pros and cons of stochastic models with respect to fully data-driven approaches for the implementation of quantitative analytics.
Forward and backward constrained bisimulations for Quantum Circuits
Antonio Jiménez-Pastor and Max Tschaikowski, Department of Computer Science Aalborg University, Denmark
Efficient methods for the simulation of quantum circuits on classical computers are crucial for their analysis, due to the exponential growth of the problem size with the number of qubits. Here we study lumping methods based on bisimulation, an established class of techniques that has proven successful for (classical) stochastic and deterministic systems such as Markov chains and ordinary differential equations. Forward constrained bisimulation yields a lower-dimensional model which precisely preserves quantum measurements projected on a linear subspace of interest. Backward constrained bisimulation gives a reduction that is valid on a subspace containing the circuit input, from which the circuit result can be fully recovered. We provide an algorithm to compute the constrained bisimulations yielding the coarsest reductions in both cases, using a duality result relating the two notions. As applications, we provide theoretical bounds on the size of the reduced state space for well-known quantum algorithms for search, optimization, and factorization. Using a prototype implementation, we report significant reductions on a set of benchmarks. Furthermore, we show that constrained bisimulation complements state-of-the-art methods for the simulation of quantum circuits based on decision diagrams.
Non-deterministic observers in quantum bisimilarity
Lorenzo Ceragioli, University of Pisa
Past years have seen the development of a few proposals for quantum extensions of process calculi. The rationale is clear: with the development of quantum communication protocols, there is a need to abstract and focus on the basic features of quantum concurrent systems, like CCS and CSP have done for their classical counterparts. So far, though, no accepted standard has emerged, and the various proposals do not agree on what should be the observational properties of quantum values. As a matter of fact, the soundness of such properties has never been validated against the prescriptions of quantum theory. To this aim, we investigate the features of behavioural equivalences based on barbs and contexts, examining how bisimilarities relate to quantum theory. We show that the observational power of general contexts is incompatible with quantum theory: roughly, they can perform non-deterministic moves depending on quantum values without measuring (hence perturbing) them. Therefore, we refine the operational semantics in order to prevent contexts from performing unfeasible non-deterministic choices. This induces a coarser bisimilarity that better fits the quantum setting: (i) it lifts the indistinguishability of quantum states to the distributions of processes and, despite the additional constraints, (ii) it preserves the expressivity of non-deterministic choices based on classical information.
Conditioning Score-Based Generative Models by Neuro-Symbolic Constraints
Luca Bortolussi, University of Trieste
Score-based and diffusion models have emerged as effective approaches for both conditional and unconditional generation. Still, conditional generation is based either on the specific training of a conditional model or on classifier guidance, which requires training a noise-dependent classifier even when a classifier for uncorrupted data is given. We propose an approach to sample from unconditional score-based generative models while enforcing arbitrary logical constraints, without any additional training. First, we show how to manipulate the learned score in order to sample from an unnormalized distribution conditional on a user-defined constraint. Then, we define a flexible and numerically stable neuro-symbolic framework for encoding soft logical constraints. Combining these two ingredients, we obtain a general, but approximate, conditional sampling algorithm. We further develop effective heuristics aimed at improving the approximation. Finally, we show the effectiveness of our approach for various types of constraints and data: tabular data, images, and time series.
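The score-manipulation idea can be illustrated in one dimension: by Bayes' rule, the score of the constrained distribution is the unconditional score plus the gradient of the log constraint-satisfaction probability, and one can then sample with Langevin dynamics. Everything below (the sampler, the sigmoid encoding, the sharpness parameter `a`) is an illustrative assumption, not the paper's algorithm or API:

```python
import math
import random

def langevin_constrained(score, log_constraint_grad, n_steps=2000,
                         step=1e-2, seed=0):
    """Unadjusted Langevin sampling from a score tilted by a soft
    constraint: the effective score is score(x) + d/dx log
    p(constraint | x).  Returns the final state of the chain."""
    rng = random.Random(seed)
    x = 0.0
    for _ in range(n_steps):
        g = score(x) + log_constraint_grad(x)
        x += step * g + math.sqrt(2 * step) * rng.gauss(0.0, 1.0)
    return x

# toy base model N(0,1), whose score is -x
score = lambda x: -x
# soft constraint "x > 1" encoded as sigmoid(a*(x-1)); its log-gradient
# is a * sigmoid(-a*(x-1))
a = 5.0
grad_c = lambda x: a / (1.0 + math.exp(a * (x - 1.0)))
```

Running the chain with `score` and `grad_c` draws approximate samples concentrated where the base density and the soft constraint overlap, i.e., shifted toward x > 1.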
Inference of Probabilistic Programs with Moment-Matching Gaussian Mixtures
Francesca Randone, IMT School for Advanced Studies Lucca
Computing the posterior distribution of a probabilistic program is a hard task for which no one-size-fits-all solution exists. We propose Gaussian Semantics, which approximates the exact semantics of a bounded program by means of Gaussian mixtures. It is parametrized by a map that associates each program location with the moment order to be matched in the approximation. We provide two main contributions. The first is a universal approximation theorem stating that, under mild conditions, Gaussian Semantics can approximate the exact semantics arbitrarily closely. The second is an approximation that analytically matches moments up to second order, in the face of the generally difficult problem of matching moments of Gaussian mixtures at arbitrary order. We test our second-order Gaussian approximation (SOGA) on a number of case studies from the literature.
We show that it can provide accurate estimates in models not supported by other approximation methods or when exact symbolic techniques fail because of complex expressions or non-simplified integrals. On two notable classes of problems, namely collaborative filtering and programs involving mixtures of continuous and discrete distributions, we show that SOGA significantly outperforms alternative techniques in terms of accuracy and computational time.
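The basic second-order moment-matching step can be sketched in one dimension: a Gaussian mixture is collapsed into the single Gaussian with the same mean and variance. This is a minimal sketch of the underlying idea, not the SOGA implementation, which applies such matching per program location:

```python
def moment_match(weights, means, variances):
    """Collapse a 1-D Gaussian mixture sum_i w_i * N(mu_i, var_i)
    into the single Gaussian matching its first two moments."""
    # first moment: E[X] = sum_i w_i * mu_i
    m = sum(w * mu for w, mu in zip(weights, means))
    # second moment: E[X^2] = sum_i w_i * (var_i + mu_i^2)
    second = sum(w * (v + mu * mu)
                 for w, mu, v in zip(weights, means, variances))
    return m, second - m * m   # (mean, variance)

# example: the symmetric mixture 0.5*N(-1,1) + 0.5*N(1,1)
# has mean 0 and variance 0.5*(1+1) + 0.5*(1+1) - 0 = 2
m, v = moment_match([0.5, 0.5], [-1.0, 1.0], [1.0, 1.0])
```

Note how the collapsed variance exceeds each component's variance: the spread between the component means is absorbed into the matched Gaussian.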
Join at: imt.lu/guinigi