
Infinitely many states and no memory

Markov Decision Processes (MDPs) are Markov chains plus nondeterminism: some states are random, the others are controlled (nondeterministic). In the pictures, the random states are round, and the controlled states are squares:
The random states come with a distribution over successor states, but in the controlled states a controller chooses a successor state (or a probability distribution over the successor states). For instance, the controller could stay in the leftmost column forever, by always choosing to go one state down. Or the controller could go right at some point; in the random state a successor is then picked randomly, either the initial state or the state on the right, according to the blue probabilities.
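To make this concrete, here is a minimal sketch of how such an MDP could be encoded and simulated. The state names, successor sets, and probabilities below are hypothetical, not the ones from the picture:

```python
import random

# Hypothetical MDP encoding (not the MDP from the picture):
# controlled states map to the set of successors the controller may pick;
# random states map to a distribution over successors.
controlled = {
    "s0": {"s1", "r0"},            # controller picks "down" or "right"
}
random_states = {
    "r0": {"s0": 0.5, "s2": 0.5},  # blue probabilities, assumed 1/2 each
}

def step(state, strategy):
    """One step of the MDP: the strategy resolves controlled states,
    chance resolves random states."""
    if state in controlled:
        return strategy(state)  # must return an element of controlled[state]
    successors, weights = zip(*random_states[state].items())
    return random.choices(successors, weights=weights)[0]
```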

What objective should the controller aim at? In this post, the objective will be the following: visit green states infinitely often, and red states only finitely often. Here is the previous MDP with colours:
Since the controller wants to visit (infinitely many) green states…
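In symbols (notation assumed here for concreteness, not from the post): a run $\rho = \rho_0 \rho_1 \rho_2 \cdots$ satisfies the objective if \[ \rho_i \in \mathit{Green}\ \text{for infinitely many}\ i \qquad\text{and}\qquad \rho_i \in \mathit{Red}\ \text{for only finitely many}\ i\,. \] For finitely many states this is a Rabin condition with a single pair.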
Recent posts

Model Checking Markov Chains Against Unambiguous Automata: The Quantitative Case

In my first post I promised to give a polynomial-time procedure to compute the probability that an infinite word (chosen uniformly at random) is accepted by a given unambiguous Büchi automaton. For example, let's take this Büchi automaton again:
Let's call the probabilities $x_1, x_2, x_3$, depending on the start state: \[ \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} := \begin{pmatrix} \text{Pr(state $1$ accepts a random word)}\\ \text{Pr(state $2$ accepts a random word)}\\ \text{Pr(state $3$ accepts a random word)} \end{pmatrix} \] I showed in the first post that $x_1, x_2, x_3$ are nonzero in this example. In order to compute the $x_i$ we set up a linear system of equations and solve it. We have \[ x_1 \ = \ \frac12 \cdot x_{1,a} + \frac12 \cdot x_{1,b}\,, \] where $x_{1,a}$ and $x_{1,b}$ denote the (conditional) probabilities that state $1$ accepts a random word, under the condition that the word starts with $a$ and $b$, respectively.
$x_{1,b} = 0$ because state $1$ has…
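To see what solving such a system looks like, here is a small sketch with numpy. Since the figure is not reproduced here, the transitions are assumed: state 1 is taken to have no $b$-successor (one way to get $x_{1,b} = 0$) and an $a$-transition to state 2; state 2 loops on $a$ and moves to state 3 on $b$; and state 3 is assumed to accept every word, so $x_3 = 1$. The real automaton gives a different system (and, in general, pinning down the solution takes more care), but the shape is the same:

```python
import numpy as np

# Hypothetical transitions (the figure's automaton is not reproduced here):
#   state 1: a -> 2, no b-transition        =>  x_1 = 1/2 x_2 + 1/2 * 0
#   state 2: a -> 2, b -> 3                 =>  x_2 = 1/2 x_2 + 1/2 x_3
#   state 3: assumed to accept every word   =>  x_3 = 1
# Rearranged into the form A @ x = b:
A = np.array([
    [1.0, -0.5,  0.0],  #  x_1 - 1/2 x_2           = 0
    [0.0,  0.5, -0.5],  #        1/2 x_2 - 1/2 x_3 = 0
    [0.0,  0.0,  1.0],  #                      x_3 = 1
])
b = np.array([0.0, 0.0, 1.0])

x = np.linalg.solve(A, b)
print(x)  # [0.5 1.  1. ] -- acceptance probabilities from states 1, 2, 3
```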

Unambiguous Automata: From PSPACE to P

Computer science is about leveraging mathematical structure, for instance, in order to accelerate algorithms. More often than we'd like, we can't accelerate much: checking two NFAs for language inclusion or equivalence, for example, is PSPACE-complete. Even the problem of whether an NFA accepts all words is PSPACE-complete. For DFAs, those problems are easy. This means that for NFAs we can't do much better than determinizing (using the subset construction and incurring an exponential blowup) and solving the resulting DFA problem.
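For contrast, here is what "determinize and check" looks like for NFA universality: a sketch of the subset construction, explored on the fly. The encoding of the NFA is mine, for illustration; in the worst case the search visits exponentially many subsets, which is exactly the blowup the PSPACE-hardness predicts.

```python
def nfa_universal(alphabet, delta, initial, accepting):
    """Check whether an NFA accepts every finite word, by exploring
    the subset construction on the fly.  delta maps (state, letter)
    to a set of successor states."""
    start = frozenset(initial)
    seen = {start}
    worklist = [start]
    while worklist:
        subset = worklist.pop()
        # A reachable subset without accepting states witnesses
        # a rejected word, so the NFA is not universal.
        if not subset & accepting:
            return False
        for letter in alphabet:
            succ = frozenset(r for q in subset for r in delta.get((q, letter), ()))
            if succ not in seen:
                seen.add(succ)
                worklist.append(succ)
    return True

# Example: over {a}, an NFA accepting exactly the nonempty words.
delta = {(0, "a"): {1}, (1, "a"): {1}}
print(nfa_universal({"a"}, delta, initial={0}, accepting={1}))  # False: the empty word is rejected
```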

In this post I'll discuss an automata problem that seems to call for determinization and is, in fact, PSPACE-complete for NFAs. But there is an efficient algorithm for unambiguous automata.

Here is an example of an unambiguous automaton:
The notion of accepting states is not used in this post. By unambiguousness I just mean that the automaton has no diamonds, that is, for any states $q, r$, any word $w$ labels at most one path from $q$ to $r$. (…
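Diamond-freeness can be checked directly with the standard self-product idea: a diamond exists exactly when the product automaton has a path from a diagonal pair $(q,q)$ through an off-diagonal pair to a diagonal pair $(r,r)$. A sketch, using the same hypothetical NFA encoding as above:

```python
from collections import deque

def product_reachable(pairs, alphabet, delta):
    """All pairs of states reachable from `pairs` by reading the same
    word in both components (BFS over the self-product)."""
    seen = set(pairs)
    queue = deque(pairs)
    while queue:
        p1, p2 = queue.popleft()
        for letter in alphabet:
            for q1 in delta.get((p1, letter), ()):
                for q2 in delta.get((p2, letter), ()):
                    if (q1, q2) not in seen:
                        seen.add((q1, q2))
                        queue.append((q1, q2))
    return seen

def has_diamond(states, alphabet, delta):
    """A diamond = two distinct equally-labelled paths q -> r.
    Equivalently: the self-product can go diagonal -> off-diagonal -> diagonal."""
    diagonal = {(q, q) for q in states}
    off_diagonal = {(p1, p2)
                    for (p1, p2) in product_reachable(diagonal, alphabet, delta)
                    if p1 != p2}
    return any(p1 == p2 for (p1, p2) in product_reachable(off_diagonal, alphabet, delta))
```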

Model Checking Markov Chains Against Unambiguous Automata: The Qualitative Case

This is my first blog post. The purpose of this blog is to describe some neat ideas from my research. Ideally only one idea per post, which explains the title of this blog.

Today I start with a classical topic from probabilistic verification: model checking Markov chains against $\omega$-regular specifications. I will focus on specifications given by Büchi automata, which are automata like this one:

This Büchi automaton accepts all infinite words over $\{a,b\}$ that start with $a$. The automaton is nondeterministic, but it is also unambiguous, which means that every infinite word has at most one accepting run.

A Markov chain, on the other hand, generates infinite words. Here is an example:

For instance, with probability $\frac12 \cdot \frac12 \cdot 1 = \frac14$ it generates an infinite word that starts with $a b a$. For simplicity, we consider only a very simple Markov chain in this post:

This Markov chain generates an infinite word over $\{a,b\}$ uniformly at random…
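As a sanity check of such computations, here is a small sketch that computes the probability of a finite prefix in a state-labelled Markov chain. The chain below is hypothetical (the figures are not reproduced here), but it is chosen so that the prefix $aba$ comes out at $\frac12 \cdot \frac12 \cdot 1 = \frac14$, matching the computation above:

```python
from fractions import Fraction

# Hypothetical state-labelled Markov chain (not the one from the figures):
# each state emits its label, then moves to a successor with the given
# probability.  Chosen so that Pr(prefix "aba") = 1/2 * 1/2 * 1 = 1/4.
label = {"s0": "a", "s1": "b", "s2": "b"}
trans = {
    "s0": {"s1": Fraction(1, 2), "s2": Fraction(1, 2)},
    "s1": {"s0": Fraction(1, 2), "s2": Fraction(1, 2)},
    "s2": {"s2": Fraction(1)},
}

def prefix_probability(init, word):
    """Probability that the chain, started in `init`, emits `word`
    as its first len(word) letters."""
    # dist maps states to the probability of being there, jointly with
    # having emitted the prefix read so far.
    dist = {init: Fraction(1)}
    for letter in word:
        new = {}
        for state, p in dist.items():
            if label[state] != letter:
                continue  # this state emits the wrong letter
            for succ, q in trans[state].items():
                new[succ] = new.get(succ, Fraction(0)) + p * q
        dist = new
    return sum(dist.values(), Fraction(0))

print(prefix_probability("s0", "aba"))  # 1/4
```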