Model Checking Markov Chains Against Unambiguous Automata: The Quantitative Case

In my first post I promised to give a polynomial-time procedure to compute the probability that an infinite word (chosen uniformly at random) is accepted by a given unambiguous Büchi automaton. For example, let's take this Büchi automaton again:
Let's call the probability $x_1, x_2, x_3$, depending on the start state: \[ \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} := \begin{pmatrix} \text{Pr(state $1$ accepts a random word)}\\ \text{Pr(state $2$ accepts a random word)}\\ \text{Pr(state $3$ accepts a random word)} \end{pmatrix} \] I showed in the first post that $x_1, x_2, x_3$ are nonzero in this example. In order to compute the $x_i$ we set up a linear system of equations and solve it. We have \[ x_1 \ = \ \frac12 \cdot x_{1,a} + \frac12 \cdot x_{1,b}\,, \] where $x_{1,a}$ and $x_{1,b}$ denote the (conditional) probabilities that state $1$ accepts a random word, under the condition that the word starts with $a$ and $b$, respectively.
  • $x_{1,b} = 0$ because state $1$ has no $b$-labelled outgoing transitions;
  • $x_{1,a} = x_1 + x_3$ because state $1$ has $a$-labelled transitions to states $1$ and $3$. You might be afraid that we are double-counting events here, but we are not: by unambiguousness, it is impossible that the same word, say $w$, is accepted by both states $1$ and $3$, because then state $1$ would accept the word $a w$ along two different paths.
In this way we obtain a linear system of equations for $x_1, x_2, x_3$: \[ \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} \ = \ T \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}\,, \] where $T = \frac12 T(a) + \frac12 T(b)$ is the average of the transition matrices for the letters $a$ and $b$. We have three equations for three unknowns, but unfortunately the equations are dependent and do not have a unique solution. For instance, $x_1 = x_2 = x_3 = 0$ is a trivial solution.
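The degeneracy can be checked concretely. Here is a small Python sketch (using exact rational arithmetic; the entries of $T$ are the ones spelled out later in this post) confirming that $\det(I - T) = 0$, so the system $x = Tx$ by itself cannot pin down $x$:

```python
from fractions import Fraction as F

# Average transition matrix T = (T(a) + T(b)) / 2 of the example automaton.
T = [[F(1, 2), F(0),    F(1, 2)],
     [F(1, 2), F(1, 2), F(1, 2)],
     [F(0),    F(1, 2), F(0)]]

# Build A = I - T and compute its determinant by the 3x3 cofactor formula.
A = [[(F(1) if i == j else F(0)) - T[i][j] for j in range(3)] for i in range(3)]
det = (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
       - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
       + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))

# det(I - T) = 0: the equations x = Tx are linearly dependent.
assert det == 0
```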

The equation system above says that the probabilities $x_1, x_2, x_3$ are the entries of an eigenvector of $T$, corresponding to eigenvalue $1$. It follows from the Perron-Frobenius theorem for irreducible matrices that all such eigenvectors are scalar multiples of each other. So we just need to add one independent equation to the system in order to make the solution unique. What should that equation be?

I claim that this additional equation can be $x_1 + x_3 = 1$ or $x_2 = 1$ (either works). The relevant feature of $\{1,3\}$ and $\{2\}$ is that they can be reached from start state $1$ but cannot reach the empty set $\emptyset$ in the determinization of our automaton:
Here is why the equalities $x_2 = 1$ and $x_1 + x_3 = 1$ hold:
  • Clearly, we have $x_2 \le 1$. We also have $x_1 + x_3 \le 1$ because otherwise some word, say $w$, would be accepted by both states $1$ and $3$, and so state $1$ would accept the word $a w$ along two different paths.
  • These inequalities cannot be strict. Let's argue for $x_1 + x_3 \ge 1$. Recall that no word leads from $\{1,3\}$ to the empty set $\emptyset$. So for any finite word $v$, the probability that state $1$ or state $3$ accepts $v w$ (where $w$ is a random infinite word) is at least $\min\{x_1, x_2, x_3\}$, which is bounded away from $0$. Applying a general probability principle (Lévy's zero–one law) we can conclude that the probability that state $1$ or state $3$ accepts a random infinite word is $1$. Hence $x_1 + x_3 \ge 1$.
We conclude $x_1 + x_3 = 1$ and $x_2 = 1$. Combining one of these equations with the previous system we obtain: \[ \begin{aligned} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} &\ = \ \underbrace{ \begin{pmatrix} \frac12 & 0 & \frac12 \\ \frac12 & \frac12 & \frac12 \\ 0 & \frac12 & 0 \end{pmatrix} }_T \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} \\ x_1 + x_3 &\ = \ 1\,, \end{aligned} \] which has a unique solution: \[ \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 1/2 \\ 1 \\ 1/2 \end{pmatrix} \]
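As a quick sanity check, the claimed solution can be verified mechanically: it must be a fixed point of $T$ and satisfy the extra equation. A minimal Python sketch with exact rationals (entries read off the system above):

```python
from fractions import Fraction as F

# T and the claimed solution from the system above.
T = [[F(1, 2), F(0),    F(1, 2)],
     [F(1, 2), F(1, 2), F(1, 2)],
     [F(0),    F(1, 2), F(0)]]
x = [F(1, 2), F(1), F(1, 2)]

# x is a fixed point of T ...
Tx = [sum(T[i][j] * x[j] for j in range(3)) for i in range(3)]
assert Tx == x
# ... and satisfies the extra equation x_1 + x_3 = 1.
assert x[0] + x[2] == F(1)
```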
In general, assume the states are $1, \ldots, n$. Without much loss of generality we can assume (as before) that every state is accepting and is reachable from every other state. Here is how to compute the probabilities $x_1, \ldots, x_n$:
  1. Determine in polynomial time whether all $x_i$ are zero, as shown in the first post. If that is not the case, then all $x_i$ are nonzero, because the states can reach each other.
  2. Compute the average transition matrix $T$ and set up the linear system \[ \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} \ = \ T \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}\,. \]
  3. Compute a set of states $\alpha \subseteq \{1, \ldots, n\}$ that can be reached from a singleton $\{i\}$ but cannot reach the empty set $\emptyset$ in the determinization. I explained in the previous post how this can be done in polynomial time, without computing the determinization. Add to the linear system the extra equation \[ x_{i_1} + \cdots + x_{i_k} = 1\,, \quad \text{where }\{i_1, \ldots, i_k\} = \alpha\,. \]
  4. Solve the linear system.
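Steps 2–4 can be sketched in a few lines of Python. This is an illustrative implementation, not the authors' code: the function name `acceptance_probabilities` is mine, it assumes step 1 has already ruled out the all-zero case and that `alpha` is the set computed in step 3, and it solves the combined system exactly over the rationals by Gauss–Jordan elimination:

```python
from fractions import Fraction

def acceptance_probabilities(T, alpha):
    """Sketch of steps 2-4 for an n-state unambiguous automaton.

    T     : n x n average transition matrix, entries as Fractions
            (step 1 is assumed to have ruled out the all-zero case).
    alpha : set of (0-indexed) states from step 3, i.e. reachable from
            a singleton but unable to reach the empty set.
    Solves (I - T) x = 0 together with sum_{i in alpha} x_i = 1.
    """
    n = len(T)
    # n equations (I - T) x = 0, plus the extra alpha equation.
    rows = [[(Fraction(1) if i == j else Fraction(0)) - T[i][j]
             for j in range(n)] for i in range(n)]
    rows.append([Fraction(1) if j in alpha else Fraction(0) for j in range(n)])
    rhs = [Fraction(0)] * n + [Fraction(1)]
    # Gauss-Jordan elimination; the (n+1) x n system is consistent and
    # has a unique solution, so every column receives a pivot.
    for col in range(n):
        piv = next(r for r in range(col, n + 1) if rows[r][col] != 0)
        rows[col], rows[piv] = rows[piv], rows[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        inv = 1 / rows[col][col]
        rows[col] = [e * inv for e in rows[col]]
        rhs[col] *= inv
        for r in range(n + 1):
            if r != col and rows[r][col] != 0:
                f = rows[r][col]
                rows[r] = [a - f * b for a, b in zip(rows[r], rows[col])]
                rhs[r] -= f * rhs[col]
    return rhs[:n]
```

On the running example (states 0-indexed, so $\alpha = \{1,3\}$ becomes `{0, 2}`) this yields $(1/2, 1, 1/2)$; the choice $\alpha = \{2\}$, i.e. `{1}`, gives the same answer, as it must.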
The whole procedure can be generalized to the case where the random word is produced by a Markov chain:
Given a Markov chain and an unambiguous Büchi automaton, one can compute in polynomial time the probability that the automaton accepts a word randomly generated by the Markov chain.
Using an unpublished result (modifying step 3 above), one can sharpen the theorem by replacing "polynomial time" with the complexity class "NC", which captures problems that can be solved efficiently in parallel. This leads to an automata-theoretic PSPACE procedure for model checking Markov chains against LTL specifications, which is optimal.

The material from this post is from a CAV'16 paper by Christel Baier, Joachim Klein, Sascha Klüppelholz, David Müller, James Worrell, and myself.
