Fig 1 - uploaded by Bernhard C. Geiger
The transition graph of an irreducible, aperiodic Markov chain with alphabet X = {1, 2, 3, 4}. The partition indicated by the red boxes induces a lumping function g with g(1) = g(2) = 1′ and g(3) = g(4) = 2′. While g is not invertible, side information about the previous state allows one to determine the current state from the lumped state alone: if the previous state is 1 and the current lumped state is 2′ (box on the left), only state 3 is realizable.
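The reconstruction described in the caption can be sketched in code. The precise transition graph of Fig. 1 is not reproduced here, so the successor sets below are a hypothetical assignment, chosen only to be consistent with the caption (from state 1, only state 3 lies in the box {3, 4}); under it, the previous state always singles out a unique current state inside each lumped box.

```python
# Hypothetical transition structure consistent with the figure caption:
# each state has two successors, one in each red box.
successors = {1: {2, 3}, 2: {1, 4}, 3: {2, 4}, 4: {1, 3}}

# Lumping function g: states {1, 2} -> 1', states {3, 4} -> 2'.
def g(x):
    return "1'" if x in (1, 2) else "2'"

def reconstruct(prev_state, lumped):
    """Recover the current state from the lumped symbol, using the
    previous state as side information (single entry property)."""
    candidates = [x for x in successors[prev_state] if g(x) == lumped]
    assert len(candidates) == 1, "single entry property violated"
    return candidates[0]

# From previous state 1, lumped symbol 2' leaves only state 3 realizable.
print(reconstruct(1, "2'"))  # -> 3
```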

Contexts in source publication

Context 1
... for lossless lumpings, i.e., lumpings for which the original Markov chain and the lumped process have equal entropy rates. Specifically, the single entry property we define in [3, Def. 3] holds if, given the previous state of the Markov chain, only a single state in the preimage of the current lumped state is realizable, i.e., has positive probability (see Fig. ...
Context 2
... Section III, we use these graph-theoretic approaches to find lossless lumpings for a given Markov chain. While the current state of the Markov chain cannot be inferred from its lumped image alone, we require that it can be reconstructed by using the previous state of the Markov chain as side information (cf. Fig. 1). The lumpings fulfilling this requirement correspond to the possible clique partitions of a graph derived from the Markov chain. The method is universal in the sense that it only depends on the presence, but not the precise magnitude, of state transitions of the Markov chain. In Section IV, we relax the problem and reduce the output ...
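The correspondence to clique partitions can be illustrated with a brute-force search over set partitions; this is only a sketch for tiny graphs, not the construction used in the paper. The edge set below is the characteristic graph of Example 1, {{1, 2}, {3, 4}}.

```python
from itertools import combinations

def set_partitions(items):
    """Recursively enumerate all partitions of a list into blocks."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for partition in set_partitions(rest):
        # Put `first` into an existing block ...
        for i, block in enumerate(partition):
            yield partition[:i] + [block + [first]] + partition[i + 1:]
        # ... or open a new singleton block.
        yield partition + [[first]]

def is_clique(block, edges):
    return all(frozenset(pair) in edges for pair in combinations(block, 2))

# Characteristic graph of the chain in Fig. 1 (see Example 1).
edges = {frozenset({1, 2}), frozenset({3, 4})}
clique_partitions = [p for p in set_partitions([1, 2, 3, 4])
                     if all(is_clique(b, edges) for b in p)]
# Each clique partition induces a lossless lumping; the coarsest one,
# {{1, 2}, {3, 4}}, matches the red boxes in Fig. 1.
```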
Context 3
... characteristic graph of a Markov chain connects two states if every state can access at most one of them. Since the Markov chains considered in this work are irreducible, the invariant distribution vector is positive and Definition 3 coincides with Definition 2 for a source X_n with side information X_{n−1}. Example 1. Consider the Markov chain in Fig. 1. Its characteristic graph has edge set E_X = {{1, 2}, {3, 4}}. Both edges are cliques, and together they partition X ...
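The edge set of Example 1 can be computed directly from the accessibility structure. The successor sets below are again a hypothetical assignment (the exact transition graph of Fig. 1 is not reproduced here), chosen to be consistent with Example 1.

```python
from itertools import combinations

# Hypothetical successor sets for the chain of Fig. 1.
successors = {1: {2, 3}, 2: {1, 4}, 3: {2, 4}, 4: {1, 3}}

def characteristic_edges(successors):
    """Connect u and v iff every state accesses at most one of them,
    i.e., no state has both u and v among its successors."""
    states = sorted(successors)
    return {frozenset({u, v})
            for u, v in combinations(states, 2)
            if not any({u, v} <= succ for succ in successors.values())}

# For this chain: E_X = {{1, 2}, {3, 4}} -- both edges are cliques,
# and together they partition the alphabet X.
print(characteristic_edges(successors))
```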
Context 4
... 2. Consider the Markov chain in Fig. 1 and assume that all transitions have probability 0.5. By symmetry, it follows that H(X_n) = log N = log 4 and H̄(X) = log M = log 2. The output alphabet size is optimal in terms of Proposition 2: H(Y_n) = H̄(Y) = log M = log d_max = log ...
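The entropy figures in this example can be checked numerically. The transition matrix below is a hypothetical assignment consistent with the example (every state has two equiprobable successors); it is doubly stochastic, so the stationary distribution is uniform and H(X_n) = log N, while the entropy rate is the π-weighted average of the row entropies.

```python
import math

# Hypothetical transition structure: two successors per state,
# each with probability 0.5, consistent with Fig. 1 / Example 2.
successors = {1: {2, 3}, 2: {1, 4}, 3: {2, 4}, 4: {1, 3}}
N = 4
P = [[0.5 if j + 1 in successors[i + 1] else 0.0 for j in range(N)]
     for i in range(N)]

# Doubly stochastic matrix -> uniform stationary distribution.
pi = [1.0 / N] * N
H_marginal = -sum(p * math.log2(p) for p in pi)  # log 4 = 2 bits

def row_entropy(row):
    return -sum(p * math.log2(p) for p in row if p > 0)

# Entropy rate: average row entropy weighted by pi.
H_rate = sum(pi[i] * row_entropy(P[i]) for i in range(N))  # log 2 = 1 bit

print(H_marginal, H_rate)  # -> 2.0 1.0
```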

Citations

Conference Paper
We use results from zero-error information theory to determine the set of non-injective functions through which a Markov chain can be projected without losing information. These lumping functions can be found by clique partitioning of a graph related to the Markov chain. Lossless lumping is made possible by exploiting the (sufficiently sparse) temporal structure of the Markov chain. Eliminating edges in the transition graph of the Markov chain trades off the required output alphabet size against information loss, for which we present bounds.