The two-subset recurrence property of Markov chains


This paper proposes a new type of recurrence in which we divide a Markov chain into intervals: each interval starts when the chain enters a subset A, contains a visit to another subset B far away from A, and ends when the chain next returns to A. The lengths of these intervals have the same distribution and, when A and B are far apart, are almost independent of each other. A and B may be any subsets of the state space that are far apart and between which the chain moves repeatedly in a long run. The expected interval length enters a function that describes the mixing properties of the chain and improves our understanding of Markov chains.
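As a minimal illustration of such A-to-B-and-back intervals (assuming, purely for the example, a reflecting random walk on {0, ..., 20} with A = {0} and B = {20}; this chain is not taken from the paper), one can record the interval lengths directly:

```python
import random

def tour_lengths(n_steps, a=0, b=20, seed=1):
    """Reflecting random walk on {0, ..., b}.  Record the lengths of the
    intervals that start when the chain enters A = {a}, contain a visit
    to B = {b}, and end when the chain next returns to A."""
    random.seed(seed)
    x = a                    # start inside A
    start = 0                # step at which the current interval began
    visited_b = False
    lengths = []
    for t in range(1, n_steps + 1):
        if x == 0:           # reflect at the lower boundary
            x = 1
        elif x == b:         # reflect at the upper boundary
            x = b - 1
        else:
            x += random.choice((-1, 1))
        if x == b:
            visited_b = True
        if x == a and visited_b:     # one A -> B -> A interval completed
            lengths.append(t - start)
            start = t
            visited_b = False
    return lengths
```

Each interval must cover the distance from A to B and back, so every recorded length is at least 2·20 = 40 steps, and the chain completes many such intervals in a long run.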
The paper proves a theorem that bounds the variance of the estimate of π(A), the probability of A under the limiting distribution of the Markov chain. This bound may be used to determine the chain length needed to explore the state space sufficiently. It is shown that the lengths of the periods between successive entries of the chain into A have a heavy-tailed distribution, which increases the upper bound on the variance of the estimate of π(A).
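The estimator in question is the fraction of time the chain spends in A. A hedged sketch, again using an assumed reflecting random walk on {0, ..., 20} (not an example from the paper) for which the stationary distribution is known in closed form:

```python
import random

def estimate_pi_A(n_steps, a_set=frozenset({5, 6, 7}), m=20, seed=2):
    """Estimate pi(A) by the fraction of time a reflecting random walk
    on {0, ..., m} spends in A.  For this walk each interior state has
    stationary probability 1/m, so for the interior set A = {5, 6, 7}
    the true value is pi(A) = 3/m = 0.15."""
    random.seed(seed)
    x = 0
    hits = 0
    for _ in range(n_steps):
        if x == 0:           # reflect at the boundaries
            x = 1
        elif x == m:
            x = m - 1
        else:
            x += random.choice((-1, 1))
        hits += x in a_set   # count time spent in A
    return hits / n_steps
```

The variance of this frequency estimator is driven by how long the chain takes to return to A between visits; heavy-tailed return times of the kind described above are exactly what inflate the variance bound.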
The paper gives a general guideline for finding the optimal scaling of parameters in the Metropolis-Hastings simulation algorithm, which implicitly determines the acceptance rate. We find examples where it is optimal to have a much smaller acceptance rate than is generally recommended in the literature, and also examples where the optimal acceptance rate vanishes in the limit.
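The link between the proposal scale and the acceptance rate can be sketched with random-walk Metropolis on a one-dimensional standard normal target (an assumed toy target, not one of the paper's examples):

```python
import math
import random

def mh_acceptance_rate(scale, n_steps=50_000, seed=3):
    """Random-walk Metropolis for a standard normal target density
    proportional to exp(-x^2 / 2).  Returns the empirical acceptance
    rate for a given proposal standard deviation `scale`; the scale
    implicitly determines the acceptance rate."""
    random.seed(seed)
    x = 0.0
    accepted = 0
    for _ in range(n_steps):
        y = x + random.gauss(0.0, scale)          # propose a move
        # accept with probability min(1, exp((x^2 - y^2) / 2))
        if math.log(random.random()) < (x * x - y * y) / 2.0:
            x = y
            accepted += 1
    return accepted / n_steps
```

Sweeping `scale` shows the trade-off: a small scale gives an acceptance rate near one but slow exploration, while a large scale drives the acceptance rate toward zero. The paper's point is that, contrary to common rules of thumb, the optimal point on this trade-off can lie at a very low acceptance rate.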