We consider adaptive importance sampling for a Markov chain with scoring. It is shown that, under general conditions, convergence to the zero-variance importance sampling chain for the mean total score occurs exponentially fast. These results extend previous work of Kollman (1993) and Kollman et al. (1999) for finite state spaces.