JUMP MARKOV CHAIN MONTE CARLO ALGORITHMS FOR BAYESIAN INFERENCE IN HIDDEN MARKOV MODELS

Tobias Rydén, Matematisk Statistik i Lund

A hidden Markov model (HMM) is a bivariate stochastic process {(X_k, Y_k)} such that (i) {X_k} is a finite-state Markov chain and (ii) given {X_k}, the process {Y_k} is a sequence of conditionally independent random variables, with the conditional distribution of Y_n depending on X_n only. The chain {X_k} is generally not observable, hence the word `hidden', so that inference has to be based on {Y_k} alone. During the last decade, HMMs have become widespread tools for modelling sequences of weakly dependent random variables, with applications in areas such as speech processing, communication networks, biochemistry, biology, medicine, econometrics and environmetrics. Sometimes the hidden Markov chain {X_k} does indeed exist, so that the physical nature of the problem suggests the use of an HMM; in other cases HMMs simply provide a good fit to the data.

One of the most difficult problems in HMM inference is to estimate the number of states, d say, of {X_k}. Classical approaches to this problem include likelihood ratio tests and penalized likelihoods (AIC/BIC). In this talk we present a Bayesian approach: by placing a prior on the unknown d, we obtain a posterior distribution for d and the other parameters of the model. This distribution is analytically intractable but can be explored using jump Markov chain Monte Carlo algorithms.
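The bivariate structure described above — a finite-state hidden chain {X_k} driving conditionally independent observations {Y_k} — can be sketched as a short simulation. This is an illustrative sketch only, assuming Gaussian emissions and a uniform initial state distribution (neither is specified in the abstract):

```python
import random

def simulate_hmm(trans, means, sds, n, rng):
    """Simulate an HMM: a finite-state Markov chain {X_k} and, given the
    chain, conditionally independent Gaussian observations {Y_k} whose
    distribution depends on the current state only (illustrative choice)."""
    d = len(trans)
    x = rng.randrange(d)  # uniform initial state (assumption)
    xs, ys = [], []
    for _ in range(n):
        xs.append(x)
        ys.append(rng.gauss(means[x], sds[x]))  # Y_k depends on X_k only
        # draw the next hidden state from row x of the transition matrix
        u, acc, nxt = rng.random(), 0.0, d - 1
        for j in range(d):
            acc += trans[x][j]
            if u < acc:
                nxt = j
                break
        x = nxt
    return xs, ys

# Example: a 2-state chain with well-separated emission means
rng = random.Random(0)
trans = [[0.9, 0.1], [0.2, 0.8]]
xs, ys = simulate_hmm(trans, means=[0.0, 3.0], sds=[1.0, 1.0], n=200, rng=rng)
```

In the simulated output only {Y_k} would be handed to the statistician; the path {X_k} plays the role of the unobserved chain.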
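Since inference must be based on {Y_k} alone, the likelihood of the observations marginalises out the hidden chain. The abstract does not spell out how; one standard tool is the forward recursion, sketched here under the same illustrative assumptions (Gaussian emissions, uniform initial distribution), with scaling to avoid numerical underflow:

```python
import math

def gauss_pdf(y, mu, sd):
    # Normal density, used as the conditional density of Y_n given X_n
    return math.exp(-0.5 * ((y - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def log_likelihood(ys, trans, means, sds):
    """Log-likelihood of {Y_k} under an HMM, summing out the hidden chain
    with the forward recursion; uniform initial distribution assumed."""
    d = len(trans)
    # alpha_j = P(Y_1, X_1 = j), then rescaled at every step
    alpha = [gauss_pdf(ys[0], means[j], sds[j]) / d for j in range(d)]
    c = sum(alpha)
    ll = math.log(c)
    alpha = [a / c for a in alpha]
    for y in ys[1:]:
        alpha = [gauss_pdf(y, means[j], sds[j]) *
                 sum(alpha[i] * trans[i][j] for i in range(d))
                 for j in range(d)]
        c = sum(alpha)
        ll += math.log(c)
        alpha = [a / c for a in alpha]
    return ll
```

This quantity is what both the classical criteria and the Bayesian posterior build on; the recursion costs O(n d^2), linear in the length of the observed sequence.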
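The classical penalized-likelihood criteria mentioned above can be written down directly. A minimal sketch — the criteria themselves are standard, but note that how one counts free parameters for a d-state HMM depends on the chosen emission family, which the abstract leaves open:

```python
import math

def aic(loglik, n_params):
    # Akaike information criterion: log-likelihood penalised by parameter count
    return -2.0 * loglik + 2.0 * n_params

def bic(loglik, n_params, n_obs):
    # Bayesian information criterion: penalty grows with the sample size,
    # so it penalises large d more heavily than AIC for long sequences
    return -2.0 * loglik + n_params * math.log(n_obs)

# Illustrative parameter count for a d-state HMM with Gaussian emissions:
# d*(d-1) free transition probabilities plus 2*d emission parameters.
def hmm_param_count(d):
    return d * (d - 1) + 2 * d
```

One would fit candidate models for d = 1, 2, 3, ... and pick the d minimising the criterion — in contrast to the Bayesian route of the talk, which places a prior on d and explores the joint posterior.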