A *Markov chain* is a discrete stochastic process with discrete
states and discrete transitions between them. At each time
instant the system is in one of $N$ possible states, numbered from
one to $N$. At regularly spaced discrete times, the system switches
its state, possibly back to the same state. The initial state of the
chain is denoted $X_0$ and the states after each time of change are
$X_1, X_2, \ldots, X_T$. A standard first-order Markov chain has the
additional property that the probabilities of the future states depend
only on the current state and not on the ones before
it [48]. Formally this means that

$$P(X_{t+1} = j \mid X_t = i, X_{t-1} = i_{t-1}, \ldots, X_0 = i_0) = P(X_{t+1} = j \mid X_t = i).$$

This is called the *Markov property*.
Because of the Markov property, the complete probability distribution of the states of a Markov chain is defined by the initial distribution and the state transition probability matrix

$$A = (a_{ij})_{i,j=1}^{N}, \qquad a_{ij} = P(X_{t+1} = j \mid X_t = i).$$
Let us denote $\pi_i = P(X_0 = i)$ and $\pi = (\pi_1, \ldots, \pi_N)$. In the general case the transition probabilities could be time dependent, i.e. $a_{ij}(t) = P(X_{t+1} = j \mid X_t = i)$, but in this thesis only the time-independent case is considered.
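As an illustration of the time-independent case, a chain with initial distribution $\pi$ and fixed transition matrix $A$ can be sampled step by step; the following is a minimal Python sketch, where the two-state numbers are hypothetical and not taken from the thesis:

```python
import random

def simulate_chain(pi, A, T, rng=random):
    """Sample a state sequence X_0, ..., X_T from a time-homogeneous Markov chain.

    pi : initial-state probabilities, pi[i] = P(X_0 = i)
    A  : row-stochastic transition matrix, A[i][j] = P(X_{t+1} = j | X_t = i)
    """
    # Draw X_0 from the initial distribution pi.
    state = rng.choices(range(len(pi)), weights=pi)[0]
    states = [state]
    # At each of the T steps, draw the next state from row `state` of A.
    for _ in range(T):
        state = rng.choices(range(len(A)), weights=A[state])[0]
        states.append(state)
    return states

# Hypothetical two-state chain (states indexed 0 and 1).
pi = [0.6, 0.4]
A = [[0.7, 0.3],
     [0.2, 0.8]]
print(simulate_chain(pi, A, 10))
```

Note that each row of $A$ must sum to one, since row $i$ is the conditional distribution of the next state given the current state $i$.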

This allows the evaluation of the probability of a sequence of states $X_0, X_1, \ldots, X_T$, given the model parameters $\lambda = (\pi, A)$, as

$$P(X_0, X_1, \ldots, X_T \mid \lambda) = \pi_{X_0} \prod_{t=1}^{T} a_{X_{t-1} X_t}.$$
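The product form of this probability translates directly into code: one factor for the initial state and one factor per transition. A minimal Python sketch, again with hypothetical two-state parameters:

```python
def sequence_probability(states, pi, A):
    """P(X_0, ..., X_T | lambda) = pi_{X_0} * prod_t a_{X_{t-1} X_t}."""
    p = pi[states[0]]            # initial-state factor pi_{X_0}
    for prev, cur in zip(states, states[1:]):
        p *= A[prev][cur]        # one transition factor a_{X_{t-1} X_t}
    return p

# Hypothetical two-state chain (states indexed 0 and 1).
pi = [0.6, 0.4]
A = [[0.7, 0.3],
     [0.2, 0.8]]
print(sequence_probability([0, 0, 1], pi, A))  # 0.6 * 0.7 * 0.3
```

In practice the product of many small probabilities underflows quickly, so implementations usually sum log-probabilities instead; the sketch keeps the direct product to mirror the formula.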