Question
To my (basic) understanding a Markov process is a process wherein the future state of a system only depends on the current state, and not on the past states of the system.
I was wondering what the standard approach is for systems that inherently hold information about their past.
One could imagine, in the discrete-time case, a system carrying within it a representation of its three previous states. There are two cases to distinguish here:
The future state of the system is not a function of this memory, but only of factors that have nothing to do with it. This is perhaps a boring case (although there might be interesting examples I don't know of).
The future state of the system is a function of this memory information, yet in a "Markovian" way. This is the case I am most interested in.
Are these things that would be considered as Markovian systems (perhaps on a different scale)? Are there examples of these kinds of systems?
Explanation / Answer
Yes. The same system can - at least in many cases - be described by either a stochastic process with memory or by a Markov process. The point is that in order to write it as a Markov process, one must add enough variables encoding the memory. For example, an autoregressive moving average (ARMA) process is defined as a process with memory, but each such process has an equivalent representation as a linear state space model, which is Markovian.
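As a concrete illustration of this idea, here is a minimal sketch (coefficients chosen purely for illustration) of an AR(3) process: simulated once directly, where each step reads the three previous values, and once in companion (state-space) form, where the three-value memory is stacked into a single state vector so that the next state depends only on the current one, i.e. the stacked process is Markov. Both simulations produce the same path given the same noise.

```python
import numpy as np

# A hypothetical AR(3) process (coefficients chosen only for illustration):
#   x_t = 0.5 x_{t-1} - 0.2 x_{t-2} + 0.1 x_{t-3} + w_t
coeffs = np.array([0.5, -0.2, 0.1])

def simulate_with_memory(noises):
    """Simulate the AR(3) directly: each step reads the three previous values."""
    past = np.zeros(3)  # (x_{t-1}, x_{t-2}, x_{t-3})
    out = []
    for w in noises:
        x = coeffs @ past + w
        out.append(x)
        past = np.array([x, past[0], past[1]])
    return np.array(out)

# The same process in companion (state-space) form: stack the memory into
# the state s_t = (x_t, x_{t-1}, x_{t-2}). Then s_{t+1} is a function of
# s_t and fresh noise alone, so the stacked process is Markov.
A = np.array([[0.5, -0.2, 0.1],
              [1.0,  0.0, 0.0],
              [0.0,  1.0, 0.0]])

def simulate_markov(noises):
    """Simulate the equivalent first-order (Markov) recursion s_{t+1} = A s_t + noise."""
    s = np.zeros(3)
    out = []
    for w in noises:
        s = A @ s + np.array([w, 0.0, 0.0])
        out.append(s[0])  # the first state component is x_t
    return np.array(out)
```

The price of the Markov form is visible in the code: the state went from one scalar to a three-dimensional vector, exactly the "add enough variables encoding the memory" step described above.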
Thus, from a practical point of view, it is a matter of convenience, or of what you are trying to achieve.
If you want a minimal representation with the fewest variables, you usually need memory, while if you want a Markovian model (often the more flexible and computationally more attractive choice), you need a larger state.