Markov Model


What is a Markov Model?
Markov models are among the most powerful tools available to engineers and scientists for analyzing complex systems. The analysis yields results for both the time-dependent evolution of the system and its steady-state behavior.

For example, in Reliability Engineering, the operation of a system may be represented by a state diagram, which captures the states and transition rates of a dynamic system. This diagram consists of nodes (each representing a possible state of the system, determined by the states of its individual components and sub-components) connected by arrows (each representing the rate at which the system transitions from one state to another). Transitions may be triggered by a variety of events, for example the failure or repair of an individual component. Each state-to-state transition is characterized by a probability distribution. Under reasonable assumptions, the system's operation may be analyzed using a Markov model.
A Markov model analysis can yield a variety of useful performance measures describing the operation of the system, including the following:

  • system reliability
  • availability
  • mean time to failure (MTTF)
  • mean time between failures (MTBF)
  • the probability of being in a given state at a given time
  • the probability of repairing the system within a given time period (maintainability)
  • the average number of visits to a given state within a given time period
and many other measures; a brief computational sketch of a few of them follows.
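
The sketch below is an illustrative assumption rather than part of the original discussion: it builds a small continuous-time Markov model of a hypothetical two-unit repairable system and reads off two of the measures listed above, the steady-state availability and the mean time to failure. The rates, the three-state layout and the variable names are chosen only for the example.

import numpy as np

# Assumed failure and repair rates for a hypothetical two-unit system (per hour).
lam, mu = 1e-3, 1e-1

# Generator (rate) matrix Q: Q[i, j] is the rate of moving from state i to
# state j, and each row sums to zero.
# State 0 = both units working, state 1 = one unit failed, state 2 = both failed.
Q = np.array([
    [-2 * lam,      2 * lam,   0.0],
    [      mu,  -(mu + lam),   lam],
    [     0.0,           mu,   -mu],
])

# Steady-state probabilities: solve pi @ Q = 0 together with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

# Availability: the system is up unless both units have failed.
print("steady-state availability:", pi[0] + pi[1])

# MTTF: treat state 2 as absorbing and compute the expected time to reach it
# from state 0, using the sub-generator restricted to the up states.
t_up = np.linalg.solve(-Q[:2, :2], np.ones(2))
print("MTTF starting from the all-up state:", t_up[0])

The probability of being in a given state at a given time comes from the same generator matrix via its matrix exponential, as shown in the next sketch.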

The name Markov model is derived from one of the assumptions that allows the system to be analyzed: the Markov property. The Markov property states that, given the current state of the system, the future evolution of the system is independent of its history. The Markov property is assured if the transition probabilities are governed by exponential distributions with constant failure or repair rates; in this case we have a stationary, or time-homogeneous, Markov process. This model is useful for describing electronic systems with repairable components that either function or fail. As an example, such a Markov model could describe a computer system with components consisting of CPUs, RAM, network cards, hard disk controllers and hard disks.
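
To make the constant-rate case concrete, the following sketch (an assumed example, with illustrative rate values and names) models a single repairable component that either works or has failed, and compares the transient state probabilities obtained from the matrix exponential of the generator with the standard closed-form availability of the two-state model.

import numpy as np
from scipy.linalg import expm

# Assumed constant failure and repair rates (per hour).
lam, mu = 2e-4, 5e-2

# Two states: 0 = working, 1 = failed.  Generator matrix with zero row sums.
Q = np.array([
    [-lam,  lam],
    [  mu,  -mu],
])

p0 = np.array([1.0, 0.0])          # start in the working state

for t in (10.0, 100.0, 1000.0):
    # State probabilities at time t; by the Markov property they depend only
    # on the current distribution and the elapsed time, not on the path taken.
    p_t = p0 @ expm(Q * t)
    # Closed-form availability of the two-state model, for comparison.
    a_t = mu / (lam + mu) + lam / (lam + mu) * np.exp(-(lam + mu) * t)
    print(f"t={t:7.1f}  P(working)={p_t[0]:.6f}  closed form={a_t:.6f}")

# Steady-state availability and the familiar single-component measures.
print("steady-state availability:", mu / (lam + mu))
print("MTTF = 1/lam =", 1 / lam, "hours;  MTTR = 1/mu =", 1 / mu, "hours")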
The assumptions of the Markov model may be relaxed, and the model adapted, in order to analyze more complicated systems. Markov models are applicable to systems with common-cause failures, such as a lightning strike affecting a computer system. Markov models can handle degradation, as may be the case with a mechanical system: the wear of an aging automobile, for example, leads to a non-stationary, or non-homogeneous, Markov process whose transition rates are time-dependent. Markov models can also address imperfect fault coverage, complex repair policies, multi-operational-state components, induced failures, dependent failures, and other sequence-dependent events.
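
As a sketch of the non-homogeneous case, the example below (again an assumption chosen for illustration) lets the failure rate grow with the age of the equipment while the repair rate stays constant, and integrates the Kolmogorov forward equations numerically, since the constant-rate closed forms above no longer apply.

import numpy as np
from scipy.integrate import solve_ivp

mu = 5e-2                           # assumed constant repair rate (per hour)

def lam(t):
    # Assumed failure rate that increases with age, modelling wear-out.
    return 1e-4 * (1.0 + t / 500.0)

def forward(t, p):
    # Kolmogorov forward equations dp/dt = p @ Q(t) with a time-dependent
    # generator matrix (two states: 0 = working, 1 = failed).
    Q = np.array([
        [-lam(t), lam(t)],
        [     mu,    -mu],
    ])
    return p @ Q

sol = solve_ivp(forward, (0.0, 2000.0), [1.0, 0.0], dense_output=True)
for t in (100.0, 1000.0, 2000.0):
    print(f"t={t:6.0f}  P(working)={sol.sol(t)[0]:.6f}")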
