I'm attempting to model probability for a finite state machine. The probability of the next state depends on the current state. However, I'd like to handle the case where I don't know the starting state.
Originally I just averaged all the output values, but I realized they'd need to be weighted by the probability of being in a given state. I feel like I can construct a system of linear equations, but I'm not as comfortable in this area of mathematics (also, please correct me if I'm making a faulty assumption or saying the wrong thing).
To hopefully help with my explanation, I'll try a simple case. Say there are two states, A and B.
State A
0.8: A
0.2: B
State B
0.5: A
0.5: B
In this system, B is clearly the less likely state, but what is the probability of going into B from an unknown state? Is this the same thing as asking: given no information, what is the probability we are in state B? Or am I wrong, and is it as simple as averaging the probabilities?
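To check my intuition numerically, I wrote a tiny sketch (plain Python; the names are my own) that just applies the transition table above repeatedly, starting from each state in turn:

```python
# Transition table from above: P[current][next].
P = {"A": {"A": 0.8, "B": 0.2},
     "B": {"A": 0.5, "B": 0.5}}

def step(dist):
    """One iteration: P(next = s) = sum over c of P(current = c) * P[c][s]."""
    return {s: sum(dist[c] * P[c][s] for c in P) for s in P}

for start in ("A", "B"):
    dist = {s: 1.0 if s == start else 0.0 for s in P}
    for _ in range(50):
        dist = step(dist)
    print(start, dist)
```

Interestingly, both starting points seem to settle on the same distribution (roughly 0.71 for A and 0.29 for B), which is *not* what I get by averaging the two rows (0.65 and 0.35), so maybe this converged distribution is the kind of answer I'm after?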
Any help you can offer is appreciated (or pointers to where I can find the answer on my own).
EDIT for more details:
Specifically, I'm creating a computer program that calculates the likelihood of being in a certain state based on discrete observable results (using a simple Bayesian calculation). However, since this is fundamentally a state machine (and therefore the previous state affects the likelihood of the current state), I'm trying to derive a general probability distribution over the states for when I don't have information about the last state.
It didn't occur to me that it'd be helpful, but I do know which state the system starts in, though I don't know how many iterations through the system have occurred.
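Given that, my current guess (and it is only a guess) at the "system of linear equations" is: I want a distribution that is unchanged by one more step of the machine. For the two-state example above that reduces to a single balance equation, which I can solve directly:

```python
# Guess: a distribution (pi_a, pi_b) that is unchanged by one step must have
# equal probability flow in each direction: pi_a * P[A][B] == pi_b * P[B][A].
p_ab = 0.2   # chance of moving A -> B (from the table above)
p_ba = 0.5   # chance of moving B -> A
pi_b = p_ab / (p_ab + p_ba)   # solve the balance equation with pi_a + pi_b = 1
pi_a = 1.0 - pi_b
print(pi_a, pi_b)   # 0.714... and 0.285...
```

If this is right, the probability of being in B with no other information is 0.2/0.7 = 2/7, not the 0.35 I'd get from averaging. Please correct me if this setup is wrong.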