I made my own simple multiplayer 2d game to understand this stuff and just want to let you know that your site was invaluable to understanding client side prediction and server side reconciliation! Thanks so much for that!
Yeah, I don't understand this. I read the FTC complaint, and it hinged on the issue of collecting the data without explicit user consent, i.e., a pop-up or message. Wouldn't nearly all internet companies fall into this bucket? E.g., log aggregators, analytics SDKs, etc.?
This is how I got into MtG, and I can confirm that it's super fun. If you're lucky enough to find a group of friends who will participate, don't let the opportunity pass you by.
Probably just fallout from the fact that EBS snapshots are stored in S3. If you can't create an EBS volume from a snapshot, you won't be able to launch an EC2 instance backed by it.
In a previous comment, I sought to further understand how the Lottery possesses the Markov property. Based on your definition above, I can see that it does simply because the distribution of X_t (the winning numbers) has the same dependence on X_{t-1} as it does on X_1, ..., X_{t-1}, that is, zero. Do I have that correct?
Yes, exactly. More generally (and for the same reason), any sequence of iid random variables will form a Markov chain.
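To make the iid point concrete, here's a minimal sketch of a lottery as an iid process. I'm assuming a hypothetical pick-6-of-49 format; the exact format doesn't matter for the Markov property:

```python
import random

random.seed(42)

def draw_winning_numbers():
    # Hypothetical 6-of-49 lottery: each draw samples fresh,
    # independently of every previous draw, so
    # P(X_t | X_1, ..., X_{t-1}) = P(X_t).
    return sorted(random.sample(range(1, 50), 6))

history = [draw_winning_numbers() for _ in range(3)]
print(history)
```

Since the transition distribution never consults the previous state, the chain trivially satisfies the Markov property.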
For an example without the Markov property, consider the sequence of random variables X_1, X_2, ... with X_1 being either -1 or 1 with equal probability, and X_t being normally distributed with mean X_1 and standard deviation 1.
Knowing the history X_1, X_2, ..., X_{t-1} gives you the exact distribution of X_t (since you know X_1), while only knowing X_{t-1} gives you much less information. This fails to be a Markov chain because the state X_{t-1} doesn't "remember" which of the two possible distributions is being used.
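You can see this failure numerically. The sketch below simulates many chains from the construction above, then conditions on X_{t-1} landing near 0: if the process were Markov, the two groups (X_1 = +1 vs. X_1 = -1) would have the same distribution for X_t, but they clearly don't. (The simulation parameters, like the 0.1 window, are just arbitrary choices for the demo.)

```python
import random

random.seed(0)

def sample_chain(length=3):
    # X_1 is -1 or +1 with equal probability
    x1 = random.choice([-1, 1])
    chain = [x1]
    # every later X_t is Normal(mean=X_1, sd=1), independent of X_{t-1}
    for _ in range(length - 1):
        chain.append(random.gauss(x1, 1))
    return chain

# Condition on X_2 being (almost) the same value in both groups,
# then check whether X_1 still predicts X_3.
near_zero_pos, near_zero_neg = [], []
for _ in range(200_000):
    x1, x2, x3 = sample_chain(3)
    if abs(x2) < 0.1:
        (near_zero_pos if x1 == 1 else near_zero_neg).append(x3)

mean_pos = sum(near_zero_pos) / len(near_zero_pos)
mean_neg = sum(near_zero_neg) / len(near_zero_neg)
print(mean_pos, mean_neg)  # roughly +1 and -1, despite identical X_2
```

Even with X_{t-1} held fixed, X_t concentrates around whichever value X_1 took, so the history carries information the current state doesn't.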
Awesome. Thanks! Again, very clear. This is interesting stuff. It makes me curious about applications of stochastic processes in general - time to read the course notes you linked. They look like fun problems to program and model.
I'm interested in the above article and the above two comments, however, I don't understand the Lottery example. Can you clarify how it does have the Markov property?
I'm not seeing how the distribution of possible winning numbers relates at all to the current state. I'm trying to phrase this in the language of the above two comments. Help me out if I've got it all wrong. =)
The Markov property is that you can model the system as being dependent on the immediately previous state + noise.
The lottery ignores the previous state and is defined purely by the noise, so it is a (trivial) Markov process.
More complex systems depend on the entire history (e.g., to model a poker player you have to consider all of their actions up to the current one). Newtonian systems are Markov: if you know the state of the system, you can run it forward in time deterministically. Even if the state of the Newtonian system is not fully known, you can still propagate the distribution of states forward in time precisely.
current_state = previous_state + process_noise
It's typically expressed in matrix math, but the idea is as simple as that.
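That one-line update can be sketched as a one-dimensional random walk (in the matrix form this would be something like x_t = A x_{t-1} + w_t; here A is just 1 and the noise is Gaussian, both arbitrary choices for the demo):

```python
import random

random.seed(1)

def step(previous_state, noise_sd=1.0):
    # current_state = previous_state + process_noise
    process_noise = random.gauss(0, noise_sd)
    return previous_state + process_noise

state = 0.0
trajectory = [state]
for _ in range(5):
    state = step(state)
    trajectory.append(state)
print(trajectory)
```

Each step only looks at the previous state, which is exactly the Markov property.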
{1G}
Human Druid - Sage
{G}, {T}: Create a 0/1 Plant creature token named Seed of Knowledge.
Sacrifice X Plants: Look at the top X cards of an opponent's library.
1/1