Hacker News | kernel_sanders's comments

SageMaker

{1G}

Human Druid - Sage

{G}, Tap: Create 0/1 Plant Token named Seed of Knowledge

Sacrifice {X} Plants: Look at the top X cards of opponent's library

1/1


I thought MTG too.


I made my own simple multiplayer 2D game to understand this stuff and just want to let you know that your site was invaluable for understanding client-side prediction and server-side reconciliation! Thanks so much for that!


How about airborndocs?


Yeah, I don't understand this. I read the FTC complaint and it hinged on the issue of collecting the data without explicit user consent, i.e. a pop-up or message. Wouldn't nearly all internet companies fall into this bucket? E.g., log aggregators, analytics SDKs, etc.?


This is how I got into MtG, and I can confirm that it's super fun. If you're lucky enough to find a group of friends who will participate, don't let the opportunity pass you by.


This is one of my favorite DF stories bar none. The last time you posted it here and I read it, I laughed so hard and showed it to so many people.

Since then, I lost the link and forgot the terms to search for it! I'm so glad you reposted it. =)


Same for us


Can't launch instances in EC2 in US-East-1 at the moment.


It appears EC2 is affected as well now:

12:51 AM PDT We are investigating increased error rates for the EC2 APIs and launch failures for new EC2 instances in the US-EAST-1 Region.


Probably just fallout from the fact that EBS snapshots are stored in S3. If you can't create an EBS volume, you won't be able to launch an EC2 instance from it.


It appears that S3-based AMIs fail to launch entirely as well, showing as unavailable.


Thank you for the clear explanation.

In a previous comment, I sought to further understand how the Lottery possesses the Markov property. Based on your definition above, I can see that it does simply because the distribution of the winning numbers X_t has the same dependence on X_{t-1} as it does on X_1, ..., X_{t-1}, that is, zero. Do I have that correct?


Yes, exactly. More generally (and for the same reason), any sequence of iid random variables will form a Markov chain.

For an example without the Markov property, consider the sequence of random variables X_1, X_2, ... with X_1 being either -1 or 1 with equal probability, and, for t >= 2, X_t being normally distributed with mean X_1 and standard deviation 1.

Knowing the history X_1, X_2, ..., X_{t-1} gives you the exact distribution of X_t (since you know X_1), while only knowing X_{t-1} gives you much less information. This fails to be a Markov chain because the state X_{t-1} doesn't "remember" which of the two possible distributions is being used.
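The non-Markov sequence described above is easy to simulate. Here's a minimal sketch (assumption: plain Python with only the standard library; the function name is mine, not from the thread):

```python
import random

def sample_path(length, seed=None):
    """Sample the non-Markov sequence: X_1 is -1 or +1 with equal
    probability, and every later X_t is Normal(X_1, 1)."""
    rng = random.Random(seed)
    x1 = rng.choice([-1, 1])
    path = [float(x1)]
    for _ in range(length - 1):
        # Each step depends on X_1, not on the previous value.
        path.append(rng.gauss(x1, 1))
    return path

# Knowing the full history pins down X_1 (it's path[0]), so the
# conditional distribution of the next value is exactly Normal(X_1, 1).
# Knowing only the latest value does not reveal X_1, so the process
# is not Markov.
```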


Awesome. Thanks! Again, very clear. This is interesting stuff. It makes me curious about applications of stochastic processes in general - time to read the course notes you linked. They look like fun problems to program and model.

Edit: Wikipedia, of course, has a good overview of applications of Markov chains. http://en.wikipedia.org/wiki/Markov_chain#Applications


I'm interested in the above article and the above two comments, however, I don't understand the Lottery example. Can you clarify how it does have the Markov property?

I'm not seeing how the distribution of possible winning numbers relates at all to the current state. I'm trying to phrase this in the language of the above two comments. Help me out if I've got it all wrong. =)


The Markov property is that you can model the system as being dependent on the immediately previous state + noise.

The lottery ignores the previous state and is defined purely by the noise, so it is a (trivial) Markov process.

More complex systems depend on the entire history (e.g., to model a poker player you have to consider all of their actions up to the current point). Newtonian systems are Markov: if you know the state of the system, you can run it forward in time deterministically. Even if the state of the Newtonian system is not fully known, you can still run the distribution of states forward in time precisely.

current_state = previous_state + process_noise

This is typically expressed in matrix math, but the idea is as simple as that.
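The one-liner above can be sketched concretely. Here's a hedged example (a hypothetical constant-velocity state [position, velocity]; all names and parameters are illustrative assumptions, not from the comment):

```python
import random

def step(state, dt=1.0, noise_std=0.1, rng=random):
    """One Markov step: current_state = F(previous_state) + process_noise.
    The transition only ever reads the previous state, never the history."""
    pos, vel = state
    # Deterministic transition (the "previous_state" part):
    new_pos = pos + vel * dt
    new_vel = vel
    # Additive process noise (the "process_noise" part):
    return (new_pos + rng.gauss(0, noise_std),
            new_vel + rng.gauss(0, noise_std))

# Run the chain forward a few steps from an initial state.
state = (0.0, 1.0)
for _ in range(3):
    state = step(state)
```

With `noise_std=0` the step is purely deterministic, which is the Newtonian case described above.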

