by Sebastian Fischer
Last Updated January 16, 2018 10:19 AM

Assume that we toss a coin n times and the result looks like this

HTHTHTHTHTHTHTHT... Would a (probability) frequentist conclude that the probability of the coin landing heads is 0.5? It seems to be the long-run frequency, but on the other hand it is not really random whether the coin lands heads or tails, which makes me reluctant to talk about probabilities in this case. Since P(Heads | odd toss) = 1 and P(Heads | even toss) = 0, the example strengthens my intuition that probabilities reflect our knowledge and reasoning abilities rather than some objective property (which is how I believe frequentists interpret probabilities). Even if we assume that the long-run frequency exists, it is not this fact that leads us to attribute a probability of 0.5 to the above example, but our lack of knowledge about whether the current toss is an odd or an even one.
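A minimal sketch of the situation described above (the deterministic rule is an assumption matching the sequence in the question): each toss is fully determined by its index, yet the long-run frequency of heads is exactly 0.5.

```python
def toss(i):
    """Deterministic 'coin': heads on odd-numbered tosses (1-indexed), tails otherwise."""
    return "H" if i % 2 == 1 else "T"

# Generate a long deterministic sequence H, T, H, T, ...
n = 10_000
sequence = [toss(i) for i in range(1, n + 1)]

# The long-run frequency of heads is 0.5 even though nothing is random.
freq_heads = sequence.count("H") / n
print(freq_heads)  # 0.5
```

Nothing here is stochastic; the 0.5 emerges purely from the structure of the sequence, which is exactly the tension the question raises.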

EDIT: While all the answers were helpful, I want to add that my focus is more on a philosophical level. Frequentists define probabilities as long-run frequencies; the above example is a long-run frequency, yet the randomness/predictability of the experiment depends on the information state of the observer. Does this imply that probability is subjective?

A frequentist will inspect the data and see the patterns that you discuss.

After that she will adopt an appropriate model, such as a Markov chain, and evaluate its properties.

This has no direct link to Bayesian vs. frequentist.
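To illustrate the Markov-chain approach this answer suggests (a sketch, not the answerer's own code): estimate the transition probabilities P(next | current) directly from the observed sequence. On the alternating data, the fitted transitions are degenerate.

```python
from collections import Counter

sequence = "HTHTHTHTHTHTHTHT"

# Count observed transitions (current toss, next toss).
transitions = Counter(zip(sequence, sequence[1:]))

# Maximum-likelihood transition probabilities of the fitted Markov chain.
for current in "HT":
    total = sum(c for (a, _), c in transitions.items() if a == current)
    for nxt in "HT":
        p = transitions[(current, nxt)] / total if total else 0.0
        print(f"P({nxt}|{current}) = {p:.2f}")
```

The fitted chain puts P(T|H) = 1 and P(H|T) = 1: the model captures the perfect alternation that the i.i.d. coin-toss model cannot.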

No one, including frequentists, can say anything without assuming a model. If you assume a model, such as a simple "biased coin toss" $Y_i \sim \text{Bernoulli}(p)$, then you can talk about probabilities. In that case, you can make inferences about $p$, such as that it is "probably" (here things start to differ) close to 0.5. But you can also come up with other models, such as the "alternating" one you are proposing (though it's not really specified).

Here, a statistician typically uses an information criterion to decide on the model. These criteria weigh the number of parameters against the likelihood of the data to see how efficiently the data are explained. A Bayesian may look at a posterior predictive check. A machine learner might find via cross-validation that the "alternating" model is much better at prediction. This is not to say that any of these approaches is better than the others, but that they all need a model.
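A hedged sketch of the information-criterion idea (the exact computation is my illustration, not from the answer): compare AIC = 2k − 2·log-likelihood for the i.i.d. Bernoulli model against a first-order Markov ("alternating") model on alternating data.

```python
import math

sequence = "HT" * 50  # 100 perfectly alternating tosses
n = len(sequence)

# Model 1: i.i.d. Bernoulli with MLE p = fraction of heads (k = 1 parameter).
p = sequence.count("H") / n  # 0.5
loglik_iid = sum(math.log(p if s == "H" else 1 - p) for s in sequence)
aic_iid = 2 * 1 - 2 * loglik_iid

# Model 2: first-order Markov chain (k = 2 transition parameters),
# first toss modelled as Bernoulli(p). On this data the fitted
# transitions are 0 and 1, so every toss after the first is certain
# and contributes zero to the log-likelihood.
loglik_markov = math.log(p)  # only the first toss is uncertain
aic_markov = 2 * 2 - 2 * loglik_markov

print(aic_iid, aic_markov)  # the Markov model wins by a wide margin
```

Despite its extra parameter, the Markov model explains the alternation almost perfectly, so its AIC is far lower; this is the sense in which the data "choose" the alternating model once both models are on the table.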

Independently of whether you are a Bayesian or a frequentist, you first have to build a model of what you are investigating. The usual approach to the question of the probabilities of heads and tails of a coin would be a binomial test under the assumption of i.i.d. observations. In your case, however, the observations are clearly not independent: each toss depends on the one before it. Model violations are model violations, regardless of Bayesianism or frequentism. Sometimes, however, Bayesian approaches make it easier to formulate individual models. Autocorrelation is a term well known to frequentists.
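A small sketch of the dependence this answer points out (my illustration): coding H as 1 and T as 0, the lag-1 sample autocorrelation of an alternating sequence is close to −1, the strongest possible negative dependence, so the i.i.d. assumption behind the binomial test is badly violated.

```python
# Code the alternating sequence numerically: H -> 1, T -> 0.
x = [1 if s == "H" else 0 for s in "HT" * 50]
n = len(x)

# Lag-1 sample autocorrelation (standard biased estimator).
mean = sum(x) / n
var = sum((xi - mean) ** 2 for xi in x) / n
acf1 = sum((x[i] - mean) * (x[i + 1] - mean) for i in range(n - 1)) / (n * var)

print(round(acf1, 2))  # -0.99, i.e. near-perfect negative autocorrelation
```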

HTHTHT is a perfectly valid sequence of coin tosses; it is no more probable than HHHHHH, HHTHTH, etc. The fact that it doesn't "look" random does not make it non-random. Check the book *Thinking, Fast and Slow* by Daniel Kahneman, who discusses these kinds of biases.

Moreover, even if the process is deterministic, if you picked a random "flip" from the sequence, the probability of seeing heads would be 1/2, since exactly half of the flips are heads.
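The random-pick argument above can be sketched in a few lines (a simulation I'm adding for illustration): sampling a toss uniformly at random from the deterministic alternating sequence gives heads about half the time.

```python
import random

random.seed(0)  # reproducible illustration
sequence = "HT" * 50  # deterministic alternating sequence

# Pick a random flip from the sequence, many times over.
draws = [random.choice(sequence) for _ in range(100_000)]
freq = draws.count("H") / len(draws)
print(freq)  # close to 0.5
```

The randomness here lives entirely in *which* toss we look at, not in the tosses themselves, which is precisely the point being made.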

The frequentist "long run" argument is not about observing an actual long-run sequence of events; it is an *interpretation* of probability. When a frequentist thinks of some process as a random variable, she considers what the "long run" stochastic process behind it would be. This is an interpretation, not a statement that the process actually behaves like this. For example, a sequence of coin flips is a deterministic process that can be described in terms of physics; there is nothing "random" or "i.i.d." about it, but interpreting it this way makes a nice, simple statistical model.

See also Bayesian vs frequentist Interpretations of Probability
