How to deal with uncertainty, think in probabilities and work with random variables


All models discussed before were basically regular programs. They take input and produce deterministic output. Deterministic here means that no matter how many times you rerun the model, or repeat the experiment, you get the same output for the same input. Deterministic models are very useful; however, in the real world we often deal with uncertainty. It means that each time you can't be 100% sure about the outcome, even knowing all the input parameters. The good news is we know how to reason and work with uncertainty - using probability theory. The bad news is probability theory is not really obvious and has some caveats.

## What the heck is probability

The main problem with studying probability theory is understanding what probability actually is. It seems like an obvious thing to define, but it's not.

## Probability as a frequency

A lot of books about probability theory, especially older ones, contain only one definition of probability: the limiting frequency of an event. In those books you can find canonical examples with unbiased coins or 52 playing cards. Probability in such cases is described as a frequency - how many times an event appears compared to the total number of repetitions, assuming the experiment is performed under the same conditions infinitely many times. Mathematically this concept can be formulated as

$$P(Event) = \lim_{N \to \infty} \frac{N_{Event}}{N}$$

## Probability as a measure of belief

There are events we can't reason about in terms of frequency: they are too rare, or they have already happened in the past. It's simply impossible to use the formula above to describe their uncertainty, yet probability theory still seems like the right tool for such events. Let's use one of the basic examples - tossing a coin. Assume we performed an infinitely large number of experiments with that coin and know it's not biased, i.e. in the frequentist sense the probability of landing on each side is 1/2. After the coin is tossed, one player looks at the result and sees that it's Heads. For another player the result is still hidden. What is the probability of the result being Heads in this case?
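The limiting-frequency definition above lends itself to a quick empirical check: simulate many flips of a fair coin and watch the running frequency of Heads approach 1/2. A minimal sketch (the seed and flip counts are arbitrary choices for reproducibility):

```python
import random

random.seed(42)

def running_frequency(n_flips):
    """Flip a fair coin n_flips times and return the fraction of Heads."""
    heads = sum(random.random() < 0.5 for _ in range(n_flips))
    return heads / n_flips

# The empirical frequency drifts toward the true probability 0.5
# as the number of repetitions grows.
for n in (100, 10_000, 1_000_000):
    print(n, running_frequency(n))
```

The convergence is slow and noisy, which is exactly why the definition needs the limit as N goes to infinity rather than any finite number of flips.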

## Kolmogorov's probability

Both points of view on probability - as a frequency or as a measure of belief - are very natural and easy to stick to. However, both have their drawbacks. A mathematically stronger and unifying definition of probability was proposed by Kolmogorov. Kolmogorov's probability is just a value that satisfies 3 axioms:

- $ 0 \leq P(E) \leq 1 $, where E is event
- $ P(S) = 1 $, where S is a sample space
- $ P \left( \bigcup\limits_{i=1}^{\infty} E_{i}\right) = \sum\limits_{i=1}^{\infty} P(E_{i})$, for mutually exclusive events $E_1, E_2, \ldots$

In other words, we assume that for each event E from the sample space there exists a number P(E), called the probability of event E, such that:

- It can't be less than 0 or more than 1
- The probability of the sample space equals 1
- The probability that at least one event from a sequence of mutually exclusive events occurs is just the sum of their probabilities

Such an axiomatic approach is compatible with both interpretations.
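The axioms are easy to verify mechanically for a finite sample space. Below is a sketch using a fair six-sided die, where an event is any subset of outcomes and the uniform probabilities are an illustrative assumption:

```python
# Sample space of a fair six-sided die; an event is any subset of it.
sample_space = frozenset({1, 2, 3, 4, 5, 6})

def P(event):
    """Uniform probability of an event (assumed fair die)."""
    return len(set(event) & sample_space) / len(sample_space)

# Axiom 1: every probability lies in [0, 1].
assert all(0 <= P({outcome}) <= 1 for outcome in sample_space)

# Axiom 2: the whole sample space has probability 1.
assert P(sample_space) == 1

# Axiom 3 (finite form): for mutually exclusive events,
# the probability of the union is the sum of the probabilities.
odd, even = {1, 3, 5}, {2, 4, 6}
assert P(odd | even) == P(odd) + P(even)
```

Note the frequentist 1/6 per face and a belief of 1/6 per face both satisfy these checks - the axioms don't care where the numbers come from.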

## Random variables

Sometimes the outcome of an experiment is a numerical value; sometimes it isn't, but it can be transformed into a number. Some transformations can be more interesting than the actual event and its value: when tossing 2 dice in a board game, it's more important to know the sum of the dice than the individual values. A quantity, or more formally a function, that assigns a numerical value to each outcome of the experiment is called a random variable. Because random variables are based on random outcomes, we can use probability theory to work with them too. In modeling it's much easier to deal with random variables than with raw events.
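The two-dice example can be made concrete: the outcome is a pair of die values, and the random variable is the function mapping that pair to its sum. A sketch (the roll count and seed are arbitrary):

```python
import random
from collections import Counter

random.seed(0)

# A random variable is a function from outcomes to numbers.
# Here an outcome is a pair of dice, and X maps it to their sum.
def X(outcome):
    die1, die2 = outcome
    return die1 + die2

rolls = [(random.randint(1, 6), random.randint(1, 6)) for _ in range(100_000)]
sums = Counter(X(roll) for roll in rolls)

# 7 comes up most often: more outcome pairs map to 7 than to any other sum.
print(sums.most_common(3))
```

This also shows why random variables are convenient for modeling: the 36 raw outcomes collapse into 11 values with a well-defined distribution.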


## Probability distributions

A probability distribution is a mathematical function that provides the probabilities of occurrence of different possible outcomes in an experiment.

When incorporating uncertainty in a model, you'll definitely use one or another probability distribution to describe a random phenomenon. Like random variables, probability distributions are divided into two groups - discrete and continuous. Discrete distributions are characterized by a probability mass function (PMF); continuous ones by a probability density function (PDF).

The main difference between the two is that a PMF defines the probability that a discrete random variable is exactly equal to some value; all PMF values are non-negative and at most 1. In contrast, the value of a PDF at a point is not the probability of the random variable being equal to that value (in the continuous case the probability of getting any single value is always `0` - why?). It rather specifies the probability of the random variable falling within a range of values. To get the actual probability of falling in a specific range, the PDF needs to be integrated.
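The PMF/PDF distinction can be illustrated side by side: a fair die's PMF assigns real probabilities that sum to 1, while a normal density only yields probabilities after integration. A sketch using a simple Riemann sum (the grid size is an arbitrary choice):

```python
import numpy as np

# PMF of a fair die: P(X = k) for each value k.
# The values are probabilities: non-negative and summing to 1.
pmf = {k: 1 / 6 for k in range(1, 7)}
assert abs(sum(pmf.values()) - 1) < 1e-12

# PDF of a standard normal distribution. A density value is not a
# probability: P(X = x) is 0 for any single point x.
def normal_pdf(x):
    return np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)

# Probabilities come from integrating the density over a range,
# here approximated numerically: P(-1 <= X <= 1) is about 0.68.
x = np.linspace(-1.0, 1.0, 10_001)
prob = float(np.sum(normal_pdf(x)) * (x[1] - x[0]))
print(prob)
```

Note that `normal_pdf(0)` is about 0.399, a perfectly valid density value, yet `P(X = 0)` is still 0 - only ranges carry probability in the continuous case.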


## Probabilistic models

A probabilistic (stochastic) model is a model that includes one or more random variables. It can look similar to a deterministic model, but incorporating random variables makes the output random as well. Frequently the goal is to estimate the probability distribution of the output, or, in the case of Bayesian inference, the posterior distribution of the model parameters. This can be achieved by running a large number of simulations with the Monte Carlo method.
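A minimal sketch of the Monte Carlo idea: take a deterministic core, add a random variable, rerun the model many times, and summarize the output distribution. The model, its parameters, and the noise scale below are all illustrative assumptions:

```python
import random
import statistics

random.seed(1)

# A toy probabilistic model: a deterministic linear response
# plus a Gaussian measurement error (the random variable).
def model(x):
    noise = random.gauss(0, 0.5)
    return 2 * x + 1 + noise

# Monte Carlo: rerun the model many times at the same input
# and study the distribution of the outputs.
outputs = [model(3.0) for _ in range(100_000)]

print(statistics.mean(outputs))   # close to the deterministic value 7
print(statistics.stdev(outputs))  # close to the noise scale 0.5
```

Rerunning with the same input gives a different output each time; the distribution of those outputs, not any single run, is the answer a probabilistic model provides.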