29 May, 2022
|                | Frequentist                      | Bayesian                        |
|----------------|----------------------------------|---------------------------------|
| Probability    | Long-run frequency, $P(D|H)$     | Degree of belief, $P(H|D)$      |
| Parameters     | Fixed, true values               | Random, with a distribution     |
| Observed data  | One possible sample              | Fixed, true                     |
| Inferences     | About the data                   | About the parameters            |
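To make the contrast concrete, here is a minimal Python sketch, assuming illustrative data of 7 successes in 10 trials and a Beta(1, 1) prior (both assumptions, not from the notes). It compares a frequentist confidence interval, a statement about repeated data, with a Bayesian credible interval, a statement about the parameter.

```python
# Sketch (assumed data): 7 successes in 10 trials; estimate success probability p.
import numpy as np
from scipy import stats

successes, n = 7, 10

# Frequentist: p is a fixed, true value; uncertainty comes from imagining
# repeated data sets, summarised here by a 95% Wald confidence interval.
p_hat = successes / n
se = np.sqrt(p_hat * (1 - p_hat) / n)
print(f"Frequentist: p_hat = {p_hat:.2f}, "
      f"95% CI = ({p_hat - 1.96 * se:.2f}, {p_hat + 1.96 * se:.2f})")

# Bayesian: p is random, described by a distribution; the observed data are fixed.
# Beta(1, 1) prior + binomial likelihood gives a Beta(8, 4) posterior.
posterior = stats.beta(1 + successes, 1 + n - successes)
lo, hi = posterior.ppf([0.025, 0.975])
print(f"Bayesian: posterior mean = {posterior.mean():.2f}, "
      f"95% credible interval = ({lo:.2f}, {hi:.2f})")
```

The confidence interval treats $p$ as fixed and the data as one possible sample; the credible interval treats the observed data as fixed and describes a distribution over $p$, mirroring the table above.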
 
 
[Figure: three panels, n = 10, n = 10, n = 100]
 
 
|  | Population A | Population B |
|---|---|---|
| Percentage change | 0.46 | 45.46 | 
| Prob. >5% decline | 0 | 0.86 | 
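Summaries like these fall straight out of posterior draws. A minimal sketch, using simulated stand-in draws (assumed numbers and an assumed sign convention where negative values mean decline, not the actual model output):

```python
# Hypothetical sketch: summaries like the table above computed from
# posterior draws of percentage change. The draws are simulated stand-ins.
import numpy as np

rng = np.random.default_rng(1)
draws_a = rng.normal(loc=0.5, scale=2.0, size=4000)    # population A, % change
draws_b = rng.normal(loc=-45.0, scale=30.0, size=4000)  # population B, % change

for name, draws in [("A", draws_a), ("B", draws_b)]:
    mean_change = draws.mean()
    prob_decline = (draws < -5).mean()   # P(more than 5% decline)
    print(f"Population {name}: mean change = {mean_change:.2f}%, "
          f"P(>5% decline) = {prob_decline:.2f}")
```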
\[ \begin{aligned} P(H\mid D) &= \frac{P(D\mid H) \times P(H)}{P(D)}\\[1em] \mathsf{posterior~belief~(probability)} &= \frac{\mathsf{likelihood} \times \mathsf{prior~probability}}{\mathsf{normalizing~constant}} \end{aligned} \]
The normalizing constant is what makes the posterior a probability: it rescales the numerator so the result sums (or integrates) to 1, turning a frequency distribution into a probability distribution.
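As a minimal worked sketch of the formula (an assumed two-hypothesis coin example), the normalizing constant is obtained by summing likelihood times prior over every hypothesis:

```python
# Minimal Bayes' theorem sketch (assumed example): two hypotheses about a
# coin's probability of heads, updated after observing 6 heads in 10 flips.
from scipy.stats import binom

hypotheses = {"fair": 0.5, "biased": 0.8}     # H and the heads probability it implies
prior = {"fair": 0.5, "biased": 0.5}          # P(H)

heads, flips = 6, 10
likelihood = {h: binom.pmf(heads, flips, p)   # P(D | H)
              for h, p in hypotheses.items()}

# Normalizing constant: P(D) = sum over hypotheses of P(D|H) * P(H)
p_data = sum(likelihood[h] * prior[h] for h in hypotheses)

posterior = {h: likelihood[h] * prior[h] / p_data for h in hypotheses}
print(posterior)   # degrees of belief that sum to 1
```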
\(P(D\mid H)\): the likelihood of the data given the hypothesis
\(P(H)\), the prior: subjectivity?
The denominator is often intractable: \[P(H\mid D) = \frac{P(D\mid H) \times P(H)}{P(D)}\]
\(P(D)\) - the probability of the data, summed (or integrated) over all possible hypotheses
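A sketch of why this is the hard part, assuming a Beta(2, 2) prior and binomial data for illustration: in one dimension \(P(D)\) can be approximated by summing likelihood times prior over a grid, but the grid grows exponentially with the number of parameters, which is what motivates the samplers below.

```python
# Sketch (assumed example): approximating P(D) for a continuous parameter
# by summing likelihood * prior over a grid. Feasible in 1-D; the grid
# grows exponentially with the number of parameters, hence "intractable".
import numpy as np
from scipy.stats import binom, beta

heads, flips = 6, 10
p_grid = np.linspace(0.001, 0.999, 1000)        # candidate hypotheses for p
prior = beta.pdf(p_grid, 2, 2)                   # assumed Beta(2, 2) prior
likelihood = binom.pmf(heads, flips, p_grid)     # P(D | p) at each grid point

dp = p_grid[1] - p_grid[0]
p_data = np.sum(likelihood * prior) * dp         # approximate P(D)
posterior = likelihood * prior / p_data          # posterior density on the grid

print(p_data, np.sum(posterior) * dp)            # second value is ~ 1
```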
Markov chain Monte Carlo (MCMC) sampling: draw samples from the posterior using only the unnormalized numerator \(P(D\mid H) \times P(H)\), so \(P(D)\) never has to be computed. Common samplers (a minimal Metropolis-Hastings sketch follows this list):
- Metropolis-Hastings
- Gibbs
- NUTS (the No-U-Turn Sampler)
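A minimal Metropolis-Hastings sketch, reusing the assumed coin example from above: note that it only ever evaluates the unnormalized posterior, likelihood times prior, so \(P(D)\) never appears.

```python
# Minimal Metropolis-Hastings sketch (assumed example): sample the posterior
# of a coin's heads probability after 6 heads in 10 flips, using only the
# unnormalized posterior (likelihood * prior) -- P(D) is never needed.
import numpy as np
from scipy.stats import binom, beta

heads, flips = 6, 10

def log_unnorm_posterior(p):
    if not 0 < p < 1:
        return -np.inf
    return binom.logpmf(heads, flips, p) + beta.logpdf(p, 2, 2)

rng = np.random.default_rng(0)
samples, current = [], 0.5
for _ in range(20_000):
    proposal = current + rng.normal(scale=0.1)     # symmetric random-walk proposal
    log_accept = log_unnorm_posterior(proposal) - log_unnorm_posterior(current)
    if np.log(rng.uniform()) < log_accept:         # accept with prob min(1, ratio)
        current = proposal
    samples.append(current)

draws = np.array(samples[5_000:])                  # drop burn-in
print(draws.mean(), np.quantile(draws, [0.025, 0.975]))
```

Gibbs and NUTS differ in how proposals are made, but they serve the same purpose: drawing posterior samples without ever computing \(P(D)\).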