{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Lab 2, Exercise 1: Airline meal" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "An airline serves two kinds of meal on their long-haul flights: beef or chicken. They want to know the average food preference of their customers and to optimize the number of meals to load on each flight accordingly.\n", "A survey on the choices of customers in previous flights reveals that, out of 4,380 customers, 1,807 people chose beef, while 2,573 people chose chicken.\n", "\n", "Let's model the probability of a future customer taken at random to choose beef as a stochastic process with probability $p_{beef}$.\n", "\n", "1. Based on the supplied data, what is the posterior probability distribution of the parameter $p_{beef}$, $P(p_{beef}|data)$? Plot this distribution.\n", "2. Suppose, for simplicity, that we know the value of $p_{beef}$. What is the smallest number of beef meals the airline should load on a 219 seat flight to be 99% sure that every customer who wants beef gets it?\n", "3. Now relax the assumption of a known value of $p_{beef}$ and answer the question above while marginalizing over all possible values of $p_{beef}$, as found in part 1." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Solution" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Part 1\n", "\n", "Let's set $n_b = 1807$, $n_c = 2573$.\n", "Let's assume a uniform prior between 0 and 1 for $p_{beef}$, $P(p_{beef}) = U(0,1)$.\n", "The likelihood is given by the binomial distribution:\n", "\n", "$P(n_b,n_c|p_{beef}) \\propto p_{beef}^{n_b}(1-p_{beef})^{n_c}$\n", "\n", "Since we chose a flat prior, the posterior is proportional to the above expression, as a function of $p_{beef}$.\n", "\n", "$P(p_{beef}|n_b,n_c) \\propto p_{beef}^{n_b}(1-p_{beef})^{n_c}$\n", "\n", "(This is also proportional to a Beta distribution in $p_{beef}$, $Beta(n_b+1,n_c+1)$)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import pylab\n", "\n", "\n", "# define the data\n", "n_b = 1807\n", "n_c = 2573\n", "\n", "# creates array of p_beef\n", "p_beef = np.linspace(0., 1., 1001)\n", "\n", "# calculate the posterior over the array. \n", "# In order to avoid numerical problems caused by the very large exponents,\n", "# first calculate the log of the posterior, subtract the maximum value, then take the exp\n", "logp = n_b*np.log(p_beef) + n_c*np.log(1. - p_beef)\n", "logp -= logp.max()\n", "ppost = np.exp(logp)\n", "\n", "pylab.plot(p_beef, ppost)\n", "pylab.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Part 2\n", "\n", "Our best guess for the true value of $p_{beef}$ (assuming there is one), is\n", "\n", "$p_{guess} = \\dfrac{n_b}{n_b + n_c}$\n", "\n", "To find the 99%-ile of the distribution $P(n_{b1})$, where $n_{b1}$ is the number of customers that choose beef on a 219-seat flight, we can proceed as follows:\n", "\n", "- For a large number of iterations, simulate values of $n_{b1}$ from a binomial distribution with 219 \"tries\" and probability $p_{guess}$\n", "- Find the 99%-ile of the simulated values of $n_{b1}$." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "p_guess = float(n_b)/(float(n_b+n_c))\n", "nseat = 219\n", "\n", "n_sim = 10000\n", "\n", "nb1_sim = np.zeros(n_sim)\n", "\n", "for i in range(n_sim):\n", " # quick and dirty way to draw from a binomial distribution:\n", " # draw random numbers from a uniform distribution. The values smaller than p_guess correspond to beef choices\n", " # count the number of beef\n", " x = np.random.rand(nseat)\n", " nb1_here = (x < p_guess).sum()\n", " nb1_sim[i] = nb1_here\n", " \n", "print np.percentile(nb1_sim, 99.)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Part 3\n", "\n", "We need to adjust the answer to Part 2 to allow for the fact that we don't know the value of $p_{beef}$ exactly. In other words, we need to marginalize over all possible values of $p_{beef}$, weighted by the posterior probability distribution found in Part 1.\n", "\n", "The quickest way to achieve this is, again, by simulation:\n", "\n", "- For a large number of iterations, draw a value of $p_beef$ from the posterior distribution (a Beta distribution).\n", "- For each value of $p_{beef}$, draw a value of $n_{b1}$ for a 219-seat flight.\n", "- Find the 99%-ile of the resulting distribution in $n_{b1}$." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "n_sim = 10000\n", "\n", "nb1_sim = np.zeros(n_sim)\n", "\n", "# draws n_sim values of p_beef\n", "p_beef_sim = np.random.beta(n_b+1, n_c+1, n_sim)\n", "for i in range(n_sim):\n", " x = np.random.rand(nseat)\n", " nb1_here = (x < p_beef_sim[i]).sum()\n", " nb1_sim[i] = nb1_here\n", " \n", "print np.percentile(nb1_sim, 99.)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Possible variations\n", "\n", "The answer to Part 2 and Part 3 did not change much. This is because the data we used to constrain the model was very informative. 
You could explore how the answers change when the survey numbers $n_b$ and $n_c$ are reduced. What happens if we use data from a single flight? How does the answer depend on the prior in that case?" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 2", "language": "python", "name": "python2" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.13" } }, "nbformat": 4, "nbformat_minor": 2 }