{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Lab 4, Exercise 1: Metropolis algorithm\n", "\n", "Let $P(\\theta) \\propto \\dfrac{1}{\\theta^5}\\exp{\\left\\{-\\dfrac{1}{\\theta^2}\\right\\}}$ be an un-normalized probability distribution. \n", "1. Draw a sample from $P(\\theta)$ with a Markov Chain Monte Carlo, using the Metropolis algorithm.\n", "2. Plot the chain and inspect it.\n", "3. Measure the acceptance rate. Try varying the step size of the jump rule and see how the acceptance rate and the behavior of the chain change.\n", "4. After discarding the burn-in phase (if present), make a histogram of the sample." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Solution\n", "\n", "In order to use the Metropolis algorithm, we need to define:\n", "1. A jump rule $J_t(\\theta^*|\\theta^{t-1})$\n", "2. A starting point\n", "\n", "Let's use a Gaussian jump rule with step size $\\sigma_\\theta = 0.3$:\n", "\n", "$J_t(\\theta^*|\\theta^{t-1}) = \\dfrac{1}{\\sqrt{2\\pi}\\sigma_\\theta}\\exp{\\left\\{-\\dfrac{(\\theta^* - \\theta^{t-1})^2}{2\\sigma_\\theta^2}\\right\\}}$\n", "\n", "Let's draw a random number between 0 and 1 as the starting point." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import pylab\n", "\n", "# define the probability distribution\n", "def logpfunc(theta):\n", " return -5*np.log(theta) - 1./theta**2\n", "\n", "sigma_theta = 0.3 # step size of the jump rule\n", "\n", "theta_0 = np.random.rand(1) # starting point of the chain\n", "\n", "nsamp = 50000 # number of steps in the chain\n", "\n", "theta_samp = np.zeros(nsamp)\n", "theta_samp[0] = theta_0\n", "\n", "logp_samp = np.zeros(nsamp)\n", "logp_samp[0] = logpfunc(theta_0)\n", "\n", "naccept = 0\n", "for i in range(1, nsamp):\n", " # propose a new point\n", " theta_new = np.random.normal(theta_samp[i-1], sigma_theta, 1)\n", " \n", " # calculate the log-likelihood at the proposed position\n", " logp_new = logpfunc(theta_new)\n", " \n", " accept = False\n", " if logp_new > logp_samp[i-1]:\n", " accept = True\n", " else:\n", " x = np.random.rand(1)\n", " if x < np.exp(logp_new - logp_samp[i-1]):\n", " accept = True\n", " else:\n", " accept = False\n", " \n", " if accept:\n", " theta_samp[i] = theta_new\n", " logp_samp[i] = logp_new\n", " naccept += 1\n", " else:\n", " theta_samp[i] = theta_samp[i-1]\n", " logp_samp[i] = logp_samp[i-1]\n", "\n", "print 'acceptance rate: %3.2f'%(naccept/float(nsamp))\n", "pylab.plot(theta_samp)\n", "pylab.show()\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "nbins = 30\n", "bins = np.linspace(0., 3., nbins+1)\n", "pylab.hist(theta_samp, bins=bins)\n", "pylab.show()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 2", "language": "python", "name": "python2" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.13" } }, "nbformat": 4, "nbformat_minor": 2 }