Example of Maximum Likelihood computation with R.

First we need to define the log-likelihood function (or rather its negative). This function needs two inputs, the parameter value and the data, but it is treated as a function of the parameter only (the data are given and fixed). Take the exponential distribution given in the textbook; the parameter is beta. The R function we defined in class was

    myNEGloglikexp <- function( beta, obs ) {
        - sum( log( (1/beta) * exp( - obs/beta ) ) )
    }

A slightly simplified version, using log( (1/beta) * exp(-x/beta) ) = log(1/beta) - x/beta:

    myNEGloglikexp <- function( beta, obs ) {
        - sum( log( 1/beta ) - obs/beta )
    }

Next we generate some random data (rexp uses rate 1 by default, so the true beta here is 1):

    mydata <- rexp( 100 )

Now we are ready to find the minimum of the function myNEGloglikexp:

    nlm( f=myNEGloglikexp, p=1, hessian=TRUE, obs=mydata )

Sometimes you need to try a different initial value ( p= ) to get rid of warning messages. This is the starting value for the search for the minimum, and a good initial value is essential.

In the output:
the $estimate will be our MLE.
the $gradient should be close to zero (as the derivative at a minimum must be).
the $hessian is the second derivative of the negative log-likelihood at the MLE (equivalently, the negative of the second derivative of the log-likelihood), so it is just the observed Fisher information. This should be positive; otherwise you should think twice about whether the estimate found is a true minimum.

According to the approximate distribution theory of the MLE, we can construct an approximate 95% confidence interval for beta as

    $estimate +- 1.96 * sqrt( 1/$hessian )
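
For concreteness, here is a minimal sketch of that confidence-interval computation, assuming myNEGloglikexp and mydata are defined as above. The names fit, se, and ci are illustrative, not part of the handout; note that for a one-parameter problem nlm returns $hessian as a 1 x 1 matrix, hence the [1, 1] indexing.

    # 'fit', 'se', 'ci' are illustrative names chosen here
    fit <- nlm( f=myNEGloglikexp, p=1, hessian=TRUE, obs=mydata )
    se  <- sqrt( 1 / fit$hessian[1, 1] )       # sqrt of inverse observed information
    ci  <- fit$estimate + c(-1.96, 1.96) * se  # approximate 95% CI for beta
    ci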
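
As a quick sanity check (not part of the original handout): for the exponential distribution with mean beta, setting the derivative of the log-likelihood to zero gives the closed-form MLE beta-hat = sample mean, so nlm's answer should agree closely with mean(mydata).

    # closed-form MLE for this model is the sample mean
    mean( mydata )
    fit$estimate    # should be very close to mean(mydata)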