
# loglike

## PURPOSE

Log-likelihood function.

## SYNOPSIS

[LL,C]=loglike(phat,varargin)

## DESCRIPTION

CALL:  [L,cov] = loglike(phat,x1,x2,...,xN,dist);

    L    = -log(f(phat|data)), i.e., the negative log-likelihood of the
           parameters phat given the data (the quantity to minimize in
           maximum likelihood estimation).
    cov  = asymptotic covariance matrix of phat (valid when phat is the
           maximum likelihood estimate).
    phat = [A1,A2,...,Am], vector of distribution parameters.
    x1,x2,...,xN = vectors of data points.
    dist = string containing the name of the PDF.

LOGLIKE is a utility function for maximum likelihood estimation. It works
with any PDF having the calling syntax

    f = testpdf(x1,x2,...,xN,A1,A2,...,Am)

where x1,x2,...,xN contain the points at which the PDF is evaluated and
A1,A2,...,Am are the distribution parameters.

Example: MLE and asymptotic covariance of phat:

    R = wweibrnd(1,3,100,1);
    phat0 = [1.3 2.5];                             % initial guess
    phat  = fminsearch('loglike',phat0,[],R,'wweibpdf')
    [L, cov] = loglike(phat,R,'wweibpdf')
    [phat2 cov2] = wweibfit(R)                     % compare with wweibfit
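The same workflow (simulate data, then minimize the negative log-likelihood with a derivative-free simplex search) can be sketched in Python, assuming SciPy is available. `scipy.optimize.minimize` with `method='Nelder-Mead'` plays the role of `fminsearch`; the data generation by inverse transform mirrors `wweibrnd(1,3,100,1)`:

```python
import numpy as np
from scipy.optimize import minimize  # Nelder-Mead ~ fminsearch

# Simulate 100 points from W(a=1, c=3) by inverse transform sampling.
rng = np.random.default_rng(0)
u = rng.uniform(size=100)
data = 1.0 * (-np.log(1.0 - u)) ** (1.0 / 3.0)

def nll(phat):
    """Negative log-likelihood for the Weibull W(a,c) with
    f(x;a,c) = (c/a)(x/a)^(c-1) exp(-(x/a)^c), as in wweibpdf."""
    a, c = phat
    if a <= 0 or c <= 0:
        return np.inf          # reject infeasible parameters
    z = data / a
    return -np.sum(np.log(c / a) + (c - 1) * np.log(z) - z ** c)

res = minimize(nll, x0=[1.3, 2.5], method='Nelder-Mead')
a_hat, c_hat = res.x           # should land near the true (1, 3)
```

With 100 samples the estimates typically fall within a few percent of the true parameters; the spread is quantified by the asymptotic covariance matrix that `loglike` returns as its second output.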

## CROSS-REFERENCE INFORMATION

This function calls:
- error      Display message and abort function.
- ischar     True for character array (string).
- kron       Kronecker tensor product.
- num2cell   Convert numeric array into cell array.
- qr         Orthogonal-triangular decomposition.
This function is called by:
- braylfit   Parameter estimates for Beta-Rayleigh data.
- ochi98fit  Parameter estimates and confidence intervals for Ochi data.
- wbetafit   Parameter estimates for Beta data.
- wchi2fit   Parameter estimates for Chi squared data.
- wgamfit    Parameter estimates for Gamma data.
- wtfit      Parameter estimates for Student's T data.
- wtraylfit  Parameter estimates for Truncated Rayleigh data.
- wtweibfit  Parameter estimates for truncated Weibull data.

## SOURCE CODE

function [LL,C]=loglike(phat,varargin)
%LOGLIKE Log-likelihood function.
%
% CALL:  [L,cov] = loglike(phat,x1,x2,...,xN,dist);
%
%       L    = -log(f(phat|data)), i.e., the negative log-likelihood of
%              the parameters phat given the data.
%       cov  = asymptotic covariance matrix of phat (valid when phat is
%              the maximum likelihood estimate).
%       phat = [A1,A2,...,Am], vector of distribution parameters.
% x1,x2,...,xN = vectors of data points.
%       dist = string containing the name of the PDF.
%
%  LOGLIKE is a utility function for maximum likelihood estimation.
%  It works with any PDF having the calling syntax
%
%       f = testpdf(x1,x2,...,xN,A1,A2,...,Am)
%
%  where x1,x2,...,xN contain the points at which the PDF is evaluated
%  and A1,A2,...,Am are the distribution parameters.
%
% Example: MLE and asymptotic covariance of phat:
%   R = wweibrnd(1,3,100,1);
%   phat0 = [1.3 2.5];                             % initial guess
%   phat = fminsearch('loglike',phat0,[],R,'wweibpdf')
%   [L, cov] = loglike(phat,R,'wweibpdf')
%   [phat2 cov2] = wweibfit(R)                     % compare with wweibfit

% Tested on: matlab 5.3
% History:
% by pab 31.10.2000

error(nargchk(3,inf,nargin))

params = num2cell(phat(:).',1);
data1  = varargin(1:end-1);  % cell array of vectors with data points
dist   = varargin{end};
if ~ischar(dist), error('Distribution is unspecified'), end
for ix=1:length(data1),
  data1{ix} = data1{ix}(:);  % make sure it is a column vector
end

x  = feval(dist,data1{:},params{:})+eps;  % eps guards against log(0)
LL = -sum(log(x));  % negative log-likelihood

if nargout == 2
  Nd     = length(x);
  np     = length(params);
  delta  = eps^.4;
  delta2 = delta^2;

  switch 2,  % method 2 (central differences) is the one actually used
  case 1,
    % Approximate 1/(nE( (d L(x|theta)/dtheta)^2 )) with
    %             1/(d L(theta|x)/dtheta)^2
    % This is a bad approximation, especially for the off-diagonal elements.
    xp     = zeros(Nd,np);
    sparam = params;
    for ix = 1:np,
      sparam{ix} = params{ix}+delta;
      xp(:,ix)   = feval(dist,data1{:},sparam{:})+eps;
      sparam{ix} = params{ix};
    end
    J = (log(xp) - repmat(log(x),1,np))./delta;
    [Q,R] = qr(J,0);
    Rinv  = R\eye(np);
    C     = Rinv'*Rinv;
  case 2,
    % Approximate 1/(nE( (d L(x|theta)/dtheta)^2 )) with
    %             1/(d^2 L(theta|x)/dtheta^2)
    % using central differences.
    % This is usually a much better estimate than case 1 and slightly
    % faster than case 3.
    H = zeros(np);  % Hessian matrix
    for ix=1:np,
      sparam = params;
      sparam{ix} = params{ix}+delta;
      x  = feval(dist,data1{:},sparam{:})+eps;
      fp = sum(log(x));
      sparam{ix} = params{ix}-delta;
      x  = feval(dist,data1{:},sparam{:})+eps;
      fm = sum(log(x));
      H(ix,ix) = (fp+2*LL+fm)/delta2;  % note: -2*sum(log(x)) = +2*LL
      for iy=ix+1:np,
        sparam{ix} = params{ix}+delta;
        sparam{iy} = params{iy}+delta;
        x   = feval(dist,data1{:},sparam{:})+eps;
        fpp = sum(log(x));
        sparam{iy} = params{iy}-delta;
        x   = feval(dist,data1{:},sparam{:})+eps;
        fpm = sum(log(x));
        sparam{ix} = params{ix}-delta;
        x   = feval(dist,data1{:},sparam{:})+eps;
        fmm = sum(log(x));
        sparam{iy} = params{iy}+delta;
        x   = feval(dist,data1{:},sparam{:})+eps;
        fmp = sum(log(x));
        H(ix,iy) = (fpp-fmp-fpm+fmm)/(4*delta2);
        H(iy,ix) = H(ix,iy);
      end
    end
    % invert the Hessian matrix (i.e. invert the observed information)
    C = -H\eye(np);
  case 3,
    % Approximate 1/(nE( (d L(x|theta)/dtheta)^2 )) with
    %             1/(d^2 L(theta|x)/dtheta^2)
    % using differentiation matrices.
    % This is the same as case 2 when N=3.
    if 1,
      xn = [1; 0; -1];
      D(:,:,1) = [...
         1.5000   -2.0000    0.5000;...
         0.5000    0.0000   -0.5000;...
        -0.5000    2.0000   -1.5000];
      D(:,:,2) = [...
         1.0000   -2.0000    1.0000;...
         1.0000   -2.0000    1.0000;...
         1.0000   -2.0000    1.0000];
    else
      % If you have the differentiation matrix suite toolbox you may use
      % the following instead; increasing N gives better accuracy.
      %N=3; % NB!: N must be odd
      %[xn D]=chebdif(N,2);  % construct differentiation matrices
    end
    N  = length(xn);
    xn = xn.';
    % Construct differentiation matrices
    D11 = kron(D(:,:,1),D(:,:,1));
    %D20 = kron(D(:,:,2),eye(N));
    %D02 = kron(eye(N),D(:,:,2));
    H   = zeros(np);   % Hessian matrix
    LL2 = zeros(N,N);  % log-likelihood evaluated at phat and in the
                       % vicinity of phat
    N2 = (N+1)/2;      % the middle index
    LL2(N2,N2) = -LL;  % = sum(log(x))
    for ix=1:np,
      sparam = params;
      for iy = [1:N2-1 N2+1:N],
        sparam{ix} = params{ix}+xn(iy)*delta;
        x = feval(dist,data1{:},sparam{:})+eps;
        LL2(iy,N2) = sum(log(x));
      end
      H(ix,ix) = (D(N2,:,2)*LL2(:,N2))/delta2;
      for iy=ix+1:np,
        for iz=1:N,
          sparam{ix} = params{ix}+xn(iz)*delta;
          for iw=[1:N2-1 N2+1:N],
            sparam{iy} = params{iy}+xn(iw)*delta;
            x = feval(dist,data1{:},sparam{:})+eps;
            LL2(iz,iw) = sum(log(x));
          end
        end
        H(ix,iy) = D11((N^2+1)/2,:)*LL2(:)/delta2;
        H(iy,ix) = H(ix,iy);
      end
    end
    % invert the Hessian matrix (i.e. invert the observed information)
    C = -H\eye(np);
  end
end
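The covariance estimate in the central-difference branch (the one the `switch 2` actually selects) inverts a finite-difference Hessian of the log-likelihood. The same idea can be sketched in Python and checked against a case with a closed form: for exponential data with mean theta, the MLE is the sample mean and its asymptotic variance is theta^2/n. The function name `hessian_nll` is illustrative, not part of WAFO:

```python
import numpy as np

def hessian_nll(negloglike, phat, delta=None):
    """Central-difference Hessian of a negative log-likelihood at phat,
    mirroring the 'case 2' branch of loglike.m. The asymptotic covariance
    of the MLE is the inverse of this observed-information matrix."""
    phat = np.asarray(phat, dtype=float)
    m = phat.size
    if delta is None:
        delta = np.finfo(float).eps ** 0.4  # same step size as loglike.m
    H = np.zeros((m, m))
    f0 = negloglike(phat)
    for i in range(m):
        ei = np.zeros(m); ei[i] = delta
        # diagonal: standard second central difference
        H[i, i] = (negloglike(phat + ei) - 2.0 * f0
                   + negloglike(phat - ei)) / delta**2
        for j in range(i + 1, m):
            ej = np.zeros(m); ej[j] = delta
            # off-diagonal: four-point mixed central difference
            H[i, j] = (negloglike(phat + ei + ej) - negloglike(phat + ei - ej)
                       - negloglike(phat - ei + ej)
                       + negloglike(phat - ei - ej)) / (4.0 * delta**2)
            H[j, i] = H[i, j]
    return H

# Check: Exp(theta) data, NLL(theta) = n*log(theta) + sum(x)/theta.
x = np.array([0.5, 1.0, 1.5, 2.0])
nll = lambda p: np.sum(x / p[0] + np.log(p[0]))
theta_hat = x.mean()                       # MLE for the exponential mean
C = np.linalg.inv(hessian_nll(nll, [theta_hat]))
# C[0,0] should be close to theta_hat**2 / len(x)
```

Note that `loglike` inverts `-H` because it differentiates the log-likelihood `sum(log(x))` rather than the negative log-likelihood; the sketch above differentiates the NLL directly, so no sign flip is needed.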

Mathematical Statistics
Centre for Mathematical Sciences
Lund University with Lund Institute of Technology

Comments or corrections to the WAFO group

Generated on Thu 06-Oct-2005 02:21:16 for WAFO by m2html © 2003