------------------------------------------------------------------------------------------------------
       log:  c:\Imbook\bwebpage\Section2\mma05p3nlsbyml.txt
  log type:  text
 opened on:  17 May 2005, 13:54:20

. ********** OVERVIEW OF MMA05P3NLSBYML.DO **********

. * STATA Program
. * copyright (C) 2005 by A. Colin Cameron and Pravin K. Trivedi
. * used for "Microeconometrics: Methods and Applications"
. * by A. Colin Cameron and Pravin K. Trivedi (2005)
. * Cambridge University Press

. * Chapter 5.9 pp.159-63
. * Nonlinear Least Squares using Stata command ml

. * Provides third column of Table 5.7 for
. * (1) NLS using Stata ml command (easy to get robust s.e.'s)
. *     using generated data set mma05data.asc

. * Note: Use ml rather than nl as it is then much easier to get robust s.e.'s
. *       Can instead use Stata command nl: see program mma05p2nlsbynl.do

. * Related programs:
. *   mma05p1mle.do           OLS and MLE for the same data
. *   mma05p2nls.do           NLS (and WNLS and FGNLS) using Stata command nl
. *   mma05p4margeffects.do   Calculates marginal effects

. * To run this program you need data and dictionary files
. *   mma05data.asc   ASCII data set generated by mma05p1mle.do

. ********** SETUP **********

. set more off

. version 8

. ********** READ IN DATA and SUMMARIZE **********

. * Model:  y ~ exponential(exp(a + bx))
. *         x ~ N[mux, sigx^2]
. * f(y)   = exp(a + bx)*exp(-y*exp(a + bx))
. * lnf(y) = (a + bx) - y*exp(a + bx)
. * E[y]   = exp(-(a + bx))            note sign reversal for the mean
. * V[y]   = exp(-2*(a + bx)) = E[y]^2
. * Here a = 2, b = -1 and x ~ N[mux = 1, sigx^2 = 1],
. * and Table 5.7 uses N = 10,000.

. * Data was generated by program mma05p1mle.do

. infile y x using mma05data.asc
(10000 observations read)

. * Descriptive Statistics
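As a cross-check on the data-generating process documented above, the sketch below simulates the same model in Python (this is my illustration, not part of the original Stata program): y given x is exponential with rate exp(a + b*x), so E[y|x] = exp(-(a + b*x)), and with x ~ N[1, 1] the unconditional mean works out to exp(-0.5), close to the sample mean of y reported by summarize.

```python
import math
import random

# Simulate the DGP described in the comments above: a = 2, b = -1,
# x ~ N[mux=1, sigx^2=1], y | x ~ exponential with rate exp(a + b*x).
random.seed(12345)
a, b, n = 2.0, -1.0, 200_000

total = 0.0
for _ in range(n):
    x = random.gauss(1.0, 1.0)
    # random.expovariate(rate) has mean 1/rate = exp(-(a + b*x))
    total += random.expovariate(math.exp(a + b * x))
mean_y = total / n

# Unconditionally E[y] = E[exp(-(a + b*x))] = exp(-a - b*mux + b^2*sigx^2/2)
#                      = exp(-2 + 1 + 0.5) = exp(-0.5),
# which is consistent with the sample mean of y in the summarize output.
print(round(mean_y, 3))
```

The lognormal-moment step uses E[exp(t*x)] = exp(t*mux + t^2*sigx^2/2) for normal x, with t = -b.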
. describe

Contains data
  obs:        10,000
 vars:             2
 size:       120,000 (98.8% of memory free)
-------------------------------------------------------------------------------
              storage  display     value
variable name   type   format      label      variable label
-------------------------------------------------------------------------------
y               float  %9.0g
x               float  %9.0g
-------------------------------------------------------------------------------
Sorted by:
     Note:  dataset has changed since last saved

. summarize

    Variable |       Obs        Mean    Std. Dev.       Min        Max
-------------+--------------------------------------------------------
           y |     10000    .6194352    1.291416   .0000445   30.60636
           x |     10000    1.014313    1.004905  -2.895741   4.994059

. ********** DO THE ANALYSIS: NLS using STATA COMMAND ML **********

. * (1) NLS ESTIMATION USING STATA ML COMMAND (maximum likelihood)

. * Advantage: the ml command has robust standard errors as an option.

. * The NLS estimator minimizes SUM_i (y_i - g(x_i'b))^2.
. * Here let g(x'b) = exp(a + b*x) = exp(b1int + b2x*x), say.
. * In fact for this dgp E[y] = exp(-(a + bx)), so there is a sign reversal
. * for the mean.

. * To adjust this code to other NLS problems:
. * (a) If there are more regressors, say x1, x2 and x3, replace the ml model
. *     line with   ml model lf mlexp (y = x1 x2 x3) / sigma
. * (b) If the mean has a different functional form, say g(x'b), redefine
. *     `res' as   `res' = $ML_y1 - g(`theta')
. * (c) If the functional form for the mean is not single-index then the
. *     program will become considerably more complicated, with more args.

. * (1A) The program "mlexp" defines the objective function
. program define mlexp
  1.         version 8.0
  2.         args lnf theta sigma    /* theta contains b1int and b2x; sigma is the st. dev. of the error */
  3.         tempvar res             /* created to shorten the expression for lnf */
  4.         quietly gen double `res' = $ML_y1 - exp(-`theta')
  5.         quietly replace `lnf' = -0.5*ln(2*_pi) - ln(`sigma') - 0.5*`res'^2/`sigma'^2
  6. end

. * (1B) The following command gives the dep variable (y) and regressors (x + intercept)
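The reason maximizing mlexp's Gaussian lnf reproduces the NLS estimates can be checked numerically. Concentrating sigma out at sigma^2 = SSR/n gives lnL(a,b) = -(n/2)*(ln(2*pi) + 1) - (n/2)*ln(SSR(a,b)/n), a strictly decreasing function of SSR(a,b), so the likelihood maximizer is the SSR minimizer. The Python sketch below (my illustration, not from the book) verifies the monotone relation on simulated data:

```python
import math
import random

# Simulate data from the same DGP, then check that ranking candidate
# (a, b) values by SSR is the same as ranking them by the concentrated
# Gaussian log likelihood (in reverse).
random.seed(1)
n, a_true, b_true = 2_000, 2.0, -1.0
data = [(x, random.expovariate(math.exp(a_true + b_true * x)))
        for x in (random.gauss(1.0, 1.0) for _ in range(n))]

def ssr(a, b):
    # NLS objective: sum of squared residuals about the mean exp(-(a + b*x))
    return sum((y - math.exp(-(a + b * x))) ** 2 for x, y in data)

def conc_lnl(a, b):
    # Gaussian log likelihood with sigma^2 concentrated out at SSR/n
    return -0.5 * n * (math.log(2 * math.pi) + 1) \
           - 0.5 * n * math.log(ssr(a, b) / n)

trials = [(2.0, -1.0), (1.5, -0.8), (2.5, -1.2), (0.0, 0.0)]
ranked_by_ssr = sorted(trials, key=lambda p: ssr(*p))
ranked_by_lnl = sorted(trials, key=lambda p: -conc_lnl(*p))
print(ranked_by_ssr == ranked_by_lnl)
```

This is why the ml point estimates below coincide exactly with those from the nl command, even though ml is nominally doing maximum likelihood.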
. ml model lf mlexp (y = x) / sigma

. ml search
initial:       log likelihood =     -<inf>  (could not be evaluated)
feasible:      log likelihood = -35613.002
improve:       log likelihood = -19164.648
rescale:       log likelihood = -16938.923
rescale eq:    log likelihood = -16938.923

. ml maximize
initial:       log likelihood = -16938.923
rescale:       log likelihood = -16938.923
rescale eq:    log likelihood = -16938.923
Iteration 0:   log likelihood = -16938.923  (not concave)
Iteration 1:   log likelihood = -15504.033
Iteration 2:   log likelihood = -14673.535
Iteration 3:   log likelihood = -14272.637
Iteration 4:   log likelihood = -14263.775
Iteration 5:   log likelihood = -14263.761
Iteration 6:   log likelihood = -14263.761

                                                  Number of obs   =      10000
                                                  Wald chi2(1)    =   10492.88
Log likelihood = -14263.761                       Prob > chi2     =     0.0000

------------------------------------------------------------------------------
           y |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
eq1          |
           x |  -.9574683   .0093471  -102.43   0.000    -.9757883   -.9391483
       _cons |   1.887562   .0295701    63.83   0.000     1.829606   1.945519
-------------+----------------------------------------------------------------
sigma        |
       _cons |   1.007465   .0071239   141.42   0.000     .9935028   1.021428
------------------------------------------------------------------------------

. estimates store bnlsbymle

. * (1C) Adding ,robust gives heteroskedasticity-robust standard errors
. ml model lf mlexp (y = x) / sigma, robust

. ml search
initial:       log pseudo-likelihood =     -<inf>  (could not be evaluated)
feasible:      log pseudo-likelihood = -35613.002
improve:       log pseudo-likelihood = -17310.807
rescale:       log pseudo-likelihood = -17310.807
rescale eq:    log pseudo-likelihood = -16777.282
. ml maximize
initial:       log pseudo-likelihood = -16777.282
rescale:       log pseudo-likelihood = -16777.282
rescale eq:    log pseudo-likelihood = -16777.282
Iteration 0:   log pseudo-likelihood = -16777.282  (not concave)
Iteration 1:   log pseudo-likelihood = -16097.359
Iteration 2:   log pseudo-likelihood = -16013.711
Iteration 3:   log pseudo-likelihood = -14412.885
Iteration 4:   log pseudo-likelihood = -14264.159
Iteration 5:   log pseudo-likelihood = -14263.761
Iteration 6:   log pseudo-likelihood = -14263.761

                                                  Number of obs   =      10000
                                                  Wald chi2(1)    =     288.75
Log pseudo-likelihood = -14263.761                Prob > chi2     =     0.0000

------------------------------------------------------------------------------
             |               Robust
           y |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
eq1          |
           x |  -.9574683   .0563463   -16.99   0.000    -1.067905   -.8470317
       _cons |   1.887562    .127832    14.77   0.000     1.637016   2.138108
-------------+----------------------------------------------------------------
sigma        |
       _cons |   1.007465   .0561714    17.94   0.000     .8973713   1.117559
------------------------------------------------------------------------------

. estimates store bnlsbymlerobust

. ********** PRINT RESULTS: Third column of Table 5.7 p.111 **********

. * (1) NLS by ML - nonrobust and robust standard errors
. * The coefficient estimates are exactly the same as those using the nl command
. * The estimated standard errors are close - within 10% of those using the nl command
. * Table 5.7 reports the standard errors using the nl command
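Note that ,robust leaves the coefficients unchanged (-.9574683, 1.887562) and only inflates the standard errors: the sandwich estimator reuses the same point estimate and replaces only the variance formula. The toy example below illustrates the same effect for one-regressor OLS with heteroskedastic errors (my illustration, not the book's data or model).

```python
import math
import random

# One-regressor OLS through the origin, y = 2*x + e with Var(e|x) = x^2,
# comparing the nonrobust variance formula with the White/sandwich one.
random.seed(7)
n = 50_000
xs, ys = [], []
for _ in range(n):
    x = random.gauss(0.0, 1.0)
    xs.append(x)
    ys.append(2.0 * x + abs(x) * random.gauss(0.0, 1.0))  # error variance x^2

sxx = sum(x * x for x in xs)
bhat = sum(x * y for x, y in zip(xs, ys)) / sxx           # same bhat either way
res = [y - bhat * x for x, y in zip(xs, ys)]

# Nonrobust:  s^2 / sum(x^2)
se_naive = math.sqrt(sum(r * r for r in res) / (n - 1) / sxx)
# Robust (sandwich):  sum(x^2 e^2) / (sum(x^2))^2
se_robust = math.sqrt(sum((x * r) ** 2 for x, r in zip(xs, res)) / sxx ** 2)

print(round(se_robust / se_naive, 2))  # well above 1 under heteroskedasticity
```

In the exponential-regression output above the effect is much larger (robust standard errors roughly six times the nonrobust ones) because the conditional variance of y varies strongly with x.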
. estimates table bnlsbymle bnlsbymlerobust, b(%10.4f) se(%10.4f) t stats(N ll)

----------------------------------------
    Variable |  bnlsbymle   bnlsbyml~t
-------------+--------------------------
eq1          |
           x |    -0.9575      -0.9575
             |     0.0093       0.0563
             |    -102.43       -16.99
       _cons |     1.8876       1.8876
             |     0.0296       0.1278
             |      63.83        14.77
-------------+--------------------------
sigma        |
       _cons |     1.0075       1.0075
             |     0.0071       0.0562
             |     141.42        17.94
-------------+--------------------------
Statistics   |
           N | 10000.0000   10000.0000
          ll | -1.426e+04   -1.426e+04
----------------------------------------
                          legend: b/se/t

. ********** CLOSE OUTPUT **********

. log close
       log:  c:\Imbook\bwebpage\Section2\mma05p3nlsbyml.txt
  log type:  text
 closed on:  17 May 2005, 13:54:27
----------------------------------------------------------------------------------------------------