Empirical Methods for Microeconomic Applications
University of Lugano, Switzerland, May 27-31, 2019
William Greene, Department of Economics, Stern School of Business, New York University

1A. Descriptive Tools, Regression, Panel Data

Agenda
Day 1
  A. Descriptive Tools, Regression, Models, Panel Data, Nonlinear Models
  B. Binary choice and nonlinear modeling, panel data
  C. Ordered Choice, endogeneity, control functions, robust inference, bootstrapping
Day 2
  A. Models for count data, censoring, inflation models
  B. Latent class, mixed models
  C. Multinomial Choice
Day 3
  A. Stated Preference

Agenda for 1A
  Models and Parameterization
  Descriptive Statistics
  Regression
  Functional Form
  Partial Effects
  Hypothesis Tests
  Robust Estimation
  Bootstrapping
  Panel Data
  Nonlinear Models

Cornwell and Rupert Panel Data
Returns to Schooling Data, 595 Individuals, 7 Years
Variables in the file are:
  EXP   = work experience
  WKS   = weeks worked
  OCC   = 1 if blue-collar occupation
  IND   = 1 if manufacturing industry
  SOUTH = 1 if resides in the South
  SMSA  = 1 if resides in a city (SMSA)
  MS    = 1 if married
  FEM   = 1 if female
  UNION = 1 if wage set by union contract
  ED    = years of education
  BLK   = 1 if individual is black
  LWAGE = log of wage = dependent variable in regressions

These data were analyzed in Cornwell, C. and Rupert, P., "Efficient Estimation with Panel Data: An Empirical Comparison of Instrumental Variable Estimators," Journal of Applied Econometrics, 3, 1988, pp. 149-155.

Model Building in Econometrics
Parameterizing the model:
  Nonparametric analysis
  Semiparametric analysis
  Parametric analysis
Sharpness of inferences follows from the strength of the assumptions.

A Model Relating (Log)Wage to Gender and Experience

Application: Is there a relationship between Log(wage) and Education?
  Semiparametric regression: least absolute deviations regression of y on x
  Nonparametric regression: kernel regression of y on x
  Parametric regression: least squares (maximum likelihood) regression of y on x

A First Look at the Data
Descriptive Statistics
  Basic measures of location and dispersion
  Graphical devices:
    Box plots
    Histogram
    Kernel density estimator

Box Plots
From Jones and Schurer (2011)

Histogram for LWAGE

Kernel Density Estimator
The kernel density estimator is a histogram (of sorts):

  f^(x_m*) = (1/n) Σ_{i=1}^n (1/B) K[(x_i − x_m*)/B],  for a set of points x_m*

  B  = "bandwidth," chosen by the analyst
  K  = the kernel function, such as the normal or logistic pdf (or one of several others)
  x* = the point at which the density is approximated

This is essentially a histogram with small bins.

Kernel Density Estimator: The Curse of Dimensionality
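As a quick computational sketch of the estimator just defined (a normal kernel and simulated data, assumed for illustration; this is not the Cornwell-Rupert file):

```python
import numpy as np

def kernel_density(x, grid, bandwidth):
    """f^(x*) = (1/n) sum_i (1/B) K[(x_i - x*)/B], with a normal-pdf kernel K."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    est = np.empty(len(grid))
    for m, xstar in enumerate(grid):
        z = (x - xstar) / bandwidth                      # standardized distances
        k = np.exp(-0.5 * z**2) / np.sqrt(2.0 * np.pi)   # normal pdf kernel
        est[m] = k.sum() / (n * bandwidth)
    return est

# Illustration with a simulated "log wage" variable
rng = np.random.default_rng(0)
lwage = rng.normal(6.5, 0.4, size=500)
grid = np.linspace(5.0, 8.0, 50)
B = 1.06 * lwage.std() * len(lwage) ** -0.2   # Silverman-style bandwidth
f_hat = kernel_density(lwage, grid, B)
```

The estimate is a weighted average of kernel weights around each grid point; shrinking B toward zero reproduces ever-finer histogram bins.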
  f^(x_m*) = (1/n) Σ_{i=1}^n (1/B) K[(x_i − x_m*)/B],  for a set of points x_m*
  B = "bandwidth"; K = the kernel function; x* = the point at which the density is approximated.

f^(x*) is an estimator of f(x*):

  (1/n) Σ_{i=1}^n Q(x_i | x*) → Q(x*).

But Var[Q^(x*)] is not (1/N)·something; rather, Var[Q^(x*)] = (1/N^{3/5})·something. I.e., f^(x*) does not converge to f(x*) at the same rate as a sample mean converges to a population mean.

Kernel Estimator for LWAGE
From Jones and Schurer (2011)

Objective: Impact of Education on (log) Wage
Specification: What is the right model to use to analyze this association?
  Estimation
  Inference
  Analysis

Simple Linear Regression
  LWAGE = 5.8388 + 0.0652*ED
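The fitted equation lends itself to direct computation. A small sketch: predictions from the simple regression above, plus the partial-effect formula for a quadratic-in-experience specification (the quadratic coefficients below are illustrative placeholders, not estimates):

```python
# Fitted simple regression from the slides: LWAGE = 5.8388 + 0.0652*ED
def predict_lwage(ed):
    return 5.8388 + 0.0652 * ed

# Each additional year of education raises log wage by 0.0652,
# i.e. roughly a 6.5% higher wage.
gain_4_years = predict_lwage(16) - predict_lwage(12)

# When experience enters as b1*EXP + b2*EXP**2, the partial effect of
# experience is the derivative b1 + 2*b2*EXP, not a single coefficient.
b1, b2 = 0.040, -0.00068   # placeholder values for illustration only

def partial_effect_exp(exp):
    return b1 + 2.0 * b2 * exp
```

With b2 < 0 the partial effect declines with experience, the usual concave experience profile.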
Multiple Regression
Specification: Quadratic Effect of Experience
Partial Effects
Model Implication: Effect of Experience and Male vs. Female

Hypothesis Test About Coefficients
Hypothesis
  Null:        restriction on β: Rβ − q = 0
  Alternative: not the null
Approaches
  Fitting criterion: does R² decrease under the null?
  Wald: is Rb − q close to 0 under the alternative?

Hypotheses
  All coefficients = 0?   R = [0 | I],  q = [0]
  ED coefficient = 0?     R = [0,1,0,0,0,0,0,0,0,0,0,0],  q = 0
  No experience effect?   R = [0,0,1,0,0,0,0,0,0,0,0,0;
                               0,0,0,1,0,0,0,0,0,0,0,0],  q = [0; 0]

Hypothesis Test Statistics
Subscript 0 = the model under the null hypothesis
Subscript 1 = the model under the alternative hypothesis

1. Based on the fitting criterion, R²:

   F = [(R₁² − R₀²)/J] / [(1 − R₁²)/(N − K₁)] ~ F[J, N − K₁]

2. Based on the Wald distance (note: for linear models, W = JF):

   Chi-squared = (Rb − q)′ [R s² (X′X)⁻¹ R′]⁻¹ (Rb − q)

Hypothesis: All Coefficients Equal Zero
  All coefficients = 0?  R = [0 | I],  q = [0]
  R₁² = .42645
  R₀² = .00000
  F = 280.7 with [11, 4153]
  Wald = b₂₋₁₂′[V₂₋₁₂]⁻¹b₂₋₁₂ = 3087.83355
  Note that Wald = JF = 11(280.7).

Hypothesis: Education Effect = 0
  ED coefficient = 0?  R = [0,1,0,0,0,0,0,0,0,0,0,0],  q = 0
  R₁² = .42645,  R₀² = .36355 (not shown)
  F = 455.396
  Wald = (.05544 − 0)²/(.0026)² = 455.396
  Note: F = t² and Wald = F for a single hypothesis about one coefficient.

Hypothesis: Experience Effect = 0
  No experience effect?  R = [0,0,1,0,0,0,0,0,0,0,0,0;
                              0,0,0,1,0,0,0,0,0,0,0,0],  q = [0; 0]
  R₀² = .34101,  R₁² = .42645
  F = 309.33
  Wald = 618.601  (critical value W* = 5.99)
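Both statistics are mechanical to reproduce from the reported fits. A sketch using the figures from the all-coefficients test above:

```python
def f_stat(r2_1, r2_0, J, N, K1):
    """F = [(R1^2 - R0^2)/J] / [(1 - R1^2)/(N - K1)]."""
    return ((r2_1 - r2_0) / J) / ((1.0 - r2_1) / (N - K1))

# All 11 slopes = 0 in the Cornwell-Rupert wage regression:
# R1^2 = .42645, R0^2 = 0, N = 595*7 = 4165 observations, K1 = 12 coefficients
F = f_stat(0.42645, 0.00000, J=11, N=4165, K1=12)   # about 280.7
W = 11 * F   # for a linear model the Wald statistic is J*F, about 3088
```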
Built-In Test

Robust Covariance Matrix: The White Estimator

  Est.Var[b] = (X′X)⁻¹ [Σᵢ eᵢ² xᵢxᵢ′] (X′X)⁻¹

What does robustness mean?
  Robust to:      heteroscedasticity
  Not robust to:  autocorrelation, individual heterogeneity, the wrong model specification
"Robust inference"

Robust Covariance Matrix
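The White sandwich formula above can be computed directly; a minimal sketch with simulated heteroscedastic data:

```python
import numpy as np

def white_cov(X, e):
    """Est.Var[b] = (X'X)^-1 [sum_i e_i^2 x_i x_i'] (X'X)^-1."""
    XtX_inv = np.linalg.inv(X.T @ X)
    meat = X.T @ (e[:, None] ** 2 * X)   # sum_i e_i^2 x_i x_i'
    return XtX_inv @ meat @ XtX_inv

rng = np.random.default_rng(1)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
# disturbance variance depends on the regressor: heteroscedastic
y = X @ np.array([1.0, 0.5]) + rng.normal(size=n) * (1 + np.abs(X[:, 1]))
b = np.linalg.lstsq(X, y, rcond=None)[0]
V = white_cov(X, y - X @ b)
```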
Uncorrected

Bootstrapping and Quantile Regression

Estimating the Asymptotic Variance of an Estimator
  Known form of asymptotic variance: compute from known results.
  Unknown form, known generalities about properties: use bootstrapping.
    Root-N consistency
    Sampling conditions amenable to central limit theorems
    Compute by a resampling mechanism within the sample.

Bootstrapping
Method:
  1. Estimate parameters using the full sample: b.
  2. Repeat R times: draw n observations from the n, with replacement; estimate with b(r).
  3. Estimate the variance with

     V = (1/R) Σᵣ [b(r) − b][b(r) − b]′
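The three steps can be coded in a few lines; a pairs-bootstrap sketch for OLS, with simulated data:

```python
import numpy as np

def pairs_bootstrap(X, y, R=100, seed=0):
    """Steps 1-3 above: full-sample b, R resampled b(r),
    then V = (1/R) sum_r [b(r)-b][b(r)-b]'."""
    rng = np.random.default_rng(seed)
    n = len(y)
    b = np.linalg.lstsq(X, y, rcond=None)[0]          # step 1
    reps = []
    for _ in range(R):                                # step 2
        idx = rng.integers(0, n, size=n)              # n rows, with replacement
        reps.append(np.linalg.lstsq(X[idx], y[idx], rcond=None)[0])
    d = np.array(reps) - b                            # step 3
    return b, d.T @ d / R

rng = np.random.default_rng(2)
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([2.0, -1.0]) + rng.normal(size=n)
b, V = pairs_bootstrap(X, y, R=100)
```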
(Some use the mean of the replications instead of b. Advocated (without motivation) by the original designers of the method.)

Application: Correlation between Age and Education

Bootstrap Regression Replications

  namelist;x=one,y,pg$              Define X
  regress;lhs=g;rhs=x$              Compute and display b
  proc                              Define procedure
  regress;quietly;lhs=g;rhs=x$      Regression (silent)
  endproc                           End procedure
  execute;n=20;bootstrap=b$         20 bootstrap reps
  matrix;list;bootstrp $            Display replications

Results of Bootstrap Procedure
--------+------------------------------------------------------------
Variable| Coefficient  Standard Error  t-ratio  P[|T|>t]  Mean of X
--------+------------------------------------------------------------
Constant|  -79.7535***      8.67255    -9.196    .0000
       Y|    .03692***       .00132    28.022    .0000    9232.86
      PG|  -15.1224***      1.88034    -8.042    .0000    2.31661
--------+------------------------------------------------------------
Completed 20 bootstrap iterations.
----------------------------------------------------------------------
Results of bootstrap estimation of model. Model has been reestimated 20 times. Means shown below are the means of the bootstrap estimates. Coefficients shown below are the original estimates based on the full sample. Bootstrap samples have 36 observations.
--------+------------------------------------------------------------
Variable| Coefficient  Standard Error  b/St.Er.  P[|Z|>z]  Mean of X
--------+------------------------------------------------------------
    B001|  -79.7535***      8.35512    -9.545    .0000   -79.5329
    B002|    .03692***       .00133    27.773    .0000     .03682
    B003|  -15.1224***      2.03503    -7.431    .0000   -14.7654
--------+------------------------------------------------------------

Bootstrap Replications
  [Figure: full sample result vs. bootstrapped sample results]

Quantile Regression
  Q(y|x,α) = β_α′x,  α = quantile
  Estimated by linear programming
  Q(y|x,.50) = β_.50′x : median regression
  Median regression is estimated by LAD (which estimates the same parameters as mean regression if the conditional distribution is symmetric)

Why use quantile (median) regression?
  Semiparametric
  Robust to some extensions (heteroscedasticity?)
  Complete characterization of the conditional distribution

Estimated Variance for Quantile Regression
  Asymptotic theory
  Bootstrap: an ideal application

Asymptotic Theory Based Estimator of the Variance of Q-REG
Model:      y_i = β′x_i + u_i,  Q[y_i | x_i, α] = β_α′x_i,  Q[u_i | x_i, α] = 0
Residuals:  u^_i = y_i − b_α′x_i

Asymptotic variance:  (1/N) A⁻¹ C A⁻¹

  A = E[f_u(0) xx′],  estimated by  (1/N) Σ_{i=1}^N (1/B) 1[|u^_i| ≤ B/2] xᵢxᵢ′

  The bandwidth B can be Silverman's rule of thumb:
    B = (1.06 / N^.2) × min[ s_u, (Q(u^|.75) − Q(u^|.25)) / 1.349 ]

  C = α(1 − α) E[xx′],  estimated by  α(1 − α) (X′X)/N

For α = .5 and normally distributed u, this all simplifies to (π/2) s_u² (X′X)⁻¹. But this is an ideal application for bootstrapping.

[Figure: estimated quantile regression fits for α = .25, .50, .75]

OLS vs. Least Absolute Deviations
----------------------------------------------------------------------
Least absolute deviations estimator:
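As noted, the median (LAD) estimator is computed by linear programming: minimize Σ(u⁺ + u⁻) subject to Xb + u⁺ − u⁻ = y with u⁺, u⁻ ≥ 0. A minimal sketch using scipy's linprog on simulated data (not the data behind the regression output):

```python
import numpy as np
from scipy.optimize import linprog

def lad_regression(X, y):
    """min sum(u_plus + u_minus)  s.t.  X b + u_plus - u_minus = y."""
    n, k = X.shape
    c = np.concatenate([np.zeros(k), np.ones(2 * n)])    # sum of |residuals|
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
    bounds = [(None, None)] * k + [(0, None)] * (2 * n)  # b free, u's >= 0
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[:k]

rng = np.random.default_rng(3)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.standard_t(df=3, size=n)  # heavy tails
b_lad = lad_regression(X, y)
```

Other quantiles replace the equal objective weights on u⁺ and u⁻ with α and (1 − α).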
Residuals:  Sum of squares = 1537.58603
            Standard error of e = 6.82594
Fit:        R-squared = .98284
            Adjusted R-squared = .98180
Sum of absolute deviations = 189.3973484
--------+------------------------------------------------------------
Variable| Coefficient  Standard Error  b/St.Er.  P[|Z|>z]  Mean of X
--------+------------------------------------------------------------
        |Covariance matrix based on 50 replications.
Constant|  -84.0258***     16.08614    -5.223    .0000
       Y|    .03784***       .00271    13.952    .0000    9232.86
      PG|  -17.0990***      4.37160    -3.911    .0001    2.31661
--------+------------------------------------------------------------
Ordinary least squares regression ............
Residuals:  Sum of squares = 1472.79834
            Standard error of e = 6.68059
Fit:        R-squared = .98356
            Adjusted R-squared = .98256
(Standard errors are based on 50 bootstrap replications.)
--------+------------------------------------------------------------
Variable| Coefficient  Standard Error  t-ratio  P[|T|>t]  Mean of X
--------+------------------------------------------------------------
Constant|  -79.7535***      8.67255    -9.196    .0000
       Y|    .03692***       .00132    28.022    .0000    9232.86
      PG|  -15.1224***      1.88034    -8.042    .0000    2.31661
--------+------------------------------------------------------------

Benefits of Panel Data
  Time and individual variation in behavior unobservable in cross sections or aggregate time series
  Observable and unobservable individual heterogeneity
  Rich hierarchical structures
  More complicated models
  Features that cannot be modeled with cross-section or aggregate time-series data alone
  Dynamics in economic behavior

Application: Health Care Usage
German Health Care Usage Data, 7,293 Individuals, Varying Numbers of Periods
This is an unbalanced panel with 7,293 individuals and 27,326 observations in all. The number of observations per individual ranges from 1 to 7; the frequencies are 1=1525, 2=2158, 3=825, 4=926, 5=1051, 6=1000, 7=987. Downloaded from the JAE Archive.
Variables in the file include:
  DOCTOR   = 1(Number of doctor visits > 0)
  HOSPITAL = 1(Number of hospital visits > 0)
  HSAT     = health satisfaction, coded 0 (low) to 10 (high)
  DOCVIS   = number of doctor visits in the last three months
  HOSPVIS  = number of hospital visits in the last calendar year
  PUBLIC   = insured in public health insurance = 1; otherwise = 0
  ADDON    = insured by add-on insurance = 1; otherwise = 0
  INCOME   = household nominal monthly net income in German marks / 10000 (4 observations with income = 0 will sometimes be dropped)
  HHKIDS   = children under age 16 in the household = 1; otherwise = 0
  EDUC     = years of schooling
  AGE      = age in years
  MARRIED  = marital status

Balanced and Unbalanced Panels
Distinction: balanced vs. unbalanced panels
A notation to help with the mechanics:
  z_{i,t},  i = 1,…,N;  t = 1,…,T_i
The role of the assumption: mathematical and notational convenience.
  Balanced:    n = NT
  Unbalanced:  n = Σ_{i=1}^N T_i
Is the fixed T_i assumption ever necessary? Almost never.
Is unbalancedness due to nonrandom attrition from an otherwise balanced panel?

An Unbalanced Panel: RWM's GSOEP Data on Health Care

Nonlinear Models
Specifying the model:
  How do the covariates relate to the outcome of interest?
  What are the implications of the estimated model?

Multinomial Choice
Unordered Choices of 210 Travelers
Data on Discrete Choices

Specifying the Probabilities
  Choice-specific attributes (X) vary by choice; multiply by generic coefficients (e.g., TTME = terminal time, GC = generalized cost of travel mode).
  Generic characteristics (income, constants) must be interacted with choice-specific constants.
  Estimation by maximum likelihood; d_ij = 1 if person i chooses j.

  P[choice = j | x_{itj}, z_{it}, i, t] = Prob[U_{i,t,j} ≥ U_{i,t,k}],  k = 1,…,J(i,t)
                                        = exp(α_j + β′x_{itj} + γ_j′z_{it}) / Σ_{j=1}^{J(i,t)} exp(α_j + β′x_{itj} + γ_j′z_{it})

  logL = Σ_{i=1}^N Σ_j d_{ij} log P_{ij}

Estimated MNL Model

  P[choice = j | x_{itj}, z_{it}, i, t] = Prob[U_{i,t,j} ≥ U_{i,t,k}],  k = 1,…,J(i,t)
                                        = exp(α_j + β′x_{itj} + γ_j′z_{it}) / Σ_{j=1}^{J(i,t)} exp(α_j + β′x_{itj} + γ_j′z_{it})
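A sketch of the probability and log-likelihood computations above (conditional logit with generic coefficients on choice-varying attributes only; the small data arrays are made up for illustration):

```python
import numpy as np

def mnl_probs(beta, X):
    """P_j = exp(beta'x_j) / sum_k exp(beta'x_k) for one chooser's J alternatives."""
    v = X @ beta
    v = v - v.max()            # stabilize the exponentials
    e = np.exp(v)
    return e / e.sum()

def mnl_loglike(beta, X_list, choices):
    """logL = sum_i sum_j d_ij log P_ij, with d_ij = 1 for the chosen alternative."""
    return sum(np.log(mnl_probs(beta, X)[j]) for X, j in zip(X_list, choices))

# Two choosers, three alternatives, two attributes (e.g. TTME, GC) -- made-up numbers
X_list = [np.array([[10.0, 2.0], [5.0, 4.0], [8.0, 3.0]]),
          np.array([[6.0, 1.0], [9.0, 5.0], [7.0, 2.0]])]
choices = [1, 0]
beta = np.array([-0.1, -0.3])
ll = mnl_loglike(beta, X_list, choices)
```

Maximum likelihood estimation would maximize `mnl_loglike` over beta (and any alternative-specific constants) with a numerical optimizer.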