2024/18 LEM Working Paper Series

Generalized Optimization Algorithms for Complex Objective Functions

Mario Martinoli, Raffaello Seri and Fulvio Corsi
Keywords
Optimization, approximation algorithms, stochastic approximation, methods of quasi-Newton type


JEL Classifications
C61; C15; C44
Abstract
The paper pursues two goals. First, by linking the statistics and machine learning literatures, we provide new general results on the convergence of stochastic approximation schemes and inexact Newton methods. Second, building on these results, we put forward a new optimization scheme that we call the local approximation inexact Newton method (LAINM). At each iteration of the optimization algorithm, the method proceeds through the following steps: the objective function is computed or approximated in a neighborhood of the iterate; the values of the function, its gradient, and its Hessian are approximated through a polynomial regression; these quantities are then used in a Newton-like iteration. We extensively discuss the theoretical and computational aspects of LAINM. The results apply to both deterministic and stochastic approximation schemes, and are particularly effective when the objective function to be optimized is highly irregular and/or the stochastic equicontinuity hypothesis, which is generally assumed in simulation-based estimation, is violated. Examples are common in dynamic discrete choice models (DDCMs) and in complex simulation models characterized by nonlinearities and high levels of heterogeneity. The theory is supported by extensive Monte Carlo experiments and an application to a binary response model with serially correlated errors.
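The iteration described in the abstract (sample the objective near the iterate, fit a polynomial surrogate, take a Newton-like step) can be illustrated with a minimal sketch. This is not the authors' implementation: the function name `lainm_step`, the sampling rule, and the damping parameter are all assumptions for illustration; the sketch fits a full quadratic by least squares and uses its gradient and Hessian in a damped Newton update.

```python
import numpy as np

def lainm_step(f, x, radius=0.5, n_samples=200, damping=1.0, rng=None):
    """One illustrative Newton-like iteration using a local quadratic surrogate.

    NOTE: hypothetical sketch, not the paper's actual algorithm. The objective f
    is evaluated at random points near x; a second-order polynomial is fitted by
    least squares, and its gradient and Hessian at x drive a damped Newton step.
    """
    rng = np.random.default_rng(rng)
    d = x.size
    # Sample evaluation points in a neighborhood of the current iterate
    z = radius * rng.uniform(-1.0, 1.0, size=(n_samples, d))
    vals = np.array([f(x + zi) for zi in z])
    # Design matrix for a full quadratic in the centered coordinates
    cols = [np.ones(n_samples)] + [z[:, i] for i in range(d)]
    for i in range(d):
        for j in range(i, d):
            cols.append(z[:, i] * z[:, j])
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, vals, rcond=None)
    # Recover the gradient and (symmetric) Hessian from the fitted coefficients
    grad = beta[1:1 + d]
    H = np.zeros((d, d))
    k = 1 + d
    for i in range(d):
        for j in range(i, d):
            c = beta[k]
            k += 1
            if i == j:
                H[i, i] = 2.0 * c
            else:
                H[i, j] = H[j, i] = c
    # Damped Newton step; fall back to a gradient step if H is singular
    try:
        step = np.linalg.solve(H, grad)
    except np.linalg.LinAlgError:
        step = grad
    return x - damping * step
```

Because the surrogate is fitted to many nearby evaluations rather than finite differences at a point, noise in a stochastic objective is averaged out by the regression, which is the intuition behind using local approximation in irregular or simulation-based settings.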