<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <title>EconStor Collection:</title>
  <link rel="alternate" href="https://hdl.handle.net/10419/64637" />
  <subtitle />
  <id>https://hdl.handle.net/10419/64637</id>
  <updated>2026-04-30T11:33:44Z</updated>
  <dc:date>2026-04-30T11:33:44Z</dc:date>
  <entry>
    <title>A Neyman-orthogonalization approach to the incidental parameter problem</title>
    <link rel="alternate" href="https://hdl.handle.net/10419/309994" />
    <author>
      <name>Bonhomme, Stéphane</name>
    </author>
    <author>
      <name>Jochmans, Koen</name>
    </author>
    <author>
      <name>Weidner, Martin</name>
    </author>
    <id>https://hdl.handle.net/10419/309994</id>
    <updated>2025-02-08T02:14:57Z</updated>
    <published>2025-01-01T00:00:00Z</published>
    <summary type="text">Title: A Neyman-orthogonalization approach to the incidental parameter problem
Authors: Bonhomme, Stéphane; Jochmans, Koen; Weidner, Martin
Abstract: A popular approach to performing inference on a target parameter in the presence of nuisance parameters is to construct estimating equations that are orthogonal to the nuisance parameters, in the sense that their expected first derivative is zero. Such first-order orthogonalization may, however, not suffice when the nuisance parameters are very imprecisely estimated. Leading examples where this is the case are models for panel and network data that feature fixed effects. In this paper, we show how, in the conditional-likelihood setting, estimating equations can be constructed that are orthogonal to any chosen order. Combining these equations with sample splitting yields higher-order bias-corrected estimators of target parameters. In an empirical application, we apply our method to a fixed-effect model of team production and obtain estimates of complementarity in production and of the impacts of counterfactual re-allocations.</summary>
    <dc:date>2025-01-01T00:00:00Z</dc:date>
  </entry>
  <entry>
    <title>Prediction sets and conformal inference with censored outcomes</title>
    <link rel="alternate" href="https://hdl.handle.net/10419/309993" />
    <author>
      <name>Liu, Weiguang</name>
    </author>
    <author>
      <name>de Paula, Áureo</name>
    </author>
    <author>
      <name>Tamer, Elie T.</name>
    </author>
    <id>https://hdl.handle.net/10419/309993</id>
    <updated>2025-02-08T02:06:54Z</updated>
    <published>2025-01-01T00:00:00Z</published>
    <summary type="text">Title: Prediction sets and conformal inference with censored outcomes
Authors: Liu, Weiguang; de Paula, Áureo; Tamer, Elie T.
Abstract: Given data on a scalar random variable 𝑌, a prediction set for 𝑌 with miscoverage level 𝛼 is a set of values for 𝑌 that contains a randomly drawn 𝑌 with probability 1 - 𝛼, where 𝛼 ∈ (0, 1). Among all prediction sets that satisfy this coverage property, the oracle prediction set is the one with the smallest volume. This paper provides estimation methods for such prediction sets given observed conditioning covariates when 𝑌 is censored or measured in intervals. We first characterise the oracle prediction set under interval censoring and develop a consistent estimator for the shortest prediction interval that satisfies this coverage property. We then extend these consistency results to accommodate cases where the prediction set consists of multiple disjoint intervals. Second, we use conformal inference to construct a prediction set that achieves a particular notion of finite-sample validity under censoring and maintains consistency as the sample size increases. This notion exploits exchangeability to obtain finite-sample guarantees on coverage using a specially constructed conformity score function. The procedure accommodates the prediction uncertainty that is irreducible (due to the stochastic nature of outcomes), the modelling uncertainty due to partial identification, and the sampling uncertainty that shrinks as the sample grows. We conduct a set of Monte Carlo simulations and an application to data from the Current Population Survey. The results highlight the robustness and efficiency of the proposed methods.</summary>
    <dc:date>2025-01-01T00:00:00Z</dc:date>
  </entry>
  <entry>
    <title>Identification of treatment effects under limited exogenous variation</title>
    <link rel="alternate" href="https://hdl.handle.net/10419/309995" />
    <author>
      <name>Newey, Whitney K.</name>
    </author>
    <author>
      <name>Stouli, Sami</name>
    </author>
    <id>https://hdl.handle.net/10419/309995</id>
    <updated>2025-02-08T02:23:28Z</updated>
    <published>2025-01-01T00:00:00Z</published>
    <summary type="text">Title: Identification of treatment effects under limited exogenous variation
Authors: Newey, Whitney K.; Stouli, Sami
Abstract: Multidimensional heterogeneity and endogeneity are important features of a wide class of econometric models. With control variables to correct for endogeneity, nonparametric identification of treatment effects requires strong support conditions. To alleviate this requirement, we consider varying-coefficients specifications for the conditional expectation function of the outcome given a treatment and control variables. This function is expressed as a linear combination of either known functions of the treatment, with unknown coefficients varying with the controls, or known functions of the controls, with unknown coefficients varying with the treatment. We use this modeling approach to give necessary and sufficient conditions for identification of average treatment effects. A sufficient condition for identification is conditional nonsingularity: that the second-moment matrix of the known functions, given the variable in the varying coefficients, is nonsingular with probability one. For known treatment functions with sufficient variation, we find that triangular models with a discrete instrument cannot identify average treatment effects when the number of support points for the instrument is less than the number of coefficients. For known functions of the controls, we find that average treatment effects can be identified in general nonseparable triangular models with binary or discrete instruments. We extend our analysis to flexible models of increasing dimension and relate conditional nonsingularity to the full support condition of Imbens and Newey (2009), thereby embedding semi- and non-parametric identification into a common framework.</summary>
    <dc:date>2025-01-01T00:00:00Z</dc:date>
  </entry>
  <entry>
    <title>Simple estimation of semiparametric models with measurement errors</title>
    <link rel="alternate" href="https://hdl.handle.net/10419/309989" />
    <author>
      <name>Evdokimov, Kirill S.</name>
    </author>
    <author>
      <name>Zeleneev, Andrei</name>
    </author>
    <id>https://hdl.handle.net/10419/309989</id>
    <updated>2025-02-08T02:08:11Z</updated>
    <published>2025-01-01T00:00:00Z</published>
    <summary type="text">Title: Simple estimation of semiparametric models with measurement errors
Authors: Evdokimov, Kirill S.; Zeleneev, Andrei
Abstract: We develop a practical way of addressing the Errors-In-Variables (EIV) problem in the Generalized Method of Moments (GMM) framework. We focus on settings in which the variability of the EIV is a fraction of that of the mismeasured variables, which is typical for empirical applications. For any initial set of moment conditions, our approach provides a "corrected" set of moment conditions that are robust to the EIV. We show that the GMM estimator based on these moments is √n-consistent, with the standard tests and confidence intervals providing valid inference. This is true even when the EIV are so large that naive estimators (that ignore the EIV problem) are heavily biased, with their confidence intervals having 0% coverage. Our approach involves no nonparametric estimation, which is especially important for applications with many covariates, and for settings with multivariate or non-classical EIV. In particular, the approach makes it easy to use instrumental variables to address EIV in nonlinear models.</summary>
    <dc:date>2025-01-01T00:00:00Z</dc:date>
  </entry>
</feed>

