<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0">
  <channel>
    <title>EconStor Collection:</title>
    <link>https://hdl.handle.net/10419/96527</link>
    <description />
    <pubDate>Thu, 30 Apr 2026 04:39:35 GMT</pubDate>
    <dc:date>2026-04-30T04:39:35Z</dc:date>
    <item>
      <title>Is your machine better than you? You may never know</title>
      <link>https://hdl.handle.net/10419/267687</link>
      <description>Title: Is your machine better than you? You may never know
Authors: de Véricourt, Francis; Gurkan, Huseyin
Abstract: Artificial intelligence systems are increasingly demonstrating their capacity to make better predictions than human experts. Yet, recent studies suggest that professionals sometimes doubt the quality of these systems and overrule machine-based prescriptions. This paper explores the extent to which a decision maker (DM) supervising a machine to make high-stakes decisions can properly assess whether the machine produces better recommendations. To that end, we study a set-up in which a machine performs repeated decision tasks (e.g., whether to perform a biopsy) under the DM's supervision. Because stakes are high, the DM primarily focuses on making the best choice for the task at hand. Nonetheless, as the DM observes the correctness of the machine's prescriptions across tasks, she updates her belief about the machine. However, the DM is subject to a so-called verification bias: she verifies the machine's correctness, and updates her belief accordingly, only if she ultimately decides to act on the task. In this set-up, we characterize the evolution of the DM's belief and overruling decisions over time. We identify situations in which the DM hesitates forever over whether the machine is better, i.e., she never fully ignores the machine but regularly overrules it. Moreover, with positive probability, the DM sometimes comes to wrongly believe that the machine is better. We fully characterize the conditions under which these learning failures occur and explore how mistrusting the machine affects them. These findings provide a novel explanation for human-machine complementarity and suggest guidelines on the decision to fully adopt or reject a machine.</description>
      <pubDate>Sat, 01 Jan 2022 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://hdl.handle.net/10419/267687</guid>
      <dc:date>2022-01-01T00:00:00Z</dc:date>
    </item>
    <item>
      <title>Mismanaging diagnostic accuracy under congestion</title>
      <link>https://hdl.handle.net/10419/251899</link>
      <description>Title: Mismanaging diagnostic accuracy under congestion
Authors: Kremer, Mirko; de Véricourt, Francis
Abstract: To study the effect of congestion on the fundamental trade-off between diagnostic accuracy and speed, we empirically test the predictions of a formal sequential testing model in a setting where gathering additional information can improve diagnostic accuracy, but may also take time and thereby increase congestion. The efficient management of such systems requires a careful balance of congestion-sensitive stopping rules. These include diagnoses based on very little or no diagnostic information, and the stopping of diagnostic processes while waiting for information. We test these rules under controlled laboratory conditions, and link the observed biases to system dynamics and performance. Our data show that decision makers (DMs) stop diagnostic processes too quickly at low congestion levels, where information acquisition is relatively cheap. But they fail to stop quickly enough when increasing congestion requires them to diagnose without testing, or to diagnose while waiting for test results. Essentially, DMs are insufficiently sensitive to congestion. As a result of these behavioral patterns, DMs manage the system with both lower-than-optimal diagnostic accuracy and higher-than-optimal congestion cost, underperforming on both sides of the accuracy/speed trade-off.</description>
      <pubDate>Sat, 01 Jan 2022 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://hdl.handle.net/10419/251899</guid>
      <dc:date>2022-01-01T00:00:00Z</dc:date>
    </item>
    <item>
      <title>Decertification in quality-management standards by incrementally and radically innovative organizations</title>
      <link>https://hdl.handle.net/10419/261292</link>
      <description>Title: Decertification in quality-management standards by incrementally and radically innovative organizations
Authors: Clougherty, Joseph A.; Grajek, Michał
Abstract: The literature on quality-management standards has generally focused on the drivers, motivations, and performance effects of adopting such standards. Yet the last decade has witnessed a substantial degree of decertification behavior, as organizations have increasingly decided to voluntarily withdraw from quality-management standards by not recertifying. While the drivers of the decision to initially adopt quality-management standards have been extensively studied, the drivers of the decision to decertify have received scant scholarly attention. We argue that innovative organizations are generally prone to retaining quality-management certification and thus exhibit a tendency not to abandon certification; however, radically-innovative organizations are more prone than incrementally-innovative organizations to discontinue quality-management standards and thereby exhibit a tendency to withdraw from quality certification. We compile World Bank data surveying facilities based in 50 countries and 103 industrial sectors across the 2003 to 2017 period. Taking advantage of the data's panel properties yields a dataset composed of up to 1,755 facility-level observations of recertification decisions for empirical analysis. Our empirical testing employs a probit estimation technique that accounts for the appropriate fixed effects and generates results that support our theoretical priors regarding decertification behavior.</description>
      <pubDate>Sat, 01 Jan 2022 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://hdl.handle.net/10419/261292</guid>
      <dc:date>2022-01-01T00:00:00Z</dc:date>
    </item>
    <item>
      <title>Human and machine: The impact of machine input on decision-making under cognitive limitations</title>
      <link>https://hdl.handle.net/10419/266637</link>
      <description>Title: Human and machine: The impact of machine input on decision-making under cognitive limitations
Authors: Boyacı, Tamer; Canyakmaz, Caner; de Véricourt, Francis
Abstract: The rapid adoption of AI technologies by many organizations has recently raised concerns that AI may eventually replace humans in certain tasks. In fact, when used in collaboration, machines can significantly enhance the complementary strengths of humans. Indeed, because of their immense computing power, machines can perform specific tasks with incredible accuracy. In contrast, human decision-makers (DMs) are flexible and adaptive but constrained by their limited cognitive capacity. This paper investigates how machine-based predictions may affect the decision process and outcomes of a human DM. We study the impact of these predictions on decision accuracy, the propensity and nature of decision errors, as well as the DM's cognitive effort. To account for both flexibility and limited cognitive capacity, we model the human decision-making process in a rational inattention framework. In this setup, the machine provides the DM with accurate but sometimes incomplete information at no cognitive cost. We fully characterize the impact of machine input on the human decision process in this framework. We show that machine input always improves the overall accuracy of human decisions, but may nonetheless increase the propensity of certain types of errors (such as false positives). The machine can also induce the human to exert more cognitive effort, even though its input is highly accurate. Interestingly, this happens when the DM is most cognitively constrained, for instance, because of time pressure or multitasking. Synthesizing these results, we pinpoint the decision environments in which human-machine collaboration is likely to be most beneficial. Our main insights hold for different information and reward structures, and when the DM mistrusts the machine.</description>
      <pubDate>Sat, 01 Jan 2022 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://hdl.handle.net/10419/266637</guid>
      <dc:date>2022-01-01T00:00:00Z</dc:date>
    </item>
  </channel>
</rss>