Here are links, citations, and other information for my publications and recent working papers.

Recent Publications

The Decline of Coordinated Effects Enforcement and How to Reverse It (with D. Sokol), 76 Florida Law Review (forthcoming 2024)
    Published At
    Forthcoming
    In Repositories
    SSRN
    ResearchGate
    Abstract

    Opposition to anticompetitive coordination once animated merger policy. But after consecutive decades of decline, evidence now suggests that coordinated effects cases are disfavored among enforcers and are rarely pursued. This change in merger enforcement is dangerous and puzzling. Coordinated effects challenges are antitrust law’s best and often only opportunity to prevent anticompetitive coordination in concentrated markets. Why are coordinated effects theories not being vigorously pursued?

    In this Article, we seek to expose the decline in coordinated effects enforcement and the threat it poses to the maintenance of competitive markets. We do so in three steps. First, we explain the special significance of coordinated effects enforcement in the broader antitrust framework. Second, we document the empirical decline in coordinated effects enforcement using multiple data sources. Third, we trace the causes of this decline to discrete changes in antitrust law and enforcement policy; we expose the flaws in these changes, and we propose specific steps to reverse them.

Experimental Analysis of Antitrust Issues, in Elgar Encyclopedia of Behavioural and Experimental Economics (Swee-Hoon Chuah, Robert Hoffmann, & Ananta Neelim, eds., forthcoming)
    Published At
    Forthcoming
    In Repositories
    Forthcoming
    Abstract
    Controlled experiments provide unique perspectives on antitrust issues pertaining to behavior that is otherwise difficult to observe. One example is collusion, since the illegality of this conduct gives participants strong incentives to conceal it. Another example is predatory pricing, which may deter potential competitors without producing easily observable instances of exclusionary pricing, especially in the typical case of a multiproduct firm with evolving and uncertain cost conditions. This entry summarizes some of the insights that antitrust experiments have provided as well as the difficulties that have been encountered.
Against Efforts to Simplify Antitrust, 49 Journal of Corporation Law 419 (2024)
    Published At
    JCL
    Westlaw
    Lexis
    In Repositories
    SSRN
    ResearchGate
    Abstract
    Antitrust analysis is famously complex, fact intensive, and time consuming. But should we aspire for it to be otherwise? I offer two cautionary conjectures in opposition to the search for simpler rules. First, I conjecture that efforts to convert vague antitrust standards into clear rules will rarely succeed without abandoning the underlying standards that the rules were meant to simplify. Second, I conjecture that failed efforts at simplifying antitrust law will often have the opposite effect—increasing the apparent complexity and vagueness of this law. If these conjectures are correct, then the search for simpler rules could be not just unproductive but counterproductive in antitrust law.
    Published At
    CPI
    In Repositories
    SSRN
    ResearchGate
    Abstract
    A debate is brewing between antitrust critics who claim that merger enforcement has been weak and fading since the 1980s and establishment defenders who respond that merger enforcement has stood firm and even toughened since the Chicago revolution. Could the truth be somewhere in between? Available data reject the broad assertion that overall merger enforcement has declined in recent decades, but support the narrower assertion that coordinated effects enforcement has declined. We consider what this half-truth of the lax enforcement narrative might mean for antitrust reform opportunities.
Market Definition, in 1 Research Handbook on Abuse of Dominance and Monopolization (Pinar Akman, Konstantinos Stylianou, & Or Brook eds., Edward Elgar)
    Published At
    Edward Elgar
    In Repositories
    SSRN
    ResearchGate
    Abstract
    Monopolization, in the United States, and abuse of dominance, in the European Union, embody different philosophies about how best to police single firm conduct in competition law. Surprisingly, their disagreement ends at market definition. Both doctrines define relevant markets by similar processes and use relevant markets for similar purposes. In some contexts, this type of agreement would be a welcome sight. Here, it reflects a pocket of confusion in each area of law. This chapter describes the confusion of current market definition practices and takes some initial steps toward a more coherent approach.
Permutation Tests for Experimental Data (with C. Holt), 26 Experimental Economics 775 (2023)
    Published At
    SpringerLink
    In Repositories
    SSRN
    ResearchGate
    Kudos
    Abstract
    This article surveys the use of nonparametric permutation tests for analyzing experimental data. The permutation approach, which involves randomizing or permuting features of the observed data, is a flexible way to draw statistical inferences in common experimental settings. It is particularly valuable when few independent observations are available, a frequent occurrence in controlled experiments in economics and other social sciences. The permutation method constitutes a comprehensive approach to statistical inference. In two-treatment testing, permutation concepts underlie popular rank-based tests, like the Wilcoxon and Mann-Whitney tests. But permutation reasoning is not limited to ordinal contexts. Analogous tests can be constructed from the permutation of measured observations—as opposed to rank-transformed observations—and we argue that these tests should often be preferred. Permutation tests can also be used with multiple treatments, with ordered hypothesized effects, and with complex data structures, as in hypothesis testing in the presence of nuisance variables. Drawing examples from the experimental economics literature, we illustrate how permutation testing solves common challenges. Our aim is to help experimenters move beyond the handful of overused tests in play today and to instead see permutation testing as a flexible framework for statistical inference.
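    The two-treatment case described in the abstract can be sketched in a few lines. This is a generic illustration of a permutation test on the difference in means, not code from the article: pool the observations, repeatedly reshuffle them into groups of the original sizes, and count how often the shuffled difference is at least as extreme as the observed one.

```python
import random

def permutation_test(x, y, n_resamples=10_000, seed=0):
    """Two-sided permutation test for a difference in means.

    Pools the two samples, repeatedly shuffles the pooled values into
    groups of the original sizes, and returns the fraction of shuffles
    whose absolute mean difference is at least the observed one.
    """
    rng = random.Random(seed)
    observed = abs(sum(x) / len(x) - sum(y) / len(y))
    pooled = list(x) + list(y)
    n = len(x)
    hits = 0
    for _ in range(n_resamples):
        rng.shuffle(pooled)
        new_x, new_y = pooled[:n], pooled[n:]
        diff = abs(sum(new_x) / n - sum(new_y) / len(new_y))
        if diff >= observed:
            hits += 1
    return hits / n_resamples
```

    For two well-separated samples of three observations each, only 2 of the 20 equal-sized splits of the pooled data reproduce the observed separation, so the resampled p-value converges to 0.1; identical samples yield a p-value of 1.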
Antitrust Time Travel: Entry & Potential Competition (with H. Su), 85 Antitrust Law Journal 147 (2023)
    Published At
    Antitrust L.J.
    Westlaw
    Hein
    In Repositories
    SSRN
    ResearchGate
    Abstract
    Entry defenses and potential competition doctrine have much in common. Both draw from predictions about future entry. Both demand difficult assessments of entry barriers and incentives. And both suffer from confused thinking today. This Article offers a clarifying perspective. Rather than focus on matters of litigation posture (who wins or loses if an argument is proved), we look to the type of analytical time travel being performed. Corrective entry defenses and actual potential competition theories involve exercises in forward time travel: reasoning about how future entry will impact future competition. Preventative entry defenses and perceived potential competition theories involve exercises in backward time travel: reasoning about how threats of future entry impact current competition. Grouping theories in this way reveals analytical flaws and unprincipled asymmetries in current thinking. It also exposes problems and paradoxes that beset all time travel arguments in antitrust analysis.
Seven Myths of Market Definition, Antitrust Chronicle (Apr 2022)
    Published At
    CPI
    In Repositories
    SSRN
    ResearchGate
    Abstract
    Roughly a year into control of the federal antitrust agencies, President Biden’s antitrust team is turning its attention to policies and enforcement practices. They seem poised to start, as antitrust so often does, with market definition. This is an appropriate target for review but also perilous territory for the administration. Even slight missteps in market definition could spell disaster for broader enforcement objectives. To help policy work start from a solid foundation, this essay identifies seven common myths of market definition and explains how to avoid them.
Modular Market Definition, 55 U.C. Davis Law Review 1091 (2021)
    Published At
    U.C. Davis L. Rev.
    Westlaw
    Lexis
    Hein
    In Repositories
    SSRN
    ResearchGate
    Abstract

    Surging interest in antitrust enforcement is exposing, once again, the difficulty of defining relevant markets. Past decades have witnessed the invention of many tests for defining markets, but little progress has been made, or even attempted, at reconciling these different tests. Modern market definition has thus become a confused agglomeration of often conflicting ideas about what relevant markets are and how they should be defined and used. The result—unpredictable and unreliable market boundaries—is an unsure footing for the complicated cases and policy questions now before us.

    This Article responds to the problem of confused market definition with a simple but powerful approach to dealing with multiple tests for defining markets. The basic insight is that different tests scope markets appropriate for serving different needs. Helpful market definition can thus proceed in two steps. First, identify the substantive purposes for which markets are being defined in a particular application. Second, select the test that defines markets most suited to serving those purposes.

    This modular approach to market definition offers several advantages over the current conflation of different tests. First, the modular approach promises greater predictability and reliability in market definition practice. Second, it provides a more legally honest and economically coherent explanation of how the various tests for defining markets fit together. Third, it contributes to ongoing policy discussions, clarifying how relevant markets work in antitrust law and how they can be leveraged to empower more efficient and effective enforcement practices.

The Logic of Market Definition (with D. Glasner), 83 Antitrust Law Journal 293 (2020)
    Published At
    Antitrust L.J.
    Westlaw
    Lexis
    Hein
    In Repositories
    SSRN
    ResearchGate
    Abstract
    Despite all the commentary that the topic has attracted in recent years, confusion still surrounds the proper definition of relevant markets in antitrust law. This article addresses that confusion and attempts to explain the underlying logic of market definition. It does so by way of exclusion: identifying and explaining three common errors in the way that courts and advocates approach the exercise. The first error is what we call the natural market fallacy. This is the mistake of assuming that relevant markets are identifiable constructs and features of competition, rather than the purely conceptual analytic devices that they actually are. The second error is what we call the independent market fallacy. This is the failure to appreciate that relevant markets do not exist independent of any theory of harm but must always be customized to reflect the details of a specific theory of harm. The third error is what we call the single market fallacy. This is the tendency to seek some single, best relevant market, when in reality there will typically be many relevant markets that could be helpfully and appropriately drawn to aid in the analysis of a given case or investigation. In the course of identifying and debunking these fallacies, the article clarifies the appropriate framework for understanding and conducting market definition.
Anticompetitive Entrenchment, 68 University of Kansas Law Review 1133 (2020) (symposium)
    Published At
    Kansas Law Review
    Westlaw
    Lexis
    Hein
    In Repositories
    SSRN
    ResearchGate
    Abstract
    Mounting public concern with the exercise of market power in concentrated markets demands a response. While modern antitrust emphasizes the prevention of market power over reaction to its exercise, it does contain one indirect but potentially important tool for addressing problems with already existing concentration and market power: the often-overlooked theory of resistance to anticompetitive entrenchment in merger enforcement. This article explores how traditional concerns with the entrenchment of market power might be updated and reintroduced to serve as a vehicle for addressing problematic markets in the modern antitrust framework. The article explains this theory of anticompetitive entrenchment, its limits, and appropriate conditions for its use, in the context of two specific applications: (1) tacit collusion among oligopolists, and (2) the exploitation of market power by a dominant firm in a protected position.
Lumps in Antitrust Law, University of Chicago Law Review Online, March 2020 (symposium)
    Published At
    Chicago Law Review Online
    Westlaw
    Lexis
    Hein
    In Repositories
    SSRN
    ResearchGate
    Abstract
    This paper uses the framework of aggregation and separation that Lee Fennell develops in Slices and Lumps to discuss two fundamental questions of antitrust policy. First, how far does the lumpiness of trading partners dictate the limits of antitrust policy? Second, what does antitrust miss under the common practice of lumping price, consumer welfare, and allocative efficiency together? Discussion of these questions is clarified and sharpened by the vocabulary of Fennell’s framework.
Antitrust Amorphisms, Antitrust Chronicle (Nov 2019)
    Published At
    CPI (text version)
    CPI (audio version)
    In Repositories
    SSRN
    ResearchGate
    Abstract
    Advocates of traditional antitrust are increasingly called upon to defend the existing framework. In doing so they face a challenge: the traditional framework is actually quite difficult to explain. The problem is not that modern antitrust involves a lot of advanced economics—though that is also true. The problem is that foundational antitrust concepts like "harm to competition" and the protection of "consumer welfare" are shockingly ill-defined. This essay highlights several of the dormant ambiguities in these concepts, and thus the obstacles that antitrust has set for itself by failing ever to fully define its terms.
Insincere Evidence (with M. Gilbert), 105 Virginia Law Review 1115 ()
    Published At
    Virginia Law Review
    Westlaw
    Lexis
    Hein
    In Repositories
    SSRN
    ResearchGate
    Abstract

    Proving a violation of law is costly. Because of the cost, minor violations of law often go unproven and thus unpunished. To illustrate, almost everyone drives a little faster than the speed limit without getting a ticket. The failure to enforce the law against minor infractions is justifiable from a cost-benefit perspective. The cost of proving a minor violation—for example, a driver broke the speed limit by one mile per hour—outstrips the benefit. But it has the downside of underdeterrence. People drive a little too fast, pollute a little too much, and so on.

    This paper explores how insincere rules, meaning rules that misstate lawmakers’ preferences, might reduce proof costs and improve enforcement. To demonstrate the argument, suppose lawmakers want drivers to travel no more than 55 mph. A sincere speed limit of 55 mph may cause drivers to go 65 mph, while an insincere speed limit of 45 mph may cause drivers to drop down to, say, 60 mph—closer to lawmakers’ ideal. Insincere rules work by creating insincere evidence. In the driving example, the insincere rule is akin to adding 10 mph to the reading on every radar gun.

    We distinguish insincere rules from familiar concepts like over-inclusive rules, prophylactic rules, and proxy crimes. We connect insincere rules to burdens of persuasion, showing how they offset each other. Finally, we consider the normative implications of insincere rules for trials, truth, and law enforcement. The logic of insincerity is not confined to speed limits. The conditions necessary for insincerity to work pervade the legal system.

Challenges for Comparative Fact-Finding, 23 International Journal of Evidence & Proof 100 () (symposium)
    Published At
    Sage Journals
    In Repositories
    SSRN
    Abstract

    A paradigm shift is underway in scholarship on legal fact-finding. Recent work points clearly and consistently in the direction that persuasion is the product of purely comparative assessments of factual propositions. This paper comments on the philosophical roots of this shift to a comparative paradigm. It also highlights two serious challenges for the comparative approach: (1) articulation of a coherent test of the beyond-a-reasonable-doubt standard, and (2) definition of what it means for a fact-finder to weigh an unspecific or disjunctive factual claim.

A Likelihood Story: The Theory of Legal Fact-Finding, 90 University of Colorado Law Review 1 ()
    Published At
    Colorado Law Review
    Westlaw
    Lexis
    Hein
    In Repositories
    SSRN
    ResearchGate
    Abstract

    Are racial stereotypes a proper basis for legal fact-finding? What about gender stereotypes, sincerely believed by the fact-finder and informed by the fact-finder’s life experience? What about population averages: if people of a certain gender, education level, and past criminal history exhibit a statistically greater incidence of violent behavior than the population overall, is this evidence that a given person within this class did act violently on a particular occasion?

    The intuitive answer is that none of these feel like proper bases on which fact-finders should be deciding cases. But why not? Nothing in traditional probability or belief-based theories of fact-finding justifies excluding any of these inferences. Maybe intuition goes astray here. Or maybe something about the traditional theory of fact-finding is wrong. Arguing the latter, this article proposes a new theory of fact-finding. In contrast to historic probability and belief-based theories, this paper suggests that idealized fact-finding is an application of likelihood reasoning—the statistical analog of the ancient legal concept of the “weight of evidence” and the formal analog of modern descriptions of legal fact-finding as a process of comparing the relative plausibility of competing factual stories on the evidence.

    This likelihood theory marks a fundamental change in our understanding of fact-finding, with equally fundamental implications for practice and procedure. The theory simplifies fact-finding, describing every burden of persuasion as an application of the same reasoning principle. It harmonizes recent scholarship on fact-finding, showing that work on the cognitive processes of fact-finders can be formalized in a comprehensive and coherent theory of the ideal fact-finding process. It explains evidentiary mores, justifying hostility to naked statistical evidence, for example. And it provides new insights into the effects of subjective beliefs on fact-finding, showing not only the harm that results from asking fact-finders to decide cases based on their personal beliefs about the facts, but also the way forward in reorienting fact-finding away from prejudice, bias, and subjective beliefs, and toward the firmer ground of the evidence itself.

    Published At
    Virginia Law Review
    Westlaw
    Lexis
    Hein
    In Repositories
    SSRN
    ResearchGate
    Abstract

    Of all constitutional puzzles, the nondelegation principle is one of the most perplexing. How can a constitutional limitation on Congress’s ability to delegate legislative power be reconciled with the huge body of regulatory law that now governs so much of society? Why has the Court remained faithful to its intelligible principle test, validating expansive delegations of lawmaking authority, despite decades of biting criticism from so many camps? This Article suggests that answers to these questions may be hidden in a surprisingly underexplored aspect of the principle. While many papers have considered the constitutional implications of what it means for Congress to delegate legislative power, few have pushed hard on the second part of the concept: what it means for an agency to have legislative power.

    Using game theory concepts to give meaning to the exercise of legislative power by an agency, this Article argues that nondelegation analysis is actually more complicated than it appears. As a point of basic construction, a delegation only conveys legislative power if it (1) delegates lawmaking authority that is sufficiently legislative in nature, and (2) gives an agency sufficient power over the exercise of that authority. But, again using game theory, this Article shows that an agency’s power to legislate is less certain than it first appears, making satisfaction of this second element a fact question in every case.

    This more complicated understanding of the nondelegation principle offers three contributions of practical significance. First, it reconciles faithful adherence to existing theories of nondelegation with the possibility of expansive delegations of lawmaking authority. Second, it suggests a sliding-scale interpretation of the Court’s intelligible principle test that helps explain how nondelegation case law may actually respect the objectives of existing theories of nondelegation. Third, it identifies novel factors that should (and perhaps already do) influence judicial analysis of nondelegation challenges.

Experimental Economics and the Law (with C. Holt), in 1 Oxford Handbook of Law and Economics 78 (Francesco Parisi ed., Oxford University Press)
    Published At
    Oxford
    In Repositories
    SSRN
    Abstract
    This chapter surveys the past and future role of experimental economics in legal research and practice. Following a brief explanation of the theory and methodology of experimental economics, the chapter discusses topics in each of three broad application areas: (1) the use of experiments for studying legal institutions such as settlement bargaining and adjudicative functions, (2) the use of experiments to explore legal doctrines, and (3) the use of experiments in litigation and trial strategy. The general theme of this material is a broad and versatile relationship between law and experimental economics.
    Published At
    U. Chicago Press
    Westlaw
    Lexis
    Hein
    In Repositories
    SSRN
    Kudos
    Abstract
    The U.S. legal system encourages civil litigants to quickly settle their disputes, yet lengthy and expensive delays often precede private settlements. The causes of these delays are uncertain. This paper describes an economic experiment designed to test one popular hypothesis: that asymmetric information might be a contributing cause of observed settlement delays. Experimental results provide strong evidence that asymmetric information can delay settlements, increasing average time-to-settlement by as much as 90% in some treatments. This causal relationship is robustly observed across different bargaining environments. On the other hand, results do not obviously confirm all aspects of the game-theoretic explanation for this relationship. And they suggest that asymmetric information may be only one of several contributing causes of settlement delay.
    Published At
    JCL
    Westlaw
    Lexis
    Hein
    In Repositories
    SSRN
    ResearchGate
    Abstract
    The “structural presumption” is a proposition in antitrust law standing for the typical illegality of mergers that would combine rival firms with large shares of the same market. Courts and commentators are rarely precise in their use of the word “presumption,” and there is foundational confusion about what kind of presumption this proposition actually entails. It could either be a substantive factual inference based on economic theory, or a procedural device for artificially shifting the burden of production at trial. This paper argues that the substantive inference interpretation is the better reading of caselaw and the sounder application of the laws of antitrust and evidence. By instead interpreting the structural presumption as a formal rebuttable presumption, modern merger analysis needlessly complicates the use of market concentration evidence, and may be systematically undervaluing the probative weight of this evidence. At least in this context, a formal presumption likely confers less evidentiary weight than a simple substantive inference.
    Published At
    Oxford
    Lexis
    Hein
    In Repositories
    SSRN
    ResearchGate
    Abstract
    The doctrine of chances remains a divisive rule in the law of evidence. Proponents of the doctrine argue that evidence of multiple unlikely events of a similar nature supports an objective, statistical inference of lack of accident or random chance on a particular occasion. Opponents argue that admissibility is improper because the underlying inference ultimately requires a forbidden form of character or propensity reasoning. Using formal probability modeling and simple numerical examples, this article shows that neither side is correct. Contrary to the claims of its proponents, the doctrine of chances provides no novel or independent theory of relevance. But contrary to the claims of its opponents, the doctrine of chances does not require character or propensity reasoning. An intuitive way to understand these properties is to interpret the doctrine-of-chances inference as a weak form of any inference that could be permissibly drawn if extrinsic events were simply bad acts for which culpability or intent were certain.
                                  details
                                  Published At
                                  Animal Law Review
                                  Westlaw
                                  Lexis
                                  Hein
                                  In Repositories
                                  SSRN
                                  ResearchGate
                                  Abstract
In many western countries, rising public concern for the welfare of agricultural animals is reflected in the adoption of direct regulatory standards. The United States has taken a different path, preferring a "market regulation" approach whereby consumers express their preference for agricultural animal welfare through their consumption habits, incentivizing desired welfare practices with dollar bills and obviating the need for direct government regulation. There is, however, little evidence that consumers in the United States actually demand heightened animal welfare practices at market. This article explores the failure of market regulation and the welfare preference paradox posed by consumers who express a strong preference for improved animal welfare in theory, but who do not actually demand heightened animal welfare in practice. I argue that the failure of market regulation is due to the inability of current voluntary and nonstandard animal welfare labeling practices to clearly and credibly disclose to consumers the actual treatment of agricultural animals. As a corollary, effective market regulation of agricultural animal welfare could be achieved simply by improving animal welfare labeling practices.
                                  details
                                  Water Externalities: Tragedy of the Common Canal (with C. Holt, C. Johnson, and C. Mallow), 78 Southern Economic Journal 1142 ()
                                    Published At
                                    Wiley
                                    EBSCO
                                    JSTOR
                                    In Repositories
                                    Google Scholar
                                    Abstract
                                    Laboratory experiments are used to investigate alternative solutions to the allocation problem of a common-pool resource with unidirectional flow. The focus is on the comparative economic efficiency of nonbinding communications, bilateral “Coasian” bargaining, allocation by auction, and allocation by exogenous usage fee. All solutions improve allocative efficiency, but communication and bilateral bargaining are not generally as effective as market allocations. An exogenously imposed optimal fee results in the greatest allocative efficiency, closely followed by an auction allocation that determines the usage fee endogenously.
                                    details
An Experimental Study of Settlement Delay in Pretrial Bargaining with Asymmetric Information, Dissertation: University of Virginia (advisors C. Holt and J. Pepper), UMI Pub. No. 3501713 ()
                                      Published Version
                                      ProQuest
                                      Abstract

In the United States legal system, tort disputes often exhibit protracted delay between injury and settlement. That is, parties to a dispute tend to agree on settlement conditions only after engaging in lengthy legal sparring and negotiation. Resources committed to settlement negotiation are large and economically inefficient. Even small reductions in average settlement delay stand to effect large reductions in socially inefficient spending.

                                      This research contributes to the understanding of settlement delay by carefully exploring one popularly advanced hypothesis for the phenomenon: the idea that asymmetric information over the value of a potential trial verdict might help to drive persistent settlement delay. A large-scale laboratory experiment is conducted with payment-incentivized undergraduate and law school subjects. The experiment closely implements a popular model of settlement delay in which litigants attempt to negotiate settlement under asymmetric information about the value of a potential trial verdict. The experiment is designed to address two broad research questions: (i) can asymmetric information over a potential trial verdict plausibly contribute to the protracted settlement delay observed in the field, and (ii) can specific policies be identified which might mitigate the settlement delay associated with asymmetric information?

In response to the first broad research question, experimental results strongly confirm the plausibility of asymmetric information contributing to settlement delay. Starting from a baseline of symmetric information, settlement delay in the laboratory is increased by as much as 95% when subjects are exposed to a controlled information asymmetry over the value of the potential trial verdict. This observation is found to be strongly robust to perturbations in the underlying bargaining environment.

In response to the second broad research question, experimental results do not strongly confirm the capacity of reasonable policy changes to effect large reductions in settlement delay. Collected data fail to indicate that any explored reform policy obviously reduces average settlement delay, though estimators are sufficiently imprecise that substantial effects on average delay cannot be ruled out. Settlement delay in the laboratory is responsive to changes in bargaining costs, but does not obviously respond to changes in the distribution of damages available at trial.

                                      details
                                      Measurement Error in Criminal Justice Data (with J. Pepper and C. Petrie), in Handbook of Quantitative Criminology 353 (A. Piquero and D. Weisburd eds., Springer )
                                        Published At
                                        Springer
                                        ProQuest
                                        In Repositories
                                        Google Scholar
                                        Abstract
                                        While accurate data are critical in understanding crime and assessing criminal justice policy, data on crime and illicit activities are invariably measured with error. In this chapter, we illustrate and evaluate several examples of measurement error in criminal justice data. Errors are evidently pervasive, systematic, frequently related to behaviors and policies of interest, and unlikely to conform to convenient textbook assumptions. Using both convolution and mixing models of the measurement error generating process, we demonstrate the effects of data error on identification and statistical inference. Even small amounts of data error can have considerable consequences. Throughout this chapter, we emphasize the value of auxiliary data and reasonable assumptions in achieving informative inferences, but caution against reliance on strong and untenable assumptions about the error generating process.