May 19, 2019
Richard M. Re & Alicia Solow-Niederman
Stanford Technology Law Review, UCLA School of Law, Public Law Research Paper No. 19-16
Artificial intelligence, or AI, promises to assist, modify, and replace human decision-making, including in court. AI already supports many aspects of how judges decide cases, and the prospect of "robot judges" suddenly seems plausible—even imminent. This Article argues that AI adjudication will profoundly affect the adjudicatory values held by legal actors as well as the public at large. The impact is likely to be greatest in areas, including criminal justice and appellate decision-making, where "equitable justice," or discretionary moral judgment, is frequently considered paramount. By offering efficiency and at least an appearance of impartiality, AI adjudication will both foster and benefit from a turn toward "codified justice," an adjudicatory paradigm that favors standardization over discretion. Further, AI adjudication will generate a range of concerns relating to its tendency to make the legal system more incomprehensible, data-based, alienating, and disillusioning. And potential responses, such as crafting a division of labor between human and AI adjudicators, each pose their own challenges. The single most promising response is for the government to play a greater role in structuring the emerging market for AI justice, though promising reform proposals would draw on several interrelated approaches. Similar dynamics will likely extend to other aspects of government, such that choices about how to incorporate AI in the judiciary will inform the future path of AI development more broadly.
Access "Developing Artificially Intelligent Justice" at SSRN.