A partnership between UCLA School of Law and UCLA Samueli School of Engineering, the Institute for Technology, Law & Policy examines the benefits and risks presented by technologies such as artificial intelligence and machine learning, robotics, cybersecurity and digital media and communications.
These and other rapidly evolving technologies raise ethical and public policy questions, as well as questions about the applicability and utility of the current laws and regulations that govern their use.
Past events (see below for event descriptions and videos)
- September 18, 2020, 2-3 PM: Is Big Tech too Big?
- October 16, 2020, 2-3 PM: Addressing the Challenges of Content Moderation
ITLP produces podcasts featuring a series of conversations with thought leaders on important topics at the intersection of technology, law, and policy. Watch or listen to the podcasts.
Publications
Mark Verstraete, "Inseparable Uses," forthcoming in North Carolina Law Review (2021)
Mark Verstraete & Tal Zarsky, "Optimizing Breach Notification," forthcoming in University of Illinois Law Review (2021)
Virginia Foggo and John Villasenor, "Algorithms, Housing Discrimination, and the New Disparate Impact Rule," forthcoming in Columbia Science and Technology Law Review
Virginia Foggo, John Villasenor and Pratyush Garg, "Algorithms and Fairness," forthcoming in Ohio State Technology Law Journal
John Villasenor and Virginia Foggo, "Artificial Intelligence, Due Process, and Criminal Sentencing," 2020 Michigan State Law Review 295 (2020).
John Villasenor, "Soft Law as a Complement to Regulation," The Brookings Institution, July 31, 2020
Rebecca Wexler and John Villasenor, "How well-intentioned privacy laws can contribute to wrongful convictions," The Brookings Institution, February 11, 2020
John Villasenor, "Artificial Intelligence, Geopolitics, and Information Integrity," The Brookings Institution and ISPI, January 2020
John Villasenor, "Products liability law as a way to address AI harms," The Brookings Institution, October 2019
Short articles, op-eds, and blogs
John Villasenor, "Why creating an internet 'fairness doctrine' would backfire," The Brookings Institution, June 24, 2020
John Villasenor, "Why Colleges Should Pool Teaching Resources," The Chronicle of Higher Education, June 4, 2020
John Villasenor, "Online college classes are here to stay. What does that mean for higher education?," The Brookings Institution, June 1, 2020
John Villasenor and Virginia Foggo, "Why a proposed HUD rule could worsen algorithm-driven housing discrimination," The Brookings Institution, April 16, 2020
John Villasenor, "Six Steps to Prepare for an Online Fall Semester," The Chronicle of Higher Education, April 8, 2020
John Villasenor, "Why I Won't Let My Classes Be Recorded," The Chronicle of Higher Education, January 10, 2020
John Villasenor, "Preparing Today's Students for an AI Future," The Chronicle of Higher Education, October 13, 2019
John Villasenor, "Deepfakes, social media, and the 2020 election," The Brookings Institution, June 3, 2019
John Villasenor and Virginia Foggo, "Algorithms and sentencing: What does due process require?," The Brookings Institution, March 21, 2019
John Villasenor, "Artificial intelligence, deepfakes, and the uncertain future of truth," The Brookings Institution, February 14, 2019
John Villasenor, "Artificial intelligence and bias: Four key challenges," The Brookings Institution, January 3, 2019
Algorithmic Criminal Justice?
A Symposium Hosted by the UCLA School of Law, January 24, 2020
About the Symposium
Algorithms are playing a growing role in both policing and criminal justice. In theory, algorithms can provide information that can help promote analytical rigor, objectivity and consistency. But they can also reflect and amplify biases inadvertently introduced by their human creators and biases present in data.
This event convened a diverse set of national thought leaders to engage with a set of critically important questions on the proper role of algorithms in policing and in the criminal justice system. Topics addressed included: 1) approaches to identifying and mitigating algorithmic bias, 2) the unique challenges and opportunities associated with the subset of algorithms that use AI, 3) ways to spur technological innovation so that the positive potential of algorithmic approaches in policing and criminal justice can be realized while also protecting against the downsides, 4) the relative roles of the public and private sectors in developing, deploying, and ensuring the quality of new algorithmic solutions, and 5) approaches that can help ensure that algorithmic tools enhance, rather than undermine, civil liberties.
Program and Videos
Welcoming remarks and introductions - Video
Panel 1: Creating Algorithms for Justice - Video
- Alex Alben (moderator) – UCLA
- Colleen Chien – Santa Clara University
- Eric Goldman – Santa Clara University
- Rebecca Wexler – UC Berkeley
Panel 2: Algorithmic Policing - Video
- Jeff Brantingham – UCLA
- Beth Colgan (moderator) – UCLA
- Catherine Crump – UC Berkeley
- Andrew Ferguson – American University
- Orin Kerr – UC Berkeley
Panel 3: Algorithmic Adjudication - Video
- Chris Goodman – Pepperdine University
- Sandy Mayson – University of Georgia
- Richard Re (moderator) – UCLA
- Andrew Selbst – UCLA
- Chris Slobogin – Vanderbilt University
Panel 4: Regulation and Oversight - Video
- Jane Bambauer – University of Arizona
- Gary Marchant – Arizona State University
- Ken Meyer – Los Angeles District Attorney's Office
- Mohammad Tajsar – ACLU of Southern California
- John Villasenor (moderator) – UCLA
Keynote: Commissioner Rebecca Kelly Slaughter – Federal Trade Commission - Video
In addition to the individual videos listed below, you can also view the ITLP YouTube playlist.
Is Big Tech Too Big?
September 18, 2020
Transcript - Please note that the accuracy of the transcript is not guaranteed.
Companies such as Facebook, Amazon, and Google have been extraordinarily successful in building a large base of users and in acquiring market share. But are they too big?
To explore this question, the UCLA Institute for Technology, Law, and Policy (ITLP) is hosting an online panel discussion with Ashkhen Kazaryan of TechFreedom and Alex Petros of Public Knowledge, moderated by ITLP director John Villasenor. The event will explore issues such as the proper role of government in relation to large technology companies and the extent to which existing regulatory frameworks are—or are not—sufficient in light of the current dynamics of the technology sector.
The event will last one hour, and will include approximately 40 minutes of moderated discussion followed by 20 minutes of audience Q&A.
Ashkhen Kazaryan is the Director of Civil Liberties at TechFreedom. She manages and develops projects on free speech, content moderation, surveillance reform, and the intersection of constitutional rights and technology. Ashkhen is regularly featured as an expert commentator in news outlets across television, radio, podcasts, and print and digital publications, including CNBC, BBC, Fox DC, Politico, Axios and others. She is a board member of the Fourth Amendment Advisory Committee and an expert at the Federalist Society's Emerging Technology Working Group. Ashkhen received her Specialist in Law degree summa cum laude from Lomonosov Moscow State University, her Master of Law degree from Yale Law School, and is completing her PhD in Law at the Law School of Lomonosov Moscow State University.
Alex Petros currently works as a Policy Counsel at Public Knowledge, where he focuses on antitrust and broader platform accountability issues. Prior to Public Knowledge, he worked for Senators Amy Klobuchar, Richard Blumenthal, Joe Donnelly, and the House Committee on Oversight and Reform. He received his J.D., cum laude, from Georgetown University Law Center and his B.A. in Economics and Political Science, with distinction, from Yale College.
Addressing the Challenges of Content Moderation
October 16, 2020
Transcript - Please note that the accuracy of the transcript is not guaranteed.
Under the simplest framing, the content moderation challenges facing companies such as Facebook, Twitter, and YouTube boil down to drawing a line between acceptable and unacceptable content. But that framing masks a more complex set of questions, including: 1) what the goals of content moderation should be, 2) how, and by whom, content moderation decisions should be made, and 3) how companies that operate globally should navigate the varying cultural and legal frameworks that define the limits of acceptable online content in different jurisdictions.
To explore these issues, the UCLA Institute for Technology, Law, and Policy (ITLP) is hosting an online panel discussion with Kate Klonick of St. John's University and John Samples of the Cato Institute and Facebook's content moderation Oversight Board, moderated by ITLP director John Villasenor.
Kate Klonick is a professor at the St. John's University School of Law, where her research centers on law and technology, using cognitive and social psychology as a framework. Most recently, she has been studying and writing about private Internet platforms and how they govern online speech. Professor Klonick has published in the Harvard Law Review, the Georgetown Law Journal, the Southern California Law Review, and the Yale Law Journal, as well as in the New York Times, the New Yorker, The Atlantic, the Guardian, Lawfare, Slate, Vox and numerous other publications. She is the author of a forthcoming New Yorker article on content moderation. Klonick holds an A.B. from Brown University, a J.D. from Georgetown, and a Ph.D. in Law from Yale Law School.
John Samples is a vice president at the Cato Institute. He founded and directs Cato's Center for Representative Government, which studies the First Amendment, government institutional failure, and public opinion. Dr. Samples also serves on the Facebook Oversight Board, which hears appeals from content moderation decisions by Facebook and Instagram. Prior to joining Cato, Samples served eight years as director of Georgetown University Press. He received his PhD in political science from Rutgers University. Samples' views expressed during this event are his own and do not represent those of the Oversight Board or of Facebook.