Fear of Bot-Controlled Corporations Causes Professor’s Paper to Jump to #1

Trending suddenly among students of anti-terrorist tactics: concern that the laws governing corporations are enhancing one of the most serious threats to humanity.

In April 2017, UCLA Law Professor Lynn M. LoPucki posted a paper on SSRN, a leading Internet repository for academic research, arguing that corporations acting under existing laws in the United States and elsewhere can so thoroughly mask their controllers and purposes that regulators cannot possibly tell whether they are run by bots or harbor "malign intent." Last week, the 69-page treatise, titled "Algorithmic Entities," suddenly became the week's most-read paper on SSRN, which hosts nearly 800,000 papers by more than 350,000 authors in dozens of academic fields.

Many of the readers who caused the spike learned of the paper through SSRN's Combatting Terrorism eJournal.

"Counter-terrorism agencies have long been concerned about the role of shell corporations in financing terrorism," says LoPucki, the Security Pacific Bank Distinguished Professor of Law at UCLA. "Algorithmic entities just take it to a new level."

LoPucki's argument goes as follows: Governments compete to charter corporations, creating havens such as Delaware, where more than 1.2 million entities from around the world are incorporated. Because competition to be the chartering authority is so fierce, the market demands concessions, and as a result corporate laws in chartering jurisdictions are lax. In particular, most chartering jurisdictions – including Delaware – do not require companies to reveal the names of the people who own and control them. That leaves prosecutors pursuing suspicious activity with no way to know who, if anyone, is behind a company. Even the chartering government doesn't know.

The U.S. government has acknowledged the problem and agreed through the 37-nation Financial Action Task Force to remedy it. But the enabling legislation has languished in Congress for nearly a decade.

As LoPucki writes, an autonomous algorithm could exploit this system to create companies devoid of human involvement. These companies would be capable of transacting business and making political contributions, and would enjoy the legal rights to counsel and speech that protect all corporations in the United States. He writes that such companies also "could shut down human computing, steal and release confidential information, finance and direct terrorism, and wreak havoc by seizing control of the Internet of Things."

The threat is just now emerging, LoPucki writes: artificial intelligence is accelerating at a dizzying rate, and the law of corporations is not prepared to handle it.

"Algorithms' abilities will improve until they far exceed those of humans," he writes. "What remains to be determined is whether humans will be successful in imposing controls before the opportunity to do so has passed."

The article will be published in the Washington University Law Review next month.