On May 4 and 5, 2018, UCLA School of Law’s Project on Artificial Intelligence, an initiative of the
Program on Understanding Law, Science, and Evidence (PULSE), held its first workshop, on “Artificial Intelligence in Strategic Context: Development Paths, Impacts, and Governance.” More than 20 experts participated, including scholars from law, technology, mathematics, and social science, as well as employees of major AI research firms.
Rapid, highly visible advances in artificial intelligence and machine learning (AI/ML) over the past few years have brought a surge of interest in the potentially transformative impacts of these technologies as they disrupt existing markets, behaviors and societal arrangements.
Like many major technological advances, AI/ML holds the prospect of large benefits, large risks, or complex mixtures of benefit and harm, depending on how new capabilities are developed, applied, and embedded in complex socioeconomic systems. AI/ML may be unique in its vast range of potential impacts, from current challenges of integrating present capabilities into existing political and economic systems to future prospects of developing general AI or even super-intelligent systems that may threaten human identity, autonomy, or survival. Rapid current progress suggests that societal challenges will increase in diversity and intensity, to the point that they may well be socially transformative even in the absence of general or super-intelligent AI.
Scholars have addressed AI’s near- and medium-term societal challenges with a growing body of fruitful work, mainly of two types: studies of specific identified challenges already evident from current and impending advances, such as bias, threats to privacy, vulnerability of critical infrastructure, and livelihood displacement; and investigations of technical elements or characteristics of AI/ML system design that might influence risks, such as explainability, resistance to hacking, or suitably defined reward functions.
This workshop offered an opportunity to explore an alternative approach, complementary to these two bodies of current work, focused on the strategic context of AI development: the actors who develop and apply AI capabilities, and their goals, incentives, capabilities, institutional settings, and interrelationships. These elements of strategic context are likely to influence the technical capabilities and applications that are developed, the societal impacts of these advances, and the possible form, feasibility, and limits of governance. This perspective may prove especially fruitful in examining impacts and governance issues that lie between already emergent concerns and potential future existential risks, by exploring how present choices and decision contexts might shape future AI/ML development pathways. This line of inquiry can inform studies of both nearer- and longer-term impacts of AI/ML, and draw attention to questions that have previously received relatively little attention.