Economists and AI: A Missing Piece in the Puzzle?

Despite the seemingly dramatic title, this post seeks to prompt a discussion about the apparent underrepresentation of economists in modern Artificial Intelligence (AI) research. While there are undoubtedly economists who are leaving their mark in the field, their visibility within the AI community appears to be dwindling. As I revisited some of the foundational literature on AI, the conspicuous absence of economists in recent AI discourses piqued my interest.

AI's genesis was fundamentally interdisciplinary, with early contributions from a multitude of fields, including mathematics, cognitive psychology, and economics. Yet, as the field matured, the multidisciplinary character of AI seems to have faded, with computer scientists now predominantly driving the discourse. This led me to ponder: where have all the economists in AI research vanished to, and what insights are we potentially missing without their unique perspective?

Key points:

  • The article underscores the crucial but often overlooked role of economists in AI research, highlighting how early pioneers used economic thinking to shape foundational theories of AI. However, as AI research has grown increasingly dominated by computer science, this multidisciplinary approach has been sidelined.

  • As AI systems advance, defining their objectives or preferences becomes a significant challenge. Economic theory, with its rich understanding of preferences, offers potential solutions. It can provide methods for preference representation, elicitation, and aggregation, which are critical for AI systems that need to cater to multiple stakeholders.

  • Beyond the technical aspect, determining the nature of AI's preferences raises profound ethical questions, such as whether AI's preferences should mirror human users' or represent a broader societal consensus. Economists' expertise in policy-making, institutional design, and assessing societal implications can provide invaluable guidance through these complex ethical and political challenges.

The Dawn of AI: The Pivotal Role of Economic Thought

AI's formative years can be traced back to the landmark Dartmouth conference in 1956, where a diverse group of scholars, including mathematicians, engineers, and cognitive psychologists, gathered to shape the future of what would eventually become AI. Two figures from this pioneering group stand out for their unique cross-disciplinary approach: Herbert Simon and Allen Newell. Their integration of economics and nascent computational theory laid a strong foundation for the AI we know today.

Simon, originally a political scientist, later emerged as a prominent economist, earning the Nobel Memorial Prize in Economic Sciences. His groundbreaking work on bounded rationality, which holds that human decision-making is constrained by the information available, cognitive limitations, and time, left an indelible mark on both economics and AI (Simon, 1955). His perspective on intelligent behavior as constrained optimization, in which utility functions may be unknown and the optimization process itself is cognitively taxing, has profoundly influenced AI design and remains relevant today.

Similarly, Newell made significant strides by incorporating economic ideas into system designs for complex problem-solving. Together with Simon, he developed innovative programs such as the Logic Theorist (Newell & Simon, 1956) and, later, the General Problem Solver, both aimed at emulating human problem-solving. They championed the notion that human cognition could be modeled as an information-processing system, a fundamental concept still at the heart of AI research.

These early pioneers wove economic reasoning into the fabric of AI, demonstrating the value of concepts like rationality, optimization, and decision-making under uncertainty in understanding intelligent systems. This historical intersection of economics and AI underscores the potential of interdisciplinary collaboration in advancing AI research, highlighting the need for more economist involvement in contemporary AI discourse.

The Current Landscape: A Focus Shift in AI Research

Today, AI is an integral part of our lives, having evolved from a daring concept to a pervasive reality. This rapid transformation has been largely driven by breakthroughs in computer science, with advanced machine learning algorithms and computational power at the forefront. As a result, the contemporary AI landscape has been chiefly shaped by computer scientists.

In particular, deep learning, a subset of machine learning, has garnered significant attention. This approach, inspired by the human brain's neural architecture, has proven remarkably successful in diverse tasks, from image recognition to natural language processing. Consequently, computer scientists, with their deep understanding of algorithms and computational systems, have become the dominant force in the AI field.

Yet, this prevailing focus on computer science in AI research risks narrowing the field's intellectual scope. While AI's origins are firmly rooted in a rich interdisciplinary mix, it now increasingly reflects the perspectives of a single academic discipline. Such a narrowing of perspectives not only risks undervaluing contributions from other fields but also limits the breadth of insights and innovations that a diverse academic community can provide.

Despite the undeniable progress computer science has brought to AI, it is crucial to appreciate the potential value of a more integrative and interdisciplinary approach. As AI's influence and implications continue to expand, welcoming and incorporating insights from a broader range of disciplines is not just desirable, but perhaps imperative for the balanced evolution of the field.

Objectives as Preferences: The Interplay of Economics and AI

The trajectory of AI research calls for an in-depth exploration of the philosophical makeup of AI agents, specifically their objectives or, metaphorically speaking, their 'preferences.' As AI systems grow in their ability to act independently and make complex decisions, the task of defining these preferences, which guide AI behavior and calibrate the trade-offs it makes, becomes increasingly relevant (Russell, 2019).

The field of economics, with its extensive exploration of the concept of preferences, can provide valuable insights. Preferences in economic theory are not a simple catalogue of likes and dislikes, but instead offer a comprehensive framework for evaluating and comparing different outcomes. This framework is built on formal mathematical tools, from utility functions to indifference curves.
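
To make this concrete, here is a minimal sketch in Python (the outcomes and utility values are invented purely for illustration) of how a preference ordering can be represented by a utility function: one outcome is preferred to another exactly when it is assigned higher utility, and equal utilities express indifference.

    # Toy preference representation: a utility function over hypothetical outcomes.
    utility = {
        "fast_and_cheap": 1.0,
        "fast_but_costly": 0.7,
        "slow_but_cheap": 0.7,   # equal utility: the agent is indifferent
        "slow_and_costly": 0.2,
    }

    def prefers(a, b):
        """True if outcome a is strictly preferred to outcome b."""
        return utility[a] > utility[b]

    def best(available):
        """The most preferred outcome among those currently available."""
        return max(available, key=utility.get)

    print(prefers("fast_and_cheap", "slow_but_cheap"))   # True
    print(prefers("fast_but_costly", "slow_but_cheap"))  # False: indifferent
    print(best(["slow_but_cheap", "slow_and_costly"]))   # slow_but_cheap

Indifference curves are simply the level sets of such a function: all outcomes assigned the same utility.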

The wealth of knowledge from economics presents a promising opportunity for AI. Viewing an AI's objectives as preferences opens the door to the economic literature on preference representation and elicitation. Economic principles can thus be applied to define an AI's preferences in a way that makes the trade-offs it faces explicit and comparable.
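
As a rough illustration of elicitation, the sketch below infers which weights of a simple linear utility function are consistent with a handful of observed pairwise choices; the attributes, options, and choices are all hypothetical.

    from itertools import product

    # Hypothetical options described by two attributes on a 0-1 scale.
    options = {
        "A": {"speed": 0.9, "cost": 0.8},
        "B": {"speed": 0.5, "cost": 0.2},
        "C": {"speed": 0.7, "cost": 0.5},
    }
    # Observed choices: in each pair, the first option was chosen over the second.
    observed = [("C", "A"), ("B", "A")]

    def utility(name, w_speed, w_cost):
        """Assumed linear utility: reward speed, penalize cost."""
        o = options[name]
        return w_speed * o["speed"] - w_cost * o["cost"]

    def consistent(w_speed, w_cost):
        """Do these weights reproduce every observed choice?"""
        return all(utility(a, w_speed, w_cost) > utility(b, w_speed, w_cost)
                   for a, b in observed)

    # Crude elicitation: keep the weight vectors that explain the observed choices.
    grid = [i / 10 for i in range(11)]
    feasible = [(ws, wc) for ws, wc in product(grid, grid) if consistent(ws, wc)]
    print(feasible[:5])  # weights where cost matters enough relative to speed

Real elicitation schemes use richer models and actively choose which questions to ask, but the basic idea of constraining an objective with observed choices carries over directly to AI systems.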

Moreover, economists have a firm understanding of preference aggregation, a crucial aspect when an AI needs to consider the interests of multiple stakeholders. This issue mirrors the challenges faced in social choice theory and welfare economics (Sen, 1970), and solutions developed in these fields could provide a valuable template for AI systems.
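
For instance, a Borda count, one of the classic aggregation rules studied in social choice theory, can be sketched in a few lines; the stakeholders and the options they rank below are invented for illustration.

    from collections import defaultdict

    # Hypothetical stakeholder rankings over candidate system behaviors,
    # listed from most to least preferred.
    rankings = {
        "users":     ["personalized", "balanced", "conservative"],
        "regulator": ["conservative", "balanced", "personalized"],
        "operator":  ["balanced", "personalized", "conservative"],
    }

    def borda(rankings):
        """Borda count: with n options, a stakeholder's k-th ranked option earns n - k points."""
        scores = defaultdict(int)
        for order in rankings.values():
            n = len(order)
            for position, option in enumerate(order):
                scores[option] += n - 1 - position
        return sorted(scores, key=scores.get, reverse=True)

    print(borda(rankings))  # ['balanced', 'personalized', 'conservative']

Social choice theory also makes the limitations of any such rule explicit, a reminder that the choice of aggregation method is itself a normative decision.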

The philosophical debate on whether AI agents should be given preferences can also draw upon economic insights. In economic theory, preferences are a tool for modeling the motivations and behaviors of individuals. For AI systems, having preferences or objectives is similarly essential: without goal-oriented behavior, an agent becomes aimless and inefficient (Sutton & Barto, 2018).
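
In the reinforcement-learning framing associated with Sutton & Barto, this goal-orientation is typically made concrete as choosing actions by their estimated value. The toy sketch below, with made-up action values, contrasts an agent that has an objective with one that can only choose arbitrarily.

    import random

    # Made-up estimates of how well each action serves the agent's objective.
    action_values = {"recommend_a": 0.3, "recommend_b": 0.8, "recommend_c": 0.5}

    def goal_directed(values):
        """With an objective, choice is principled: take the highest-valued action."""
        return max(values, key=values.get)

    def aimless(values):
        """Without an objective there is nothing to maximize; choice is arbitrary."""
        return random.choice(list(values))

    print(goal_directed(action_values))  # recommend_b
    print(aimless(action_values))        # any of the three, at random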

However, the central challenge is not simply about endowing AI with preferences, but determining the nature of these preferences. This question extends beyond technical concerns and delves into ethical territory. Should an AI's preferences reflect those of its human users, or should they represent a broader range of societal values? Questions of fairness, the concentration of power, and protection against manipulation all intersect at this point (Crawford & Calo, 2016).

In this context, economists can offer valuable contributions not only through their theoretical models but also through their practical experience in dealing with similar challenges in policy-making and institutional design. Their expertise in analyzing incentives, predicting strategic behavior, and assessing the societal implications of different regulations could be crucial in navigating these intricate ethical and political challenges.

The task of assigning preferences to AI systems presents a complex philosophical and economic challenge. Incorporating economic principles into AI research can offer significant progress in addressing these questions, fostering the development of AI systems that are technically advanced while also aligning with human values and societal objectives.