An eye for AI: In the age of artificial intelligence, who will win the race between governance and innovation?
By Deepika Chhillar
Artificial intelligence is driving significant structural and institutional change: it challenges existing laws, regulations, and industry practices (e.g., standard setting), and gives rise to new organizational forms (e.g., platforms), new forms of labor (e.g., algorithmic auditors), and new leadership roles (e.g., shifting authority regimes and new professional positions). In our article, we discuss how governance can harness AI's power to help solve societal problems without amplifying existing inequalities or creating new ones. Based on a review of contemporary research, we address three fundamental governance questions: What are the sources of AI's bias and opacity? Why do we need AI to be fair and transparent? And, most importantly, what can organizations and policy makers do to mitigate AI's pitfalls as they design and approve AI-based products and services?
AI applications are leaving research labs and entering our workplaces and personal lives at an accelerating rate. Over the past decade, AI has delivered phenomenal successes, convincing many that certain tasks, such as credit card fraud detection, weather forecasting, or playing Go (as DeepMind's AlphaGo demonstrated), are better done by machines than by humans. However, other AI-based applications, from crashes of self-driving cars to racist chatbots, have acted in unintended ways, engendering distrust of AI. Even though we are still in an age of Weak AI, where we retain a relatively high degree of control over AI's limited functionality, we have not yet formed effective governance and regulatory practices for its fair, equitable, and accountable deployment (Hosanagar, 2020; Kellogg & Valentine, 2020). To build trustworthy AI, we must invest human resources in its governance.
We extend Lessig’s (1998) New Chicago School theory to the governance of AI. We propose that AI governance occurs at the intersection of four modalities: the law, the market, social norms, and the architecture of AI models. Each of these modalities regulates the AI ecosystem by imposing constraints on data collection, algorithmic manipulation, or application use. Considerable alignment exists across these governance modes: between the regulations that policy makers develop and legislatively approve, the market rules that determine the costs of non-compliance, the societal norms that legitimize appropriate ways of conducting business, and, ultimately, the technological and infrastructural rules that govern artificial intelligence. Thus, although each modality constrains AI on its own, the four govern far more effectively in combination.
Overall, the governance mindset that worked for earlier technologies is unlikely to suffice for AI and the rapid technological advancement it brings. In our article, we discuss the challenges posed by AI-based technologies, highlight the need to govern unbridled AI adoption, and illustrate ways to do so. We believe that research can help our society navigate its journey of technological innovation and adoption.
Article Details
An Eye for Artificial Intelligence: Insights Into the Governance of Artificial Intelligence and Vision for Future Research
Deepika Chhillar and Ruth V. Aguilera
First published online April 29, 2022
DOI: 10.1177/00076503221080959
Business & Society