Exploring the Role of AI in Government


ALBANY, N.Y. (Nov. 16, 2021) — Artificial Intelligence, or AI, is a term coined in the 1950s to describe the ability of machines to carry out tasks by displaying intelligent, humanlike behavior: a machine's capacity to act as an intelligent agent, perceiving its environment and taking actions to achieve its goals.

In a modern sense, researchers and developers are interested in how AI applications can augment labor and increase productivity to more effectively allocate resources and foster innovation.

But with this innovation comes the potential for misuse, where AI becomes a tool for bad actors to steal personal data or proprietary information or manipulate unknowing victims.

With the increased use of AI applications by public services and agencies, researchers at the University at Albany have taken on lead roles in producing a special issue on Artificial Intelligence in Government for Social Science Computer Review.

The editors for the publication include UAlbany’s Center for Technology in Government Director J. Ramon Gil-Garcia, UAlbany Associate Vice President for Research Theresa A. Pardo, and Professor of Digitalization Rony Medaglia of the Copenhagen Business School.

Special issue editors Theresa A. Pardo and J. Ramon Gil-Garcia

“As a new series of technologies and techniques, AI needs to be understood, and where appropriate, harnessed by government agencies,” said Gil-Garcia, who also serves as an associate professor of Public Administration and Policy at UAlbany’s Rockefeller College of Public Affairs and Policy. “Making decisions about when to use AI and for which purposes will become an imperative for governments around the world.”

The introductory article to the special issue presents an overview of some of the main policy initiatives across the world in relation to AI in government and discusses the state of existing research.

Based on an analysis of current trends in research and practice, the introduction to the special issue highlights four areas to be the focus of future research on AI in government: governance of AI, trustworthy AI, impact assessment methodologies and data governance.

“Many questions remain about how decisions guiding the use of AI in the public sector are made, as well as the potential positive and negative consequences on people,” said Pardo, who also serves as a CTG UAlbany senior fellow and research professor of Public Administration and Policy at Rockefeller College. “This special issue focuses on understanding the governance of AI and how to improve it to generate public value.”

Among the themes covered, the special issue noted that future research on AI and government will need to focus on devising novel governance models to face diverse challenges, identifying best practices, and adapting existing governance models (e.g., adaptive governance) to the unique characteristics of AI.

Examples of research questions include: What governance models (e.g., for AI talent recruitment and AI technology procurement) can be devised to meet ethical challenges and, at the same time, ensure the effectiveness of AI solutions?

The special issue also noted that the opportunities coming from the innovative disruptive power of AI in the public sector are primarily found in three areas: (1) improving the internal efficiency of public administration, (2) improving public administration decision making, and (3) improving citizen-government interaction, including the provision of better and more inclusive services and the enhancement of citizen participation in the activities of the public sector.

As a potentially disruptive sociotechnical phenomenon, AI is relevant to the full range of government’s roles: as a regulator and as a catalyst for research and development (governance of AI) and as a user (governance with AI or AI in government).

“Such potential could be realized if governments foster an environment characterized by a skilled workforce, an appropriate regulatory framework, resources that can be promptly mobilized and incentives to innovate,” continued Gil-Garcia. “Risks of AI, on the other hand, include, for example, widening societal divides, infringing citizens’ privacy rights, and clouding the accountability of public decision makers. Such risks require thoughtful strategies and regulation in order to be avoided or mitigated.”

Highlights of the special issue include “Overcoming the Challenges of Collaboratively Adopting Artificial Intelligence in the Public Sector,” coauthored by CTG Research Director Mila Gascó-Hernandez. The article finds that, despite the popularity of AI, three major impediments stand in the way of realizing its potential benefits: resistance to sharing data due to privacy and security concerns, a lack of alignment between project goals and expectations around data sharing, and insufficient engagement across organizational hierarchies.

The researchers explored ways to overcome these challenges, which could include working on-site, presenting the benefits of data sharing, reframing problems, designating joint appointments and boundary spanners, and connecting participants in the collaboration at all levels around project design and purpose.

In “Cultivating Trustworthy Artificial Intelligence in Digital Government,” CTG UAlbany faculty fellows Teresa M. Harrison and Luis Felipe Luna-Reyes discuss how public trust in AI must be cultivated and sustained; otherwise, useful AI systems may be rejected and government decision making may lose its legitimacy. They argue that public trust can be achieved when AI development takes place in contexts characterized by policies situated firmly in democratic rights and supported by well-documented and fully implemented governance practices.