Beyond the Algorithm: Philosophy’s Role in Shaping the AI Era

Alessandra Buccella, PhD, Assistant Professor in the Department of Philosophy
  1. Tell us a little about yourself and your role at UAlbany.

I’ve been an Assistant Professor in the Philosophy department since 2023. I grew up in Italy, but I have been in the U.S. since 2014. My main area of research currently is the ethics and philosophy of AI, but I also work in the philosophy of cognitive neuroscience and philosophical psychology. Outside of work, I am a big basketball fan (I played competitively until after college) and love skiing.

  2. How is the rapid development of generative AI (like ChatGPT or image generators) reshaping traditional ethical frameworks, and what role can philosophers play in shaping public discourse or policy around these technologies?

The rapid pace at which generative AI technologies are developing is putting a lot of pressure on many ethical principles that people used to simply take for granted. For example, we tend to assume that if an agent says or does something that, directly or indirectly, causes harm to someone else, the agent bears some kind of responsibility and can be held accountable for the consequences of her decisions. However, systems like ChatGPT are not agents in the same sense as human beings, and yet they can produce novel content and make autonomous decisions that have real impacts on people’s lives. Because they are not really part of our society and are not bound by the same ethical obligations human beings have towards each other, it is not clear that they can actually be held responsible for anything they generate or for its potentially harmful consequences. But if AI systems are not responsible for what they create entirely on their own, then who is? Knowing how to answer questions like this is important because with every “accountability vacuum” comes the risk of someone exploiting it for personal gain or to hurt others with impunity.

As philosophers, we are trained to ask foundational questions about the nature of agency, responsibility, and what makes human beings accountable to each other. Going back to these foundational questions in the age of AI is key to making sure that these technologies are deployed safely and that the potential harms associated with them are minimized.

  3. In your view, what are the most urgent ethical challenges we face when machines begin to emulate human capacities such as empathy, decision-making, or even moral reasoning?

Alongside dealing with the accountability vacuums I mentioned above, I think the most pressing ethical challenge humanity faces right now in the context of AI is determining which applications of AI are compatible with human flourishing and which ones are not. Like other disruptive technologies in the past, AI has the potential to radically reshape the standards of wellbeing in society and the conditions under which individuals are granted certain fundamental rights. We have to make sure that, whenever we decide to incorporate AI into some aspect of everyday life, we ask ourselves: Is it really going to make our lives better? Are the benefits accessible to everyone or just a privileged few? And if something goes wrong, who will be impacted the most? Ironically, these are not really questions about AI itself, and AI cannot answer them for us. They are questions about us, about who we are as people and what we want the future to look like. AI is not raising entirely new ethical challenges, but it is certainly forcing us to take long-standing ones even more seriously.

  4. What do you see as the most exciting or underexplored philosophical questions at the intersection of ethics and AI today, and how might graduate students get involved in tackling them?

We are starting to see some engagement with questions around responsibility and the best ways to ensure that AI does not reinforce existing inequalities but instead is used as a tool to increase access to resources and opportunities around the world. However, thinking philosophically about AI can also help us discover more about how humans think and what moves them. Personally, I find the various ways in which people of different ages and backgrounds relate to generative AI chatbots fascinating. I also like to think about the beliefs, values, and attitudes that underlie and influence the outputs of AI chatbots themselves: since these systems are trained on mostly human data, they are effectively a representation of our collective mindset and social consciousness. More generally, there is a lot that we have not yet explored in the context of human-AI interactions and relationships. Because the ways in which humans interact and relate to the world around them are virtually limitless, this topic is something that can be tackled by many disciplines and from many perspectives.

  5. For students considering graduate study in Philosophy at UAlbany, how do topics in ethics and philosophy of AI offer new avenues for interdisciplinary research, and how is the department fostering these connections?

The Philosophy department is very involved in the activities of the AI+ Institute and the newly established AI & Society College and Research Center. Graduate students in our department have many opportunities for interdisciplinary collaboration, research funding, and participation in events such as seminars and conferences. Moreover, Professors Jason D’Cruz, P.D. Magnus, and I, along with some of our current graduate students, have an ongoing collaboration with a team of researchers in Human-Computer Interaction at IBM.