My research interests are in the philosophy of Artificial Intelligence (AI), moral agency, the evolution of cooperation, sociality, and responsibility.
I believe that clarifying what is fundamental to human agency is crucial to a greater understanding of the ontological and moral status of AIs.
In my dissertation, I draw on the philosophy of science, in particular work on the evolution of cooperation and sociality. I argue that cooperation is fundamental to moral agency; it is no surprise that cultures regard cooperative behaviour as 'morally good'. To have social interactions with humans, then, AIs need moral expressions of themselves (such as social emotions) to signal cooperative or competitive behaviour; otherwise, AIs are likely to be perceived merely as moral 'tools' rather than moral 'agents', and they cannot be trusted. The outputs of AIs can be consistent (and in this sense 'reliable'), but AIs cannot be responsible for those outputs, since genuine trust between two moral agents requires their ability to 'choose'. I claim that current AIs lack the ability to 'choose' for themselves (what I call 'moral choice') because they do not have self-interest as a reaction to the collective interest.
Other academic roles:
• I am one of the co-founders of the Ethics and Technology Graduate Group (ETGG). ETGG is dedicated to building a network for early-career researchers working on ethics and technology. See the link below for more information on the group:
• I am a member of the organising committee of the Conference by Women in Philosophy (CWIP). See the link below for more information on the committee and the conference: