⭐ External funding:
I am the recipient of the Society for Applied Philosophy (SAP) Doctoral Scholarship Award (£10,000 for the academic year 2022-23).
⭐ My research interests:
The philosophy of Artificial Intelligence (AI), emotions, the evolution of moral agency, responsibility, and social ontology
⭐ My PhD research project:
My dissertation argues that emotions are necessary for artificial intelligence (AI) systems to meet the criteria of moral agency.
First, I discuss what moral philosophers mean by ‘moral agency’ and ‘morality’, and then take an evolutionary approach to the criteria of moral agency. Drawing on the literature on the evolution of moral agency, I argue that the moral capacities which enable moral agents to make moral decisions evolved over time. I claim that moral decisions are solutions to moral problems which arise during social interaction (e.g., cooperation or competition).
Second, I focus on a specific capacity of moral agents: emotions. I argue that emotions play both epistemic and moral roles, and that they help moral agents to make subjective, creative moral decisions. Some morally relevant information, namely phenomenal (non-conceptual) information, is accessible only through feeling emotions. In line with broader developments in the philosophy of mind, the phenomenal and intentional aspects of emotions have come to be viewed as essentially intertwined: emotions are intentional feelings or, in Peter Goldie’s (2000) terms, ‘feelings towards’. I argue that the phenomenal dimension of emotion displays a reactive character that distinguishes it from perceptual experience; reactive emotions (what Strawson calls ‘reactive attitudes’) are necessary for moral understanding. I illustrate this argument with a thought experiment, ‘Eleanor’s Emotionless Room’. Inside the room, Eleanor reads books from many fields and reasons about moral matters, but she feels no emotions; in this sense the room is emotionless. When she leaves the room and feels emotions for the first time, she realises that some of her previous moral decisions and moral values just feel wrong: she has learned further morally relevant information that was not available to her in the emotionless room.
Third, I discuss the moral status of AIs and argue that current AIs do not meet the criteria of ‘full’ moral agency, because they lack the emotions that full moral agency requires. If future AIs have certain moral capacities such as autonomy or personhood but lack emotions, they will make worse moral decisions, since they will not have access to the morally relevant information that comes from having emotions.
Fourth, I argue that AIs can be held (i) morally or (ii) legally responsible depending on their moral status. AIs that lack emotions but have other moral capacities are not morally responsible, but they can be held legally responsible, similar to how collective agents (like Facebook) can be held legally responsible. If future AIs can have emotions, then we can hold them morally responsible in addition to their legal responsibility, since they can have the reactive emotions that ground the practice of moral responsibility: for example, they can feel resentment or guilt about their own or others’ moral wrongdoings, or gratitude for other moral agents’ actions.
⭐ Other academic roles:
I am one of the co-founders of the Ethics and Technology Early-Career Group (ETEG). ETEG is dedicated to building a network for early-career researchers working on ethics and technology. See the link below for more information on the group:
I am one of the organisers of the Conference by Women, Genderqueer, and Non-binary Philosophers (CW*IP). See the link below for more information on the group:
I am a team member for the University of Vienna's "A Salon for Underrepresented Philosophers" (UPSalon). See the link below for more information on the group: