Dilara Gudmunsen (née Boga)

Year of Enrollment: 
2019

⭐ External funding:

I am the recipient of The Society for Applied Philosophy (SAP) Doctoral Scholarship Award (£10,000) for the academic year 2022-23.
(Information about the grant can be found at the following link: https://www.appliedphil.org/funding/doctoral-scholarships/doctoral-schol...)

⭐ My research interests:

The philosophy of Artificial Intelligence (AI), emotions, moral responsibility, and the evolution of moral agency

⭐ My PhD research project:

In my dissertation, I argue that emotions are necessary for artificial intelligence (AI) systems to meet the criteria of moral agency.

First, I discuss what moral philosophers mean by ‘moral agency’ and ‘morality’, and then take an evolutionary approach to the criteria of moral agency. Drawing on the literature on the evolution of moral agency, I argue that the moral capacities which enable moral agents to make moral decisions evolved over time. I claim that moral decisions are solutions to moral problems which arise during social interaction (e.g., cooperation or competition).

Second, I focus on a specific capacity of moral agents: emotions. I argue that emotions are sources of moral evidence, and so they justify moral beliefs; they also help moral agents to make subjective and creative moral decisions. I argue that one type of moral evidence, namely phenomenal (non-conceptual, or more precisely pre-conceptual) evidence, is accessible only through feeling emotions. I illustrate this argument with my thought experiment, ‘Eleanor’s Emotionless Room’, which shows that Eleanor can access further moral evidence from emotions which was not accessible through reason alone. When she leaves the room and feels emotions for the first time, she realises that some of her previous moral decisions and moral values just feel wrong.

Third, I discuss the moral status of AIs and argue that current AIs do not meet the criteria of ‘full’ moral agency because they lack the capacity for feeling emotions, which is necessary for full moral agency. If future AIs have certain moral capacities such as autonomy or personhood but lack emotions, they will make worse moral decisions, since they will not have access to the moral evidence that comes from feeling emotions.

Fourth, I argue that AIs can be held (i) morally or (ii) legally responsible depending on their moral status. AIs which lack emotions but have other moral capacities are not morally responsible, although they can be held legally responsible, similar to how collective agents (like Facebook) can be held legally responsible. If future AIs can have emotions, then we can hold them morally responsible in addition to holding them legally responsible, since they can have the reactive emotions that create the practice of moral responsibility: for example, they can feel resentment or guilt about their own or others’ moral wrongdoings, or gratitude for other moral agents’ actions.

⭐ Other academic roles:

I am one of the co-founders of the Ethics and Technology Early-Career Group (ETEG). ETEG is dedicated to promoting a network for early-career researchers working on ethics and technology. See the link below for more information on the group:
https://eteg.univie.ac.at

I am one of the organisers of the Conference by Women, Genderqueer, and Non-binary Philosophers (CW*IP). See the link below for more information on the group:
https://www.cwip-conference.com

I was a team member for the University of Vienna's "A Salon for Underrepresented Philosophers" (UPSalon) for the 2021-2022 academic year. See the link below for more information on the group:
https://upsalon.univie.ac.at

Fun Fact: I was on TV once! Euronews interviewed me about the issue of killer robots: https://www.facebook.com/BogaDilara/posts/10157357141663439

Qualifications

Visiting Student in Philosophy at Australian National University (August - October 2018)
MA in Philosophy at Bilkent University
BA in American Culture and Literature at Bilkent University

Supervisor