Dilara Gudmunsen

Year of Enrollment: 
2019

- External funding: I am the recipient of the Society for Applied Philosophy (SAP) Doctoral Scholarship Award (£10,000) for the academic year 2022-23. Information about the grant can be found at the following link: https://www.appliedphil.org/funding/doctoral-scholarships/doctoral-schol...

- My research interests: The philosophy of artificial intelligence (AI), the evolutionary approach to agency and moral agency, emotions, moral responsibility, and social ontology (in particular, collective agency)

- My PhD research project: My dissertation argues that emotions are necessary for artificial intelligence (AI) systems to meet the criteria of full-fledged moral agency.

First, I discuss what moral philosophers mean by ‘moral agency’ and ‘morality’, and I then take an evolutionary approach to the criteria of moral agency. Drawing on the literature on the evolution of moral agency, I argue that the moral capacities that enable agents to make moral decisions evolved over time, and that moral decisions are solutions to moral problems which arise during social interaction (e.g., cooperation or competition).

Second, I focus on a specific capacity of moral agents: emotions. I argue that emotions are sources of moral evidence and therefore justify moral beliefs; they help moral agents to make subjective and creative moral decisions. I further argue that one type of moral evidence, namely phenomenal (non-conceptual, or more precisely pre-conceptual) evidence, is accessible only through feeling emotions.

Third, I discuss the moral status of AIs and argue that current AIs do not meet the criteria of ‘full’ moral agency, because they lack the capacity to feel emotions, which is necessary for full moral agency. If future AIs have certain moral capacities such as reasoning but lack emotions, they will make worse moral decisions, since they cannot access the moral evidence that comes from feeling emotions.

Fourth, I argue that AIs can be held (i) morally or (ii) legally responsible depending on their moral status. AIs that lack emotions but have other moral capacities are not fully morally responsible; still, they can be held responsible in some ways, similar to how collective agents (like Facebook) can be held responsible. If future AIs can have emotions, then we can hold them fully morally responsible, because their reactive emotions allow us to hold them responsible in further ways: they can feel guilt about their own moral wrongdoings or gratitude for other moral agents’ praiseworthy actions.

- Other academic roles: I am one of the co-founders of the Ethics and Technology Early-Career Group (ETEG) (https://eteg.univie.ac.at), which is dedicated to building a network for early-career researchers working on ethics and technology. Previously, I was one of the organisers of the Conference by Women, Genderqueer, and Non-binary Philosophers (CW*IP) (https://www.cwip-conference.com) from 2020 to 2023, and I was a team member of the University of Vienna's "A Salon for Underrepresented Philosophers" (UPSalon) (https://upsalon.univie.ac.at) for the 2021-2022 academic year.

- Fun Fact: I was on TV once! Euronews interviewed me about the issue of killer robots: https://www.facebook.com/BogaDilara/posts/10157357141663439

Qualification

Visiting Student in Philosophy at Australian National University (August - October 2018)
MA in Philosophy at Bilkent University
BA in American Culture and Literature at Bilkent University

Supervisor