I am currently researching theories and kinds of agency, in particular ‘artificial agents’. I work on the question of whether Artificial Intelligences are capable of agency. Questions about the nature of agency raise future challenges for the philosophy of action literature. Depending on how we answer the question of whether artificial intelligence systems are capable of intentional agency, we may take different approaches to what kind of agency these machines are capable of. This is why I believe we should develop a better account of ‘action’ and ‘human agency’: to explain what is fundamental to human beings that machine agency might (or might not) lack.
Previously, in my MA thesis, I discussed the ‘Responsibility Dilemma’, a significant issue for Just War Theory. The dilemma concerns how to explain why non-combatants are not liable to lethal defensive harms despite being blameworthy. I suggested that we can overcome this dilemma by recognizing armies as corporate agents who bear liability. The collective agency literature has recently paid attention to the possibility of collective agents that have beliefs, desires, and responsibility. I focused on a distinct type of collective: corporate agents. I claimed that an army is an agent separate from its members, one that can act and make decisions at a collective level in ways individuals cannot. This not only solves the ‘Responsibility Dilemma’ but also helps us give plausible and interesting explanations of liability and blameworthiness; in particular, I claimed that liability and blameworthiness are not coextensive. I also claimed that there are two kinds of liability: ‘liability for lethal defensive harms’ and ‘liability for reparative costs’.