I am currently researching theories and kinds of agency, in particular 'artificial agents'. I work on the question of whether artificial intelligences are capable of agency. Questions about the nature of agency raise future challenges for the philosophy of action literature: even if artificial autonomous systems are not capable of intentional agency, they may still be capable of some other kind of agency. If so, we should say more about the intentionality of human action and how artificial agency differs from it.
In my MA thesis, I discussed the ‘Responsibility Dilemma’, a significant issue for Just War Theory. The dilemma concerns how to explain why non-combatants are not liable to lethal defensive harms despite being blameworthy. I suggested that we can overcome this dilemma by recognizing armies as corporate agents that bear liability. The collective agency literature has recently paid attention to the possibility of collective agents that have beliefs, desires, and responsibility. I focused on a distinct type of collective: corporate agents. I claimed that an army is an agent separate from its members, one that can act and make decisions at a collective level in ways individuals cannot. This not only solves the ‘Responsibility Dilemma’ but also helps us give plausible and interesting explanations of liability and blameworthiness; in particular, I claimed that liability and blameworthiness are not coextensive. I also claimed that there are two kinds of liability: ‘liability for lethal defensive harms’ and ‘liability for reparative costs’.