Intelligent machines – from automated robots to algorithmic systems – can create images and poetry, steer our preferences, aid decision-making, and kill. Our perception of their capacities, relative autonomy, and moral status will profoundly affect not only how we interpret and address practical problems in world politics over the next 50 years but also how we prescribe and evaluate individual and state responses. In this article, I argue that we must analyse this emerging synthetic agency in order to effectively navigate – and theorise – the future of world politics. I begin by outlining the ways that agency has been under-theorised within the discipline of International Relations (IR) and suggest that artificial intelligence (AI) disrupts prevailing conceptions. I then examine how individual human beings and formal organisations – purposive actors with which IR is already familiar – qualify as moral agents, or bearers of duties, and explore what criteria intelligent machines would need to meet in order also to qualify. After demonstrating that synthetic agents currently lack the ‘reflexive autonomy’ required for moral agency, I turn to the context of war to illustrate how insights drawn from this comparative analysis counter our tendency to elide different manifestations of moral agency in ways that erode crucial notions of responsibility in world politics.