Can Artificial Intelligence be a Kantian Moral Agent?

    As the leading innovation of this century, Artificial Intelligence (AI) is shaping the future of the world and, with it, humanity. Its potential benefits, difficulties, dangers, achievements, and shortcomings have produced a new form of intelligence technology whose future remains unforeseen, raising the question of whether it is a threat to humanity.

    However, what exactly should we fear most: a nuclear bomb, or AI? This paper argues that AI may count as a greater threat to humanity than a nuclear bomb. The consequences of the latter are known, and its use accordingly concerns many; the potential impact of AI, by contrast, has not yet been realised and cannot be held to account in advance, which leaves the necessary precautions undefined. Its dangers carry a peculiar quality of being visible only in hindsight: only with time can predictions become precise enough for a framework of boundaries and precautions to develop. In Superintelligence: Paths, Dangers, Strategies, Bostrom writes: “We have little idea when the detonation will occur, though if we hold the device to our ear, we can hear a faint ticking sound” (Bostrom, 2014). That ticking grew louder after OpenAI, established as a non-profit organisation to investigate the possible risks of AI, developed and released GPT-4, with an estimated value of around 13 billion dollars.
