Taming AI with morals? Skeptical notes from a sociologist
Published 6 October 2025
Sociologist Volker H. Schmidt of the Department of Sociology and Anthropology, National University of Singapore, examines whether governments and ethics can contain advanced AI’s risks—and comes away skeptical.
“If the risks that leading experts say are intrinsic to advanced AI systems are real, then there is little reason to expect that they can be contained, be it by political or ethical means,” says Schmidt.
His article, published in the KeAi journal Risk Sciences, catalogs risks ranging from existential threats to broad societal harms. “Perhaps the greatest risk identified by technically competent AI specialists is extinction of the species by AIs that massively exceed human cognitive capacities,” notes Schmidt. The disruptive harms he lists include autonomous weapons, engineered pandemics, large-scale manipulation via deepfakes, discrimination, financial scams, cyber theft, political disruption, and unprecedented surveillance.
Schmidt argues that regulation is unlikely to keep pace or scale. “Given the enormous speed at which AI is developing, legislation may lag, rendering it obsolete by the time it comes into effect.”
Schmidt states that effective rules would need global agreement and robust enforcement—both improbable amid geopolitical competition and collective action problems. “Technological trends further complicate control. AIs can be mass produced, are becoming cheaper and can even be downloaded for free from the internet, and increasingly run on ubiquitous devices,” adds Schmidt.
Ethics, he warns, is indeterminate and weak as a constraint.
“Ethics has its own risks. Inflationary moralization only covers up our puzzlements and anxieties in the face of disquieting uncertainties, while potentially flaring up conflict rather than defusing it,” he says. “Moral norms lack reliable enforcement, vary across traditions, and can legitimize conflicting courses of action.”
Grounded in systems theory, Schmidt contends that states are fragmented, self-referential, and constrained by their own rationalities—limiting the coherent, society-wide steering many assume is possible. He concludes with a caution: “Meaning well is not the same as getting ‘good’ results.”
Contact author:
Volker H. Schmidt; Department of Sociology and Anthropology, National University of Singapore, 11 Arts Link, 117570, Singapore; E-mail: socvhs@nus.edu.sg
Conflict of interest:
The author declares that he has no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
See the article:
Schmidt, V. H. (2025). Taming AI with morals? Skeptical notes. Risk Sciences, 1, 100016. https://doi.org/10.1016/j.risk.2025.100016