Absolute Vulnerability and Artificial Intelligence (AI)
A postscript for my upcoming monograph
The following is a postscript that I intend to append to my forthcoming monograph with Fordham University Press, tentatively entitled Absolute Vulnerability: A Theological Anthropology for a Global Community.
The explosion in the popular use of AI since the release of ChatGPT in November 2022 has heightened western cultural anxiety around a question humans have speculated about for centuries: could technological artefacts, made to replicate human behavior and activity, pose an existential threat to human beings themselves?[1] Responses range from the pessimism and caution of figures as diverse as Stephen Hawking and Elon Musk, to transhumanist optimism,[2] with many shades of opinion in between.
From a Christian theological standpoint, anxiety over the (perceived) existential threat of AI stems from uncertainty about the imago Dei—the fact that, as Marius Dorobantu puts it, “the jury is still out in regard to deciding what exactly … renders human in the image and likeness of God.”[3] As I noted at the beginning of this book, patristic writers offered various substantive answers to this question. Since the advent of evolutionary theory in the nineteenth century, however, the landscape has broadened further, resulting in a variety of non-substantive interpretations of the imago Dei.[4] As AI continues to lay siege to cherished notions of what sets us apart from nonhumans, it provokes our angst over which of these accounts of the imago Dei will survive the challenge. Because this book has revolved around this very question, it is fitting to close by reflecting on the pitfalls of theological anthropologies that seek to respond to AI, and to suggest how my presentation of the imago Dei rooted in Absolute Vulnerability offers a constructive way forward.
The first and most common pitfall is an oppositional, exclusivist mode, where the imago Dei is defined by human traits that AI could “never” achieve. The first casualty of this approach has been the traditional view locating the image in intellectual or cognitive functions, which AI is able to replicate to an extent that often matches, and may one day exceed, human abilities.[5] Other accounts identify the image in relationality, consciousness, interiority, sociality, or embodiment—traits AI does not and “cannot” possess.[6] Yet the strategy of defining the image by traits exclusive to humans is problematic. It is impossible to state with certainty what capacities AI may or may not attain in the future. Relationality, for instance, would be very difficult for AI to achieve, but who can predict with confidence that it will never do so? Theological accounts based on such exclusion are therefore always at risk of being proven wrong. At best, they defer anxiety about AI’s existential threat; they cannot resolve it. What is needed, then, is not an account of the imago Dei based on human abilities—which invoke an ideal of self-mastery that is always liable to collapse when rivaled or surpassed by AI—but an account grounded in the image’s exemplar: God himself.
If the first pitfall is opposition, the second—less common but still significant—is accommodation. Anne Foerst’s work is instructive here. She argues that traits-based accounts of the imago Dei are fruitless, since these traits often have material bases that could conceivably be replicated in humanoid robots such as “Cog.”[7] Instead, she contends that the imago Dei “cannot be identified with particular skills and abilities but is God’s promise to start and maintain a relationship with humans.”[8] “Let us make the human being in our image…” (Gen 1:26) becomes a divine performative act, constituting a new partnership between God and human beings as unique, dignified persons, even as humans, nonhumans, and machines remain qualitatively identical. For Foerst, this anthropology is not reductive, but rather calls humans to greater humility and to a “right understanding of dominion as a caring and respectful life shared with animals in one world.”[9]
While Foerst’s compassionate and humble vision resonates with my own argument for an ecology in which humans and nonhumans are members of a global community sharing a vulnerable nature, her denial of qualitative distinctions between humans and nonhumans ultimately erases human uniqueness and, with it, humanity’s specific vocation and responsibility. Her claim that qualitative equality fosters humility, respect, and care depends on her assertion that humans alone possess “faithful approval”—trust in God. The imago Dei, she writes, is “ineffective if the listener does not have any faith.”[10] Yet without this distinctly human trait of “faithful approval,” there is no grounding for the responsibility she envisions. The shortcoming of Foerst’s account suggests that a theological response seeking to engage constructively with AI must affirm human uniqueness without falling into oppositional exclusivity.
A “middle way” is possible, based on the anthropology I have presented in this book. I have contended that the imago Dei, the basis of theōsis or divine-human communion, is ontological vulnerability—the capacity for affect that enables change—which we share with nonhumans. In our animality, we may indeed be viewed as biological “machines,” subject to biological and chemical mechanisms that shape much of our behavior. This could, in turn, align us functionally with humanoid robots that achieve sufficient complexity. Yet while humans may not differ substantially from nonhumans or (theoretically) from highly complex robots, we exist in a unique relationship with our creator as revealed in the person of Jesus Christ, according to the scriptures and the tradition of the Church. Within this relationship, we receive a calling to embody the ultimate teleological configuration of our animality in Absolute Vulnerability: the unconditioned receptive openness to the other that underlies God’s loving being.
This assertion does not deny the possibility that nonhumans may have unique and direct relationships with their creator; nor does it even preclude the possibility that humanoid robots might one day develop their own theological conception of the creature–creator relationship.[11] Yet humans may appropriately claim a unique calling to actualize the telos of our animality in God’s Absolute Vulnerability, in the likeness of the incarnate Son of God. Our agency in this process is kenosis—the yielding of the false claim to independence. This is not a trait we possess over and against nonhumans; it is made possible because we are created in the image of the Son and Word of God, the prototypical human being.
If humans have received this unique calling to incarnate God’s Absolute Vulnerability in the likeness of Christ, while exercising our vocation out of a vulnerable nature shared with nonhumans and theoretically replicable in robots, the resulting anthropology is neither exclusive nor reductive. Human uniqueness lies in our vocation: the ascetic relinquishing of false independence and dominance, and the kenotic effort to make space in which all beings may flourish, thus revealing God’s Absolute Vulnerability in relation to them as creator. This stance toward God’s nonhuman creatures in turn provides an ethical framework for human stewardship of AI artefacts.[12] Just as we refuse to define humans in ways that set us apart from nonhumans, so we refuse to define AI solely by what it can or cannot do compared to humans. Rather, we affirm AI’s capacities and engage it in service of our vocation of ascetic, kenotic global stewardship. This includes ensuring that AI is not misused for exploitation and domination but directed toward love and flourishing.[13]
At the same time, our unique vocation in Christ prevents us from abdicating our responsibilities to AI—for example, by replacing embodied human relationships with humanoid imitations, whether for self-gratification or for the care of others. Even if AI were to acquire some form of sentience, its identity would remain rooted in a mechanically humanoid, rather than biologically human, experience of embodiment. It is therefore inappropriate for humans to delegate their unique vocational responsibilities to AI except under conditions of active ethical oversight and ultimate human accountability.
[1]. Simon Parkin, “Science Fiction No More? Channel 4’s Humans and Our Rogue AI Obsessions,” Television & Radio, The Guardian, June 14, 2015, https://www.theguardian.com/tv-and-radio/2015/jun/14/science-fiction-no-more-humans-tv-artificial-intelligence.
[2]. Elise Bohan, Future Superhuman: Our Transhuman Lives in a Make-or-Break Century (NewSouth, 2022).
[3]. Marius Dorobantu, “Artificial Intelligence as a Testing Ground for Key Theological Questions,” Zygon: Journal of Religion and Science 57, no. 4 (2022): 989.
[4]. Ibid., 990.
[5]. Dorobantu, “Artificial Intelligence,” 991–993.
[6]. Ibid.
[7]. Anne Foerst, “Cog, a Humanoid Robot, and the Question of the Image of God,” Zygon: Journal of Religion and Science 33, no. 1 (1998): 104.
[8]. Ibid., 105–106. In this regard, Foerst’s argument is very similar to that of Haslam.
[9]. Ibid., 108.
[10]. Ibid., 106.
[11]. See Dorobantu, “Artificial Intelligence,” 987–989.
[12]. For a framework for ethical discussion of the use of AI, see Ephraim Radner, “Artificial Intelligence: A Theological Perspective,” Toronto Journal of Theology 36, no. 1 (2020): 81–83.
[13]. Ibid.
