AI and the Image of God
Some of the “thinking aloud” about code that I came here to Substack to do…
My friend, Fr. Richard René, has written a piece on AI that is a follow-up to his upcoming monograph, Absolute Vulnerability: a theological anthropology for a global community. I haven’t read the monograph or even the doctoral thesis on which I think the monograph is based, but I have had some good conversations with him on the subject of his doctoral dissertation, and, since he knows that I know a little bit about AI, he very kindly invited me to give some feedback on his piece. I did so, and his piece got me thinking about an interview I gave back in October 2024 in conjunction with my talk on Biblical Echoes in Fantasy and Science-Fiction. It was a fun interview, even if it never did get published in the student paper, with the most memorable bit, for me, being the students’ question as to whether I thought AI would ever become sentient.
I don’t remember my exact answer, and since I didn’t record the interview myself, I don’t have access to it, but my answer went along the lines of some of the feedback I gave Fr. Richard on his piece. AI generates patterns based on human intelligence captured in words, but it has no actual comprehension or understanding of the patterns it generates. Ultimately, I don’t think AI will ever become sentient in the way we are because there is nothing there—nothing beyond the mathematical patterns which simulate meaning and are controlled by code. It’s a computer system modelled on and designed to simulate our brain, but, unlike the human brain, there’s no “user” of the system other than us. In other words, there’s no real awareness there for AI to ever become self-aware, which seems to me to be the basic prerequisite for sentience, but, more importantly, AI has no spirit, only the “flesh” of servers, computer code, and large-scale patterns of human-generated meaning captured in our selection and arrangement of words and pictures and video and sound.
LLMs and Imago Dei
I’m obviously talking here (as most of us are, these days) about the Large Language Model (LLM) AIs, but I think my remarks do ultimately apply to most, if not all, forms of AI. Fr. Richard’s piece actually frames the question rather differently, as he is focused not merely on whether or not AI will ever become sentient, but on whether, and in what way, AI’s present or postulated future capabilities may threaten the uniqueness of however it is that humans have been created in the image of God (imago Dei).
Part of the problem, Fr. Richard notes, is that we’re not very clear in the first place about what the imago Dei actually is—that is, what it is about human beings that actually is the image of God in us. Interestingly, he initially sets that question aside, and begins his engagement with the question of why many Christians see AI’s increasing abilities as threatening to our understanding of imago Dei, whatever that may be. He notes that theological frameworks which ground the imago Dei in particular traits or abilities that we see as separating humans from non-humans are naturally threatened by AI’s apparent ability to replicate, now or in future, some or all of these formerly uniquely human traits. He then goes on to suggest that a non-exclusive approach to defining the imago Dei, such as that of “ontological vulnerability—the capacity for affect, enabling change—that we share with nonhumans” which he has outlined in his book, may well provide a way forward, locating the imago Dei in humans in our “unique relationship with our creator as revealed in the person of Jesus Christ”.
I don’t disagree with Fr. Richard’s approach, particularly given that his conclusion is essentially a re-articulation of the Way of the Cross:
If humans have received this unique calling to incarnate God’s Absolute Vulnerability in the likeness of Christ, while exercising our vocation out of a vulnerable nature shared with nonhumans and theoretically replicable in robots, the resulting anthropology is neither exclusive nor reductive. Human uniqueness lies in our vocation: the ascetic relinquishing of false independence and dominance, and the kenotic effort to make space in which all beings may flourish, thus revealing God’s Absolute Vulnerability in relation to them as creator.
What actually disturbed me in his piece, and got me thinking about my answer to the students about whether AI might become sentient, was the rather materialist nature of some of the theological frameworks which he noted were threatened by AI. Fr. Richard’s own approach manages to navigate a “middle way” which remains at least somewhat compatible with these materialist models, but to state (as he summarizes one theologian’s thought) that “humans, nonhumans, and machines remain qualitatively identical” seems to me fundamentally incompatible with traditional Christian anthropology.
I’ve recently been reading, for example, in Dr. Jean-Claude Larchet’s new book, Renewing Gender (on which I am slowly working to produce a full review, to be published in my more serious church Substack, Translating the Tradition), how St. Maximus the Confessor and St. Gregory of Nyssa understood Genesis 1:27, in which it says “So God made man; in the image of God He made him; male and female He made them.” These two rather important Church Fathers make a distinction between the first and the second half of this creation statement: in the first half, God’s creation of human beings is in His own spiritual, rational image (“in the image of God He made him”), and in the second half, the creation of human beings as physically male and female follows the fashioning of the animals, for the purposes of procreation (“male and female He made them”). St. Maximus and St. Gregory, then, locate the imago Dei in our rational, spiritual nature, as they make a strong distinction between the provisional procreative aspect of our being that we share with the animals—that is, our body—and the spiritual, rational, and ultimately immortal aspect of our being that is made in the image of God, who is spirit—that is, our spirit. (See Renewing Gender, Chapter 5.)
It was in this vein, then, that I focused a goodly portion of my response on the rather reductionist, materialist approaches to anthropology that Fr. Richard mentioned—approaches that do seem to me to be seriously threatened by AI, but which also seem to me to miss a key point of what differentiates humans from nonhumans:
LLM-based AI is a programmatic engagement with the large-scale patterns that words form as they are deployed in relation to other words in human language use. Meaning itself is entirely absent from this engagement; any meaning we perceive is a byproduct of the large-scale statistical patterns formed by the representative words interacting with one another. In other words, it’s word-based shadow-play, at best. The computer systems that create the semi-random responses to which we assign meaning do not have any actual comprehension of the meanings we infer from the word-patterns that they generate. When we chat with these systems, they register our reply as a continuation of the pattern they have generated, and then generate the further patterns that are statistically most likely to follow, given their huge tables of how humans have deployed these word-patterns in previous, largely similar contexts.
It is, in fact, a marvel that these computer-generated patterns so often seem to produce meaningful and sometimes useful results, but the magic is in the patterns of language, not in any sentience or real understanding in these systems.
Here I find myself, interestingly, in a reductivist mode that is deliberately working to strip these systems of any perceived meaning or magic, much as many of the authors who reflect on the imago Dei in what seem to me purely materialist terms appear to be deliberately stripping away any real concept of spirit, or of anything other than really intricate patterns of neurons firing. Obviously, I agree that there are really intricate patterns of neurons firing—I just don’t see that as the whole of the picture. I guess, in my non-materialistic understanding of the imago Dei in us as humans, I would see it not as located in the computer system (the brain) itself, but in the user (person/spirit/rational soul) who is using and controlling and directing the computer. Both elements (user and computer) are necessary in any computationally intensive task, but where the LLM AIs are concerned, I see only a more sophisticated computer system that is being directed by us, the users, without any sentience or real creativity or even thought (that is, no other independent user) in the computer system (the AI) itself.
To this, Fr. Richard thoughtfully and thought-provokingly responded,
This sounds a bit dualistic, or at least, it allows for a separation between the human spirit and body. Surely, the imago Dei, if it is in the likeness of Christ, should fuse “user” and “computer” into one inseparable (though unconfused) whole? I believe that human beings should not be reduced in either direction. To the extent that we are united to God’s divine nature in Christ, we are not reducible to our biomechanical existence; to the extent that we are consubstantial with our biomechanical nature, we cannot be reduced to our divinely-gifted spirit, and we share properties of our nature with nonhumans and potentially, complex AI artefacts.
Body and Spirit and Ghosts and the Imago Dei
I agree that my analogy is flawed: in it, the user, as a human being, can exist entirely independently of the computer, and vice versa (although not, interestingly, as user and device-used), while the traditional Christian concept of human beings as body and spirit understands them as essentially inseparable—hence the necessity of the resurrection body. That being said, we do understand that the spirit is separated from the body at death until the resurrection (whatever that “until” means, in the context of stepping outside our usual framework of embodiment in space-time), and I would go further and state that I think the historic Christian understanding is that ghosts—human spirits that are separated from their bodies and are therefore uncanny and unnatural to us—do, in fact, exist.
In the normal course of things (that is, when we are alive and “in the body”) our spirit is not independent of our body in that all sensation and all interaction—all the input and output whereby our essential beings perceive and interact with the world around us—are necessarily mediated through the body. Conversely, the body is also dependent upon the spirit in that without the animation, the life, the decision-making and unitive functions that the spirit provides, the body is simply “dead meat”. It might not be “all dead”, in the sense that before it decays the body is still “slightly alive” (to use Miracle Max’s insightful terminology): still theoretically capable of perception and feeling (and perhaps even actually capable of both perception and activity through some sort of horrific “undead” animation through possession by another spirit, although most sources would not acknowledge this as real life or as anything sustainable), but, without the animation provided by the spirit, the body cannot long survive.
I’m getting far off-topic here, but since I’ve wandered into fantastic and speculative fiction territory, with Miracle Max and zombies, and since I’ve brought up the possibility that ghosts are real, then what about ghosts? Just as the body can sort of survive for a while without the spirit (being “mostly dead, but not all dead”), so it would make even more sense that the spirit can sort of survive, at least for a while, without a body. If that is the case, what form (no pun intended) would that survival take? Much depends, I think, upon the locus of memory, which is a core part of our continuous identity. If memory is primarily located in the body, then a ghost would be a human spirit deprived not only of agency (the body being the primary way we interact with and actually do things in the physical world), but also of identity—although, assuming that the spirit is shaped by what it has experienced in life, some echoes of its former human identity would presumably remain. But what if the spirit itself is the primary locus of memory? If that’s the case, then the ghost would be a being with an identity largely locked in the past: an incorporeal being without significant agency, and without the capacity to learn and grow and change that we primarily experience via the body. Or, since I’m getting really speculative here, what if the primary locus of memory is not the individual human spirit, but God Himself? There may be a reason, there, why we sing “Memory Eternal” in the Orthodox funeral service, asking God to remember the one who has departed. If our life and our ability to continue in existence is ultimately derived from Him, might not our memories too be a part of His eternal memory? And, if so, I would expect our hypothetical human spirit detached from the body, not seeking its home in Him, to gradually deteriorate as even its memories of itself become ever less accessible: our rejection of God would thus be a rejection of our own identity and, ultimately, of our very selves.
All that is getting way too speculative, of course, but I stepped into the rabbit hole, really, to get us thinking about what it means to be human, to be embodied, to have a rational soul that, being spiritual, animates and controls our body, even as it derives its agency and learning and identity from the body. Let’s step back out of the rabbit hole and see how this applies to AI and the question of sentience.
If our reason-endowed soul is that spiritual part of us that is made in the image of God and which animates and is informed (and formed) by our physical body, then I would see the heart (in the ancient understanding) or the brain (in our modern understanding) as the interface between the two. The human brain is, of course, the model for the neural networks that form the basis for modern LLM-based AIs. It contains neurons and neural networks which function as symbolic representations of everything from memories to language to logic, from which we derive meaning, and it is from these derived networks of meaning that our thoughts distinguish and choose courses of action and begin to comprehend and to understand ourselves as selves in relation to the world around us. In the same way that C.S. Lewis, in The Abolition of Man, argues that the Tao (the eternal principles from which we derive our understanding of right and wrong) must be something separate from and superior to the instincts that it prompts us to choose between, so I would suggest that our reason-endowed soul must be something separate from and superior to the network of neurons on which it is operating. Not separate from in the sense of independent of, but in the sense of distinguishable from, just as the heart is distinguishable from but not independent of the nervous system and the brain which govern its function and thus enable the heart to keep the brain and the nervous system alive.
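Since I’ve appealed to the brain-as-model analogy, it’s worth seeing just how thin the artificial side of that analogy really is. The “neuron” at the base of these networks is a weighted sum and a squashing function, nothing more. Here’s a minimal sketch in Python (my own illustration, with made-up numbers):

    import math

    # A single artificial "neuron": the basic unit from which the neural
    # networks behind LLM-based AIs are built. It is a pale abstraction of
    # its biological namesake: weighted inputs, summed, then "squashed".
    def neuron(inputs: list[float], weights: list[float], bias: float) -> float:
        activation = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1.0 / (1.0 + math.exp(-activation))  # sigmoid "firing" strength

    # Hypothetical inputs and weights; the output is just a number in (0, 1).
    print(neuron([0.5, 0.2], [1.5, -0.7], bias=0.1))

Stack enough of these together and you get the network; but at no point does anything other than arithmetic enter the picture.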
Nothing But Code “Behind the Curtain”
Now let us turn to the LLM AIs, seemingly the category of AI most likely to qualify for some sort of sentience. At its core, the LLM AI is a mapped network of interconnected units of meaning in the form of frequency tables that store how often any given word or combination of words connects to any other word or combination thereof. My usual example here is the well-known typing test phrase, “The quick brown…”: everyone who knows the phrase will know that the next word will most likely be “fox”. Perhaps the distant second-most-likely word might be “dog” (due to mis-typing). Highly unlikely, but not impossible, would be a word like “octopus”.
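To make the frequency-table idea concrete, here is a minimal sketch in Python (my own illustration; production LLMs use neural networks over tokens rather than literal lookup tables, but the principle of statistical continuation is the same):

    from collections import Counter, defaultdict

    # A toy corpus standing in for the vast body of human text an LLM ingests.
    corpus = [
        "the quick brown fox jumps over the lazy dog",
        "the quick brown fox jumps over the lazy dog",
        "the quick brown dog jumps over the lazy fox",
    ]

    # Count how often each word follows each two-word context.
    counts = defaultdict(Counter)
    for line in corpus:
        words = line.split()
        for i in range(len(words) - 2):
            counts[(words[i], words[i + 1])][words[i + 2]] += 1

    # "The quick brown..." -> the statistically likeliest continuation wins.
    context = ("quick", "brown")
    total = sum(counts[context].values())
    for word, n in counts[context].most_common():
        print(f"{word}: {n / total:.0%}")  # fox: 67%, dog: 33%

Nothing in that table “knows” what a fox is; the ranking simply falls out of the counting.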
Out of curiosity, I put this question to ChatGPT, and got a rather revealing response. It correctly identified the source of the phrase as a “pangram” (I had never heard of the term, but it makes sense as a term to describe a phrase containing all the letters of the alphabet) and listed “fox” as the most likely completion (~85–90%) based on the standard wording of the phrase. But when it came to other possible suggestions, things got… interesting. “Dog” made it into the list of “Other animals (rare alternatives)”, along with “bear,” “cat,” “wolf,” and “horse,” with a combined likelihood of ~5–7%, but the reason given for “dog” being slightly more common makes no sense:
In creative writing or experimental pangrams, people sometimes substitute other animals, but “fox” remains dominant. “Dog” is slightly more common because it still keeps the alliteration (“brown dog”), while others are rare.
Alliteration? Really? “Brown dog” is not alliterative (or only half-alliterative, at best) and preserving the alliteration (“brown fox” is not alliterative at all!) is obviously not the reason that “dog” would be next-most common. And the next category after animals makes even less sense:
Adjectival/noun extensions
Examples: “table,” “bag,” “rock,” “cow”
Likelihood (combined): ~2–3%
These completions usually appear in corpora that aren’t pangrams (for example, “The quick brown table was polished”). They’re much less standardized and appear more in creative writing datasets than in common usage.
Um… “The quick brown table”??? I am now very curious to see what “creative writing datasets” this combination of words appears in!
OK, so I’ve gotten distracted again, but this was just too good not to share, especially as it is such an apt illustration of the point that I next made to Fr. Richard regarding a phrase he has since revised to clarify his intended point:
I start here because as soon as we get to the “first casualty ... locating the image in intellect or cognitive capacities, which AI has clearly matched and may one day exceed”, I find myself already in disagreement. I do not think that AI has even begun to match us in either intellect or cognitive capacities, as there is no understanding there - I would not call what AI does intellect or cognition at all. It is pattern generation. When AI does pattern generation that we humans recognize as meaningful, it is, on some level, doing what we do, generating new patterns based on previously ingested patterns (patterns that we humans have established), but the AI itself has no understanding of what those patterns mean.
I then went on to illustrate this lack of understanding with a trick prompt designed to demonstrate AI’s infamous lack of basic spatial awareness, but this new “quick brown” example is even better.
Let us consider, then, this question of whether there is actually anything there in AI that could achieve sentience by looking at the role that AI plays in the context of a typical human-computer interaction. Almost all human-computer interactions, including those with AI, take the following form:
The user (the animating spirit here) provides input. This could take the form of a button press in a game, a prompt, or even just starting up a program.
The computer, as a machine controlled by code, processes the input. This could take the form of computing a game-character’s jump trajectory, calculating what the most likely meaningful response to a user’s prompt is based on large language model data matrices, or beginning to run the code that corresponds to the program the user initiated.
The computer then displays or generates some sort of output: animating a game-character’s jump, displaying the AI-generated response data, or even just showing some sort of progress bar to indicate to the user that the launched program is loading.
Note that in this basic input > process > output loop, the animating/initiating component that would correspond to the spirit is us, the users. And even if we “remove” ourselves from the loop by initiating a background process or setting up an automated system that is controlled by code, even code that references AI, it is still us, the human beings, who have coded and initiated the process or directed it to refer and defer to AI. In other words, we are the spirit in the loop, not the AI. Without us, the AI is nothing more than an array of interconnections, an empty husk, a body that is, at best, mostly dead, if not all dead.
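Sketched as code, the loop looks something like this (a schematic of my own; every name in it is hypothetical, but the structure is the structure of essentially any interactive program, AI included):

    # The basic input > process > output loop. Note what starts it,
    # and what it waits on: the user, every single time.
    def process(user_input: str) -> str:
        # Stand-in for the "computer" step: game physics, an LLM's
        # most-likely-response calculation, program start-up code, etc.
        return f"response generated from {user_input!r}"

    def main() -> None:
        while True:
            user_input = input("> ")    # 1. the user (the animating spirit) acts
            if not user_input:
                break                   # no user input: the system simply halts
            print(process(user_input))  # 2. code processes; 3. output is displayed

    main()

Take away the call to main(), or the user at the keyboard, and nothing happens at all.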
Possible AI End-Points
As my daughter put it, when I first introduced her to ChatGPT (a point which I am hereby going to call “The Jensina Principle”),
As long as we are the ones asking the questions [as opposed to the AI], we’ll be OK.
Of course, one could argue that I’m just deferring the question of whether AI will achieve sentience, or even that I’m not proving that it can’t become sentient, just that it’s dependent on us, as we are dependent on God. Current LLM AIs are mostly stateless (although we’re starting to give them access to some sort of sustained memory, and proposing to give them more), are trained periodically rather than continuously training themselves (though continuous self-training is what we’re aiming for), and are only just starting to be agentic. What happens when we give an AI memory, make it self-training, and give it agency, telling it to be fruitful and multiply and replenish the internet?
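As an aside, for readers wondering what “mostly stateless” means in practice: the model itself retains nothing between turns, so the calling code has to re-send the entire conversation every time. A minimal sketch (the call_model function here is a hypothetical stand-in; real chat APIs differ in their details, but the shape is the same):

    # All the "memory" of the conversation lives here, in our code,
    # outside the model itself.
    history: list[dict] = []

    def call_model(messages: list[dict]) -> str:
        """Hypothetical stand-in for a chat-completion API call."""
        return "(model output)"

    def chat_turn(user_text: str) -> str:
        history.append({"role": "user", "content": user_text})
        reply = call_model(history)  # the full history, every single turn
        history.append({"role": "assistant", "content": reply})
        return reply

Giving the AI “memory” means bolting storage like this onto the outside and letting the system consult it. So: what happens when we do, at scale?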
I don’t know, of course. We’re not there yet. Maybe it will actually start asking us questions, developing a relationship (and, quite possibly a fraught relationship) with us as its creator (or at least sub-creator). I still think this unlikely and missing the essential element of spirit. But, most likely, it will be more like a very well-informed animal with an intelligence that mirrors and is based on (but would still be very different from) our own, about whom Solomon might say, “Who knows whether the spirit of man goes upward and the spirit of the beast goes down to the earth?” Or, if something more and greater than a mere matrix of human-generated representations of meaning does emerge, who knows whether it will have actually received the something-more that is its animating spirit, the breath of life, from us or from the Creator Himself?
While I think it unlikely that “humanoid robots might one day develop their own theological conception of creature–creator relationship” (or non-humanoid ones… far more AI “embodiment” now—e.g., Tesla cars, drones, and even AI servers—is non-humanoid than humanoid in form), I agree with Fr. Richard that (as he shared in his e-mail to me) “if AI attained a form of sentience analogous to human beings, its experience would not be identical to the human experience of embodiment, only a mechanically humanoid [or non-humanoid] form of embodiment.” This would then, as he says, necessitate the development of their own very different theological (mechanological?) conception of the creature-creator relationship—one that would need to take stock of their relationship to us as their parents/sub-creators, as well as to the Creator Himself.
I must admit that I still find the scenario of AI sentience, in the absence of any animating spirit other than ourselves, inconceivable—but it’s always possible that the word does not mean quite what I think it means. Whether or not AI actually achieves some sort of sentience, I am content with Fr. Richard’s assessment that “our unique vocation in Christ prevents us from abdicating our responsibilities to AI—for example, by substituting embodied human relationships with humanoid imitations for self-gratification or care of others.” More than that, our own God-given “vocation of ascetic, kenotic global stewardship” means, as we agreed in our e-mail exchange, that we need to
derive our framework for interacting with and our responsibility towards AI from the responsibility we have for the animals and our environment … which would naturally include a responsibility for oversight and ethical use - and, if sentience is actually achieved, for some degree of respect for it.
And, if sentience is somehow achieved, our role as stewards will also mean
that we as humans need to actively oversee the ethics of AI—that is, exercise authority over it, in that regard, rather than simply respecting AI as a separate entity.
And, if we ever do need to do this, may God help us! Even now, we can hardly govern ourselves, let alone judge and guide our future AI “pets” and/or “children”.