Using Will-Extension For Will-Transformation
Many who have lived with me in close community know that I struggle with defensiveness. It’s a particularly persistent flaw, because of a built-in catch-22: being told I’m being defensive makes me more defensive. I know I’m not the only person like this. In the moment, defensive people always feel justified. The arguments we make seem perfectly reasonable to us, even when they’re not. By the time we realize what happened, the conversation is already damaged. We might have known this about ourselves for years. Knowing hasn’t been enough to change it.
Recently I’ve been trying something different to work on my particular version of this vice. I sat down with Claude and described the pattern in detail: the specific situations that trigger me, the way my defensiveness shows up, the rationalizations I reach for. I asked it to research best practices for working on this kind of reactivity and write me a report tailored to my situation. Then, after it had researched and drafted the report, I asked it to build an interactive exercise (in the form of an HTML file I could open in my browser) with tips and, most importantly, practice scenarios where I have to choose the best response from a set of options.
(Quick side note: this post is about AI and human will. It’s part two of two, so if you didn’t read my other post from earlier this week, please stop here and go read that one first!)

Because the scenarios are built around my actual patterns rather than generic advice about “active listening” or “staying calm,” and because I have to actively choose the non-defensive response, I can feel something shifting inside me. I read a scenario and recognize it immediately. I see the defensive response and feel the pull toward it. And then I read the non-defensive one and push myself toward selecting it. It’s a small thing, but the repetition matters. Time will tell, but I hope to find myself reaching for those non-defensive responses more often in real conversations, not because I’ve been lectured into it but because I’ve practiced the alternative enough times that it’s becoming available to me in the moment.
I want to be careful here. This is not a replacement for therapy. My therapist does things no AI can do. But there’s one narrow thing AI does better than any human: it can generate unlimited practice reps tailored to your specific situation, and you can run through them as many times as you want.
But here’s the part that interests me theologically. In my last post, I argued that AI extends your will but doesn’t transform it. I still believe that. Claude didn’t make me want to be less defensive. I already wanted that. What it did was give me a way to practice the change I already desired, at a level of specificity and repetition that wasn’t available to me before.
This reminds me of how Augustine talks about prayer. For Augustine, prayer isn’t about informing God of what you need. God already knows. Prayer is the discipline of reshaping your own desires, not God’s. Essentially, prayer is a way of extending our will to transform our will. And that’s very similar to what I see going on here with AI-generated practice scenarios.
I’m not saying that this example of AI use is prayer. But I notice a structural similarity. When I open that HTML file on my desktop and work through the scenarios, I am engaged in an intentional, repeated act of willing myself toward a version of myself I can’t yet fully choose when my heart rate is up. I’m not being transformed from the outside. I’m using a tool to practice the transformation I’ve already consented to from the inside. The AI is extending my will, yes. But in this case, the will it’s extending is my will to be transformed.
That opens up some interesting possibilities for formation. Imagine a spiritual director who asks her directees to pay attention, between sessions, to the moments when they feel most pulled away from the person they want to be. What if she could then say: take that pattern, describe it to an AI, and have it build you a set of practice scenarios around exactly that place of struggle? Not as a substitute for the next session — nothing replaces the director’s presence, her silences, her questions — but as homework. The directee practices choosing differently in a low-stakes space so that when the real moment comes, the better response is closer to the surface.

It’s not unlike what the Desert Fathers did with their repeated prayers and examinations of conscience: creating a discipline of repetition so that virtue becomes reflex. The AI doesn’t do the spiritual work. But it builds the practice field where the work can happen between the sessions where it’s named.

The tool is still just a tool. But when it’s pointed at the right kind of task by someone who already desires growth, it becomes something more interesting than a content generator. It becomes a rehearsal space for the person you’re trying to become.
I hope the shift I feel when I practice this way is a lasting one, but only time will tell. The practice is doing what practices do: making a different response available where only the old one used to live. It’s not therapy and it’s not prayer, but it’s not nothing. The question isn’t whether AI can transform us toward virtue. It can’t. The question is what happens when we use it as a tool to practice the transformation we’ve already said yes to.

