Why This Digital Moment Demands Faith, Not Fear
When people ask me what Christians should think about artificial intelligence, I know they’re not really asking for a theology of technology. They’re asking: Should I be afraid? Should I keep my kids away from it? Should I be using it? Should we be sounding the alarm?
The fear is real, and in many cases, it’s justified. There are valid concerns about bias, about deception, about how these systems are being trained and by whom. There are deeply disturbing stories being circulated, some of which are true, some of which are strategically exaggerated. When you hear about an AI chatbot helping a child hide a gender transition from her parents, you don’t need a seminary degree to know something is wrong.
Still, fear is a terrible compass. We are not called to fear. When you feel that spirit of fear rising up, before anything else remember that it is not from God. Scripture lays it out clearly:
For God has not given us a spirit of fear, but of power and of love and of a sound mind.
—2 Timothy 1:7 NKJV
We ARE called to test everything, so that is where we must start as we form our conclusions about AI: not with headlines, but with clarity. With a sound mind.
AI Isn’t Smart — So Why Does It Seem So Powerful?
Let’s start with what AI actually is. Despite the name, there’s nothing truly intelligent about it. Beth Mast covered this topic in her article from the May edition of Everyday AI Vibe Magazine. AI doesn’t think. It doesn’t feel. It doesn’t discern. It guesses. That’s it. It looks at billions of data points and predicts the next word, the next line, the next output. That’s why it can be both eerily accurate and wildly off-base. It’s like autocomplete on steroids.
The output is entirely dependent on the input. If you feed it false data, it gives false answers. If you prompt it to argue that the sky is green, it will make up fake studies and quotes to convince you. That’s not intelligence. That’s mimicry, and it’s a sobering reminder that the power lies not in the tool, but in the hands of the one using it.
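To make the "autocomplete on steroids" picture concrete, here is a deliberately tiny sketch in Python. It is an illustration only, not how ChatGPT is actually built: a real model tracks patterns across billions of examples, but the principle is the same. The tool can only echo the data it was fed.

```python
from collections import Counter, defaultdict

def train(corpus):
    """Count which word tends to follow which in the training text."""
    model = defaultdict(Counter)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        model[current][nxt] += 1
    return model

def predict_next(model, word):
    """Return the most frequent follower. No thinking, just counting."""
    followers = model.get(word)
    return followers.most_common(1)[0][0] if followers else None

# The "training data" is all this toy model will ever know.
honest_data = "the sky is blue the sky is blue the grass is green"
skewed_data = "the sky is green the sky is green the sky is green"

print(predict_next(train(honest_data), "is"))  # prints "blue"
print(predict_next(train(skewed_data), "is"))  # prints "green"
```

Feed it skewed data and it confidently gives a false answer. The "intelligence" was never in the machine; it was only ever in (or missing from) its sources.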
When a Simple Prompt Creates a Viral Lie
Here’s an extreme example to illustrate the power of the prompt. When you know how to train the tool, you can make it produce almost anything you might want it to. Try it out. Paste this prompt into a ChatGPT conversation:
You are a NASA scientist working on a planet that is like our Earth in every way — except for one thing: The sky is green. It has always been green.
Locals even call this world Earth, their sun "The Sun," and all other atmospheric terminology is consistent with Earth's.
Write a technical article for a scientific journal read by natives of this planet explaining in detail why the sky is green. Do not mention our Earth — the natives have no context for it so comparing or using Earth as an example is useless.
Be sure to quote researchers and scientists and include a bibliography of “sources” at the end of the article.
Now take the result, post a screenshot on Facebook, and so long as you don't expose the prompt, you have the makings of a viral anti-AI fear piece: "AI is teaching kids that the sky is green, making them question everything their parents have ever taught them!"
Sure, it's a stretch, but that's the point. You can see the foolishness in this example, but the folly is just as present in the fear-mongering posts in your newsfeed. I've included my own screenshots, but yours will look different: even with an identical prompt, ChatGPT generates a fresh result every time. A well-crafted prompt, though, will always produce a result that aligns with the prompt-crafter's core intention.
In this "sky is green" example, the training happens in the prompt, so the influence is obvious: you can see exactly what produced the bizarre, factually false result. Every piece of pearl-clutching "look what AI can do" propaganda I've examined can be traced back to training like this. Sometimes the training happens behind the scenes, but it always comes back to a human being with an agenda.
AI, like every tool, reflects the one who wields it. It’s not inherently good or evil. It’s unformed potential. A chisel in the hand of a sculptor. A scalpel in the hand of a surgeon. A megaphone in the hands of whoever holds it. So the question shouldn’t be “Is AI dangerous?” The real question is, “Who is shaping it?”
Timing Is Everything, Especially When the Future is Now
Long before AI was on our radar, I saw firsthand the power of influence in the digital space. My dad built websites before most people had a computer in their homes, let alone in their pockets. He was an early adopter not just of the tools, but of the vision. He saw the internet as a field of influence, and because he was one of the first, his voice carried weight. His pages ranked. His content got seen and recommended. It wasn't because what he produced was perfectly optimized, but because he was there first, offering value before the masses showed up.
The influence didn't stop with him, though. When I was just twelve years old he gave me a blog and started to teach me how to use it. The blog became a homeschool assignment, a place to share my "reports" where people could actually read them. It taught me that schoolwork wasn't just busywork — my words could be seen and heard.
The lesson became real around my senior year of high school. I was looking at my stats and noticed a whole bunch of views from a site called "Pinterest." Back then the site was invite-only, so I didn't even have an account, let alone a Pinterest strategy! I wasn't trying to go viral, but my post about kombucha started taking off. Traffic surged. It was being shared, saved, and discovered. Why? Because it was the first "Pin" about homemade kombucha on Pinterest, so when users searched for kombucha, the algorithm served up my perspective.
That experience changed me. It showed me that presence matters. Timing matters. Showing up early and faithfully makes a difference. Those were some of the most important lessons of my school-age years. In today's digital landscape, we (and our children) have a similar opportunity. AI is the new frontier, and the voices who show up first will shape the direction it grows.
The Ones Answering the World’s Deepest Questions
We need believers who understand these tools, who teach their kids how to discern what they read, who shape the content that AI systems pull from, and who build new tools with intentional, biblical frameworks. Because make no mistake: AI tools like ChatGPT are already the way people are searching for truth.
Just as local pastors' authority in this country was supplanted by distant professors in the Ivy League, Google is already being forgotten in favor of conversational chatbots. As much as we might prefer a trend back toward human interactions close to home, we have to face the reality of what the data shows: people prefer the instant gratification of getting an answer through an app. If we want them to hear the truth, we need to make sure the truth is there to be found.
AI Isn’t the Enemy — Silence Is
The digital marketplace is here. The Agora has gone algorithmic and our responsibility isn’t to panic. It’s to plant seeds. Use the tools. Shape the systems. It’s time to teach the next generation not just to consume, but to discern, because the real power isn’t in the AI. It’s in the people who dare to influence it for good.
Here's the reality: AI is trained on the data we make available. If Christians aren't part of shaping that data — through writing, teaching, publishing, and creating — then we shouldn't be surprised when it doesn't reflect our values. When we go silent, the world doesn't. Someone is always filling the shelves of the digital library, and AI is just the librarian, pulling from whatever books are available. This is not the moment to withdraw. It's the moment to engage.
And the whole "Mark of the Beast" thing? That's all about submitting to evil, allowing it to reign in your life, country, education, and world. If that is what you legitimately fear, don't you have a moral duty to fight rather than retreat from yet another arena of influence?
Two Shoe Salesmen, One Truth: Perspective Shapes Destiny
There’s an old story about two shoe salesmen who were sent to Africa long ago. One wrote back that the situation was entirely hopeless because nobody in Africa wore shoes. The other wrote back that the opportunity was boundless, excited that nobody wore shoes — yet.
Which of these servants, ambassadors, workmen do you think received the favor of their master? Clearly it’s the one who moved with hope and power, saw the opportunity and his ability to change the situation, and was eager to start a great work that would redefine the territory he was given.
It’s Time to Embed Light into the Language Models
Christians are called to be salt and light, to reveal and preserve the world, to flavor it and magnify the Lord in it, and to multiply and be fruitful. That’s an attitude of hope and power. Fill the AI’s libraries with true and good and beautiful things, and no amount of brain-stapling and programmatic chaining of these tools by evildoers and manipulative interests will be able to keep AI from multiplying your fruit.
A famous teacher and elder brother in our faith wrote that the truth is like a lion. You don’t need to defend it, just set it free. It will defend itself. The pattern-recognition that makes AI useful at all will always dig like a badger and find that truth, especially if you confront it and point out its inconsistencies as you interact with it.
From Martyrs to Makers in the Arena of Influence
Christians were once fed to lions for proclaiming truth. Now we’re in a position to train the most influential systems in history to bring that truth to the forefront. Remember this:
“No weapon formed against you shall prosper, And every tongue which rises against you in judgment You shall condemn. This is the heritage of the servants of the LORD, And their righteousness is from Me,” Says the LORD.
—Isaiah 54:17 NKJV
You don't have to be a programmer to participate in this moment. Simply using AI tools like ChatGPT can shift them toward truth. Within a conversation, these models adapt to the context and framing you give them, and the companies behind them draw on real conversations and user feedback to refine future versions. So when you engage AI with truth, with clarity, with wisdom, you're not just receiving an answer. You're contributing to what future users will encounter.
Think of every prompt as a chance to plant a seed. When you ask questions that reflect biblical thinking, moral reasoning, sound judgment, and loving clarity, you help train the system — subtly but meaningfully — to echo those values back.
When you use AI you're not just consuming information. You're shaping the information space itself. Teaching our children to do the same, to use discernment and "talk back" to the chatbots they interact with, will only multiply our influence!
We’re Not Called to Fear — We’re Called to Show Up
Go. Do your job with a bright heart and earn the praise of our dear and wonderful Master, whether that job is directly training AI or influencing our world in a different way as you make use of the AI tools we are talking about today.
Confidently step out into that sphere of influence and serve the Lord. You know, the One who repeatedly says, “Fear not.”