Will AI Weapon Systems Be Programmed with “Judeo-Christian” Values?

Who would you trust more in a high-stress, high-stakes military operation? A trained human soldier with the limitations of hunger, thirst, lack of sleep, and emotion—or a soulless, autonomous artificial intelligence system acting according to its programming? That’s a question militaries around the world—including the US military—are already facing as AI quickly advances into every industry. But the big question is this: who gets to do the programming?

So what exactly is autonomous AI when it comes to the military? Well, according to a Christian Post article, the Congressional Research Service defines these systems as:

a special class of weapon systems that use sensor suites and computer algorithms to independently identify a target and employ an onboard weapon system to engage and destroy the target without manual human control of the system.

In other words, it’s a lethal system that gets to make its “own” decisions outside of human control, all based on its fallible human programming. Now, “the U.S. military does not have weapons completely controlled by artificial intelligence in its inventory,” but it’s likely such systems are coming soon—elsewhere if not here. So that gets us back to our question—who gets to do the programming?

No AI code is worldview free. As we’ve seen with AI like ChatGPT, the worldview of those doing the programming influences the end result. It’s not neutral! (Nothing is!) And the same rules apply to lethal weapon systems. Whoever is developing the algorithm is incorporating their worldview—their beliefs—into the programming, which then determines how the system behaves and what decisions it makes. So worldview matters—hugely!

But Lt. Gen. Richard G. Moore Jr., deputy chief of staff for plans and programs in the US Air Force, says that these US systems will “likely be programmed with a ‘Judeo-Christian’ value system.” He states,

What will the adversary do? It depends [on] who plays by the rules of warfare and who doesn’t. . . . There are societies that have a very different foundation [than] ours. . . . Our society is a Judeo-Christian society, and we have a moral compass. Not everybody does. And there are those that are willing to go for the ends, regardless of what means have to be employed. And we’ll have to be ready for that. (emphasis added)

Yes, he argues that our AI will be programmed with “Judeo-Christian values” because that’s what American society is founded on; that’s our “moral compass.” But I would argue that, while that was true for America in generations past, America of 2023 is certainly not a “Judeo-Christian society”—we don’t even have a “moral compass” as a society anymore! Really, America is now a post-Christian nation, absolutely unmoored from its once-Christianized roots. And our “moral compass” can be summarized this way: “Everyone does what’s right in their own eyes” (paraphrase of Judges 21:25). (And he’s right about one thing: what about other countries that don’t have even a semblance of a Judeo-Christian ethic guiding their military decision-making?)

So what does this mean for lethal AI systems? Well, based on what I see in the culture (and the secular worldview pervading the military), I think it’s very likely they won’t be programming systems with a “Judeo-Christian” worldview—at least not one you or I as Bible-believing Christians would recognize!

We need to pray for our military and the massive decisions that will be coming as these types of weapons begin to be coded and produced. And we need to remember, no matter what anyone tells us, there is no neutrality in anything. You are either for Christ or against him, building on the rock or on the sand, gathering or scattering—there’s no other option!

Thanks for stopping by and thanks for praying,

This item was written with the assistance of AiG’s research team.
