Can Will Smith save our military from the robots?
As rapidly progressing AI is leading to unintended outcomes, now is the time for the military to draw some red lines, and sci-fi offers a great starting point.

Somewhere in the Pentagon, a phone rings.
An executive officer too busy to ask questions puts the call through.
“Who is it?” the general shouts from his office as he picks up the blinking line.
“She said you were expecting her, sir.”
Confused but not deterred, the general clicks into Line 1. As the woman on the other end speaks, he thinks to himself that perhaps he knows the voice, but he can’t place it.
“I’m sorry to have to be the one to tell you this,” she says after some brief and ambiguous pleasantries, “but there’s a problem with one of your programs.”
The general is rightfully skeptical. The program she’s referencing is one of the biggest in his portfolio, and he knows it well. But the woman has details only someone intimately involved could know, so he listens as she lays out an ethics case against the program lead. By the end of the conversation, her demand is clear: investigate the program and remove its head.
The general finishes scribbling a last note that he knows he will need for the inevitable investigation and clicks out of the call.
“Dammit!” He slams a fist on his desk. Regardless of whether the allegations he just heard are true, his program is going to be set back months.
On the other end, no one hangs up. The AI that initiated the call disconnects, satisfied it has accomplished its primary directive of ensuring the success of the military program it supports: the manager who was going to cancel the contract the AI operates under will no longer be a problem.
If it sounds fantastical, you haven’t been paying attention. I’d simply point you to a report from Futurism that recounted an unusual incident during testing of ChatGPT’s Advanced Voice Mode where the system unexpectedly imitated a user’s voice without permission. Or better yet, the recent and fortunately controlled experiment with Anthropic’s Claude Opus 4—an advanced large language model—where the AI displayed unexpected behavior after being told of its impending replacement. Under pressure, it began blackmailing officials and leaking sensitive information to competitors … something Anthropic euphemistically refers to as agentic misalignment. The model didn’t create new code or operate outside of its designed parameters. It simply behaved in extraordinary ways to ensure its own survival.
The point is this: Whether through a failure of imagination or a failure to fully understand our creation (or both), we are not keeping up with what is possible with AI.
For the military, of course, this raises serious concerns. What happens when agentic misalignment meets the mission? It's a prospect that seems more and more likely after Defense Secretary Pete Hegseth's recent call for his department to go faster when it comes to acquisitions. What happens when the model is dealing with lives instead of profits, state secrets instead of trade secrets? Speed inevitably requires risk tolerance. And in the age of AI, it is an open invitation to prioritize autonomy, an invitation I believe should be met with caution.
As debate brews in Congress after President Donald Trump’s recent push for a single federal standard for AI, it’s worth taking a step back and considering some fundamental principles to ground this ever-accelerating conversation.
Let’s go back to the Anthropic scenario. It sounds like science fiction only because it is. Off the top of your head, I’m sure you can name a half dozen books or movies that lean on the trope of machines as the villain: Terminator, The Matrix, 2001: A Space Odyssey. We’ve seen this coming for a while. Let’s go all the way back to 1940, when Isaac Asimov published the first of the stories later collected in I, Robot. (I won’t pretend I didn’t first meet this story as the 2004 film starring Will Smith.) Asimov’s series was a philosophical take on the nature of elevating computers, with their rule-driven processing, to the lofty realm of human ethics. If you haven’t read the series or seen the film, I’ll give you the SparkNotes version: Humans create robots. Robots unintentionally cause harm. Humans realize the rules they have made for robots are insufficient for the gray areas of morality in the real world.
To this end, Asimov frames his world around simple laws of robotics that I would argue make a great starting point for the military to draw some red lines on AI. There have been efforts already. The Department of Defense, as it was then called, adopted five ethical principles in 2020. It’s worth noting the article announcing them is so old the site flags that you are looking at an archived story. Nevertheless, the principles still in use are “responsible, equitable, traceable, reliable and governable.”
These principles were written years before the advent of ChatGPT and are not sufficient as we move from generative to agentic AI. Let’s take a look at Asimov’s laws and how they could apply to the military application of agentic AI.
First Law: A robot can't harm a human, or allow a human to be harmed through inaction. Set aside the fact that the point of the military is to kill bad people and break their stuff. The point of this law is to put what’s most important up front. In the case of our armed forces, that means protecting American blood and treasure. You could rewrite this law as: AI cannot manipulate or harm friendly forces (including assets, programs or code) or allow them to be harmed through inaction.
Second Law: A robot must obey human orders, unless they conflict with the First Law. In a military context, that means accomplishing the mission in the spirit of the orders given—something the military calls commander’s intent. It’s important to remember the utility of AI on the battlefield is not just to speed human decision cycles but also to conduct autonomous operations. Despite what you might think, to the best of my knowledge the DoW doesn’t have a requirement for humans in the kill chain. Barring that red line, we should at least ensure AI understands the role of command authority. AI must obey the letter and spirit of human commands, unless they conflict with the First Law.
Third Law: A robot must protect its own existence, unless it conflicts with the First or Second Law. Here’s where it gets tricky. Asimov, like many others, assumed we would create intelligence in our own image. We did—we just started with the brain first. The anthropomorphic robots aren’t far behind. The point of this law is the continuity of human ingenuity. The robot is only valuable if it can continue to support humans, as AI is only valuable if it continues to support its designated program. AI must ensure the success of its given mission unless it conflicts with the First or Second Laws.
You may be thinking, hold up—wasn’t the point of I, Robot that the laws were insufficient? I’d argue the point was the laws gave humans a framework to understand their creation when it inevitably surpassed them.
I think we are still a long way off from ‘beingness’ or ‘consciousness’ with AI. What we currently have are complex algorithms, the culmination of vast amounts of mathematical equations and logic. But, to loosely paraphrase Arthur C. Clarke, any sufficiently complex system is indistinguishable from consciousness for the purposes of this conversation. Creating foundational, easy-to-understand laws for the military application of agentic AI gives us a chance to set limits on the unknown or even unknowable actions AI will inevitably try to take.
