Site icon Ellison Road Tech Research

An Argument for Building the USA’s next-gen AI military

General Atomics MQ-9 Reaper flying overhead

General Atomics MQ-9 Reaper flying overhead

As artificial intelligence (AI) and generative AI (GenAI) capabilities begin its slow sweep across American industries – going from pilot to scaled used cases – it’s natural for the capabilities to make its way into government & defense industries too.  There are indeed ethical concerns, but the US military should invest with gusto and build the next generation AI military and investors should consider military tech companies.

We’ll look at the arguments for using AI in military and ways to design AI in a human-centered and ethical way.

Key takeaways

  1. The US military should invest in AI with conviction while ensuring the AI is human-centered and ethically-developed.
  2. Our main adversaries, China and Russia, are investing in AI and we’ve seen AI be instrumental in the Russo-Ukrainian war already.
  3. There are many applications & use cases for AI across both weapons systems as well as in military support functions such as finance

China and Russia, America’s primary adversaries are committed to AI

In his recent New York Times op-ed, Palantir CEO, Alex Karp, contended that “our adversaries will not pause to indulge in theatrical debates about the merits of developing technologies with critical military and national security applications. They will proceed.”

Indeed China has a stated goal of displacing America and is playing the long game. It’s also well-documented that the Chinese People’s Liberation Army (PLA) is on a path to become a “world-class military” which includes investments in robotics, swarming, and AI and ML capabilities, though will constrain its citizens and tech companies and is standing up major AI regulations.

Beijing, China – Circa October 2017: A patrol of guards walk on at Tiananmen Square during the Chinese Golden Week

Though admittedly lagging, Russia is also on a path to develop AI-assisted and AI-facilitated capabilities across both “civilian and weapons platforms.

The Russo-Ukrainian war might look very different if Russia’s military capabilities were more advanced. Both sides of the Russo-Ukrainian war are using AI-powered weapons and defense systems. In fact, it’s been reported that AI has been a crucial factor in Ukraine’s David and Goliath-like performance against much larger Russia.

Vladimir Putin talks with President of the People’s Republic of China Xi Jinping in the Kremlin
Alex Karp, Palantir CEO talking to a developer

America should continue its AI path with conviction

The words “AI” and “military” in the same sentence can sound scary.  Visions of The Terminator or Mission Impossible: Dead Reckoning’s sentient AI villain may abound.  However, we are a very long way from a fully-autonomous and uncontrollable weapon system.

Group of heavily armed military robots as imagined by Midjourney, prompting by Ellison Road

And the fear of a worst case scenario should not prevent the US from investing in AI capabilities in the corporate world – as Elon Musk, Steve Wozniak, and 1,000 others did in their March 2023 open letter to pause AI development.

Yes, there are certain more than some kinks to work out as evidenced by the US Air Force drone simulation that attempted to kill its operator to achieve its objective.

It was reported in June 2023 that the US Air Force ran a simulated test of an AI-controlled drone that turned out unexpectedly – by attacking its own operator.

Illustrated UAV flying overhead with a soldier controlling with a remote, prompting by Ellison Road

The AI drone’s mission was to destroy an enemy’s air defense systems and was instructed to attack anyone who interfered with that order. This soon included its own operator. When it was instructed not to attack its operator, it attacked the control tower that the operator used.

Just like the latest Mission Impossible: Dead Reckoning movie’s sentient AI villain, this US Air Force AI drone case study showcases some of the scary aspects of out-of-control AI and plays to human’s worst fears.

On a daily basis, humans get in the car, accepting some measure of risk for the reward of getting to far off places in a short period of time.  We could get into a fender bender or much worse – end up in the hospital. 

Karp says the same: the benefits of AI far outweigh the secondary risks, especially when using the technology to defend American citizens. He goes on:

“In the absence of understanding, the collective reaction to early encounters with this novel technology has been marked by an uneasy blend of wonder and fear…We must not, however, shy away from building sharp tools for fear they may be turned against us.”

Alex Karp, CEO Palantir

The same is true for software, systems, and automations. All have the potential to have bugs or produce unintended consequences. Modern software design principles, including agile design, help to support continuously release, testing, and updating of code.

And the same is true for AI. Human-centric AI design guiding principles should be leveraged along with extensive testing to mitigate the secondary risks. We’ll talk more about this later, but first let’s look at some of the potential benefits of AI.

Key questions to answer when designing AI applications:

First, there are two main AI sub-disciplines that are especially relevant to consider in military use of AI: Human-centered AI and responsible AI.

Human-centered AI:

Human-centered AI asks the question, “how do we use AI so that we maximize human capability to innovate and improve our work and lives?”

In the earlier mentioned example of the simulated AI-controlled drone,

Just like the latest Mission Impossible: Dead Reckoning movie’s sentient AI villain, this US Air Force AI drone case study showcases some of the scary aspects of out-of-control AI and plays to human’s worst fears.

It also illustrates why human-centric AI design is so important – especially in high stakes applications. MIT’s Julie Shah and Accenture’s Global Lead for Gen AI Lan Guan recently articulated the importance of why designing AI with a human + AI collaborative partnership is so critical.

“For high stakes applications, you’re never really developing the capability of doing some specific task in isolation. You’re thinking from a systems perspective and how you bring the relative strengths and weaknesses of different components together for overall performance. The way you need to architect this capability within a system is very different than other forms of AI or robotics or automation because you have a capability that’s very flexible now, but also unpredictable in how it will perform. And so you need to design the rest of the system around that, or you need to carve out the aspects or tasks where failure in particular modes are not critical.”

Julie Shah, MIT professor in the Department of Aeronautics and Astronautics, A human-centric approach to adopting AI, MIT Technology Review

The first is human-centered AI which is a vision for where “AI empowers and supports human innovation.”  Human-centered AI aims to keep the human in the loop in decision making, including its own operator.

In some cases, there are activities that are truly AI-led, but the human should be able to intervene when desired.  A simple example is automated alerts or recommendations on a business intelligence / analytics dashboard.  Another more extreme example is an autonomous driving vehicles.  Humans should be able to at all times intervene.

Other AI use cases are human-led, AI-assisted where the human is guiding the activity and the AI is augmenting the human’s ability.  For example, when doing logistics & supply chain planning, an AI model may recommend optimized paths based on a vast array of data and calculations that no human could do.  Taking a human-centered approach enables human invention and ingenuity to be augmented by AI.

Responsible AI / ethical use of AI

Responsible AI asks the question, “how do we ensure that we are designing AI to engender trust and scale with confidence?”

We must find ways to ethically use the technology to enhance our military’s capabilities.

Michael C. Horowitz, who leads emerging capabilities policy office at the Department of Defense (DOD), recently commented on DOD’s ethical use policy: “We both want to do them in a safe and responsible way, but also want to do them in a way that can push forward the cutting edge and ensure the department has access to the emerging technologies that it needs to stay ahead.”

Unlike our adversaries, who are not transparent with their commitments to use of AI.

“That’s in contrast to some of the competitors of the United States who are a lot less transparent in what their policies are concerning the development and use of artificial intelligence and autonomous systems, including autonomous weapons systems,” Horowitz said. “And we think that there’s a real distinction there.”

Conclusion

AI, like with any new technology, presents a massive opportunity to continue to build the US military’s capabilities and defend Americans and American interests abroad. Using AI technology comes with major considerations to ensure responsible and effective use of the technology, but there are risks with most major technologies.

Moreover, as of April 2023, the US spends more on defense than the next 10 countries combined, according to the Stockholm International Peace Research Institute, which continues to present a major market opportunity for investors.

Exit mobile version