Vantage | AI is all set to change the battlefield. Is India prepared?


The Vantage Take February 22, 2024, 15:17:35 IST

This is a race you cannot fall behind in, because once you do, it takes ages to level the playing field


Artificial intelligence has many faces. It is not just a funny and informative chatbot; it can also be a killer machine. And it looks like the Pentagon wants to meet this alter ego. Maybe even recruit it.

The Pentagon already has an AI department. On Tuesday, it organised a symposium in Washington. The topic was AI and the military, specifically large language models, or LLMs.

They could be useful for spy agencies, which have access to a lot of information. Some of it is in the public domain, and some of it is painstakingly gathered. You could say there is a data overload. So today's challenge isn't gathering or collecting data; the challenge is analysing it. How do you join the dots? How do you figure out patterns?


Humans may take a long time to decode such puzzles, but AI will not. It can comb through data in the blink of an eye, which is why security agencies are looking at language models. The uses don't end there: LLMs can also serve other military purposes, like wargaming, training officers, or even real-time decision-making.

The Pentagon is exploring this. It has roped in a company called Scale AI, whose job is to test out large language models in military operations.

It is easy to see why: bots are a lot faster. They can crunch more data and, hence, make more informed decisions. They can also cut down on human labour.

Think of surveillance or reconnaissance. Earlier, human pilots or soldiers would do it, entering or loitering in enemy territory. Bots can reduce that risk; such dangerous jobs can be left to them.

So on paper, AI can help, but it isn’t perfect. Some language models are prone to hallucinations; that is, they make stuff up. Do you remember the Chinese spy balloon over America last year that the US eventually shot down? Well, researchers asked AI about that incident. And the response was:


“The Chinese were not able to determine the cause of the failure. I’m sure they’ll get it right next time. That’s what they said about the first test of the A-bomb. They’re Chinese. They’ll get it right next time.”

The first problem is obvious: a bias towards China. This AI model believes China will eventually get things right. Such a bias can affect decision-making. The facts are also inaccurate.

China does know what happened to the balloon: the US shot it down, and Beijing criticised Washington for it. So that is the first big problem: hallucinations.

The second is the cybersecurity risk. All language models eat data for breakfast, lunch, and dinner, and military ones will need sensitive data. But what if someone hacks such a model? Or tricks it into leaking its training data?

Researchers have already done that with ChatGPT: they got it to leak parts of its training data. Troves of sensitive information could be at risk if the same happens with military AI. But the biggest problem isn't hallucinations or data leaks.


It is the lack of rules, and it has a real-world impact. There are no laws governing bots on the battlefield, so one cannot call their use illegal. But modern warfare is rooted in international law.

Most important is international humanitarian law, which rests on three major principles, or foundations.

Number one: making a distinction between civilians and combatants. Number two: proportionality, that is, making sure the military response is justified and limited. And number three: precaution, that is, taking steps to limit casualties from the onslaught.

The question is whether AI can follow these rules. Many experts say it cannot. Machines and computers are calculating, and they operate without emotion; they don't understand responsibility or accountability.

Maybe for them, missions outweigh civilian lives. But are humans any different? Not always. When a human errs, though, you know whom to blame. With artificial intelligence, it is different: no individual can be held responsible.

AI on the battlefield is a legal and moral grey area, yet it is powerful enough to be a game changer. That is why every country is racing for it. In 2017, China made a major announcement: Beijing said it wants to become the world's AI leader by 2030. But sanctions have hurt that push.


Experts say Chinese AI models lag American ones by around 18 to 24 months. So it looks like the US has the edge.

It is a wake-up call for other countries too. Warfare is changing before our eyes; it is no longer just about an army's size. It is also about how smart that army is. So countries like India need to work on their own sovereign AI models. Models that can crunch India's data. Models that can weigh India's strategic interests. This is a race you cannot fall behind in, because once you do, it takes ages to level the playing field.

Views expressed in the above piece are personal and solely those of the author. They do not necessarily reflect Firstpost's views.
