Chinese researchers have tested connecting a military AI to ChatGPT-like systems to teach the machine how to face human enemies. Photo: EPA-EFE

China’s military lab AI connects to commercial large language models for the first time to learn more about humans

  • Chinese researchers are using commercial ChatGPT-like systems to teach military AI how to face unpredictable human enemies
  • Baidu says it has no affiliation with the institution in question, and there is no direct link between the two
Chinese scientists are teaching an experimental military artificial intelligence (AI) more about facing unpredictable human enemies with the help of ChatGPT-like technologies.
According to scientists involved in the project, a research laboratory within the People’s Liberation Army’s (PLA) Strategic Support Force, which oversees the Chinese military’s space, cyber, intelligence and electronic warfare operations, has tested its AI system on Baidu’s Ernie and iFlyTek’s Spark, large language models similar to ChatGPT.

Baidu said on Saturday that it “has no affiliation or other partnership with the academic institution in question”.

“We have no knowledge of the research project, and if our large language model was used, it would have been the version publicly available online,” the company added.

The military AI can convert vast amounts of sensor data and information reported by frontline units into descriptive language or images and relay them to the commercial models. After the models confirm they have understood, the military AI automatically generates prompts for deeper exchanges on various tasks, such as combat simulations. The entire process runs without human involvement.
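The paper does not disclose implementation details, but the closed loop it describes — encode frontline reports as natural language, query a commercial model, then auto-generate follow-up prompts from its replies — can be sketched in broad strokes. The function names, report fields and mock model below are all illustrative assumptions, not the researchers' actual system:

```python
# Hypothetical sketch of the closed-loop pipeline described in the paper.
# A military-side system encodes sensor reports as descriptive language,
# sends them to a commercial LLM, and auto-generates deeper follow-up
# prompts from each reply, with no human in the loop.

def encode_report(report: dict) -> str:
    """Turn structured frontline sensor data into a descriptive prompt."""
    return (f"Unit {report['unit']} observed {report['contact']} "
            f"at grid {report['grid']} moving {report['heading']}.")

def mock_llm(prompt: str) -> str:
    """Stand-in for a commercial model such as Ernie or Spark."""
    if "observed" in prompt:
        return "Understood. Hostile armour is likely screening a flank advance."
    return "Expect a push on the eastern approach before reserves commit."

def dialogue_loop(report: dict, rounds: int = 2) -> list[str]:
    """Run several automated prompt/response rounds without a human."""
    history = []
    prompt = encode_report(report)
    for _ in range(rounds):
        reply = mock_llm(prompt)
        history.append(reply)
        # Auto-generate the next, deeper prompt from the model's reply.
        prompt = f"Given that assessment ({reply}), what is the enemy's next move?"
    return history

result = dialogue_loop({"unit": "A-3", "contact": "an armoured column",
                        "grid": "38T-1142", "heading": "north"})
```

A real deployment would replace `mock_llm` with an API call to the commercial model, and the paper's multi-modal variant would pass generated map images rather than text alone.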

Meanwhile, one computer scientist has voiced concerns over the move, saying that unless it is handled carefully, it could lead to a situation similar to that depicted in the Terminator films.

How does China’s AI stack up against ChatGPT?

The project was detailed in a peer-reviewed paper published in December 2023 in the Chinese academic journal Command Control & Simulation. In the paper, project scientist Sun Yifeng and his team from the PLA’s Information Engineering University wrote that both humans and machines could benefit from the project.

“The simulation results assist human decision-making ... and can be used to refine the machine’s combat knowledge reserve and further improve the machine’s combat cognition level,” they wrote.

This is the first time the Chinese military has publicly confirmed its use of commercial large language models. For security reasons, military information facilities are generally not directly connected to civilian networks. Sun’s team did not give details in the paper of the link between the two systems, but stressed that this work was preliminary and for research purposes.

Sun and his colleagues said their goal was to make military AI more “humanlike”, better understanding the intentions of commanders at all levels and more adept at communicating with humans.

Most existing military AI is based on traditional war gaming systems. Although their abilities have progressed rapidly, they often feel more like machines than living beings to users.

And when facing cunning and unpredictable human enemies, machines can be deceived. However, commercial large language models, which have studied almost all aspects of society, including literary works, news reports and historical documents, may help military AI gain a deeper understanding of people.

While the researchers have said their work can benefit both machines and humans, one computer scientist not linked to the project has warned caution is needed to avoid a Terminator-like situation. Photo: Paramount Pictures

In the paper, Sun’s team discussed one of their experiments that simulated a US military invasion of Libya in 2011. The military AI provided information about the weapons and deployment of both armies to the large language models. After several rounds of dialogue, the models successfully predicted the next move of the US military.

Sun’s team claimed that such predictions could compensate for human weaknesses. “As the highest form of life, humans are not perfect in cognition and often have persistent beliefs, also known as biases,” Sun’s team wrote in the paper. “This can lead to situations of overestimating or underestimating threats on the battlefield. Machine-assisted human situational awareness has become an important development direction.”

Sun’s team also said there were still some issues in the communication between military and commercial models, as the latter were not specifically developed for warfare. For instance, Ernie’s forecasts are sometimes vague, giving only a broad outline of attack strategies without the specifics that military commanders need.

In response, the team experimented with multi-modal communication methods. One such approach involved military AI creating a detailed military map, which was then given to iFlyTek’s Spark for deeper analysis. Researchers found that this illustrative approach significantly improved the performance of the large language models, enabling them to produce analysis reports and predictions that met practical application requirements.

Sun acknowledged in the paper that what his team disclosed was only the tip of the iceberg of this ambitious project. Some important experiments, such as how military and commercial models can learn from past failures and mutually acquire new knowledge and skills, were not detailed in the paper.


China is not the only country conducting such research. Many generals from various US military branches have publicly expressed interest in ChatGPT and similar technologies, and have tasked military research institutions and defence contractors with exploring possible applications of generative AI in US military operations, such as intelligence analysis, psychological warfare, drone control and communication code decryption.

But a Beijing-based computer scientist warned that while the military application of AI was inevitable, it warranted extreme caution.

The present generation of large language models is more powerful and sophisticated than ever, posing potential risks if given unrestricted access to military networks and confidential equipment knowledge, said the scientist, who requested not to be named due to the sensitivity of the issue.

“We must tread carefully. Otherwise, the scenario depicted in the Terminator movies may really come true,” he said.
