China’s military lab AI connects to commercial large language models for the first time to learn more about humans
- Chinese researchers are using commercial ChatGPT-like systems to teach military AI how to face unpredictable human enemies
- Baidu says it has no affiliation with the institution in question, and there is no direct link between the two
Baidu said on Saturday that it “has no affiliation or other partnership with the academic institution in question”.
“We have no knowledge of the research project, and if our large language model was used, it would have been the version publicly available online,” the company added.
The military AI can convert vast amounts of sensor data and information reported by frontline units into descriptive language or images and relay them to the commercial models. Once the models confirm they have understood, the military AI automatically generates prompts for deeper exchanges on tasks such as combat simulation. The entire process runs without any human involvement.
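The paper does not disclose how the two systems are actually linked, but the closed loop described above can be sketched in outline. Everything below, including the `chat` interface and the prompt wording, is an assumption for illustration only, not the researchers' implementation.

```python
# Hypothetical sketch of the automated exchange described in the article:
# sensor data is rendered as descriptive language, relayed to a commercial
# LLM, and follow-up prompts are generated with no human in the loop.
# The chat callable is a stand-in for whatever model interface was used.

def describe_sensor_data(sensor_data: dict) -> str:
    """Convert raw sensor readings into a natural-language summary."""
    return "; ".join(f"{k}: {v}" for k, v in sensor_data.items())

def run_exchange(chat, sensor_data: dict, max_rounds: int = 3) -> list:
    """Relay a battlefield description to a commercial LLM, confirm
    understanding, then iterate auto-generated follow-up prompts."""
    transcript = []
    description = describe_sensor_data(sensor_data)
    # First message: the situation report, plus a confirmation request.
    transcript.append(chat(f"Situation report: {description}. Confirm understanding."))
    for round_no in range(max_rounds):
        # The military AI generates each subsequent prompt itself.
        prompt = f"Round {round_no + 1}: predict the opposing force's next move."
        transcript.append(chat(prompt))
    return transcript
```

In this sketch the number of dialogue rounds is fixed, whereas the paper suggests the exchange continues until the simulation task is complete.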
The project was detailed in a peer-reviewed paper published in December 2023 in the Chinese academic journal Command Control & Simulation. In the paper, project scientist Sun Yifeng and his team from the PLA's Information Engineering University wrote that both humans and machines could benefit from the project.
“The simulation results assist human decision-making ... and can be used to refine the machine’s combat knowledge reserve and further improve the machine’s combat cognition level,” they wrote.
This is the first time the Chinese military has publicly confirmed its use of commercial large language models. For security reasons, military information facilities are generally not directly connected to civilian networks. Sun’s team did not give details in the paper of the link between the two systems, but stressed that this work was preliminary and for research purposes.
Sun and his colleagues said their goal was to make military AI more “humanlike”, better understanding the intentions of commanders at all levels and more adept at communicating with humans.
When facing cunning and unpredictable human enemies, machines can be deceived. Commercial large language models, however, have studied almost every aspect of society, including literary works, news reports and historical documents, and may therefore help military AI gain a deeper understanding of people.
In the paper, Sun’s team discussed one of their experiments that simulated a US military invasion of Libya in 2011. The military AI provided information about the weapons and deployment of both armies to the large language models. After several rounds of dialogue, the models successfully predicted the next move of the US military.
Sun’s team claimed that such predictions could compensate for human weaknesses. “As the highest form of life, humans are not perfect in cognition and often have persistent beliefs, also known as biases,” Sun’s team wrote in the paper. “This can lead to situations of overestimating or underestimating threats on the battlefield. Machine-assisted human situational awareness has become an important development direction.”
To improve machine-assisted situational awareness, the team experimented with multimodal communication methods. One approach involved the military AI drawing a detailed military map, which was then handed to iFlyTek's Spark model for deeper analysis. The researchers found that this illustrative approach significantly improved the performance of the large language models, enabling them to produce analysis reports and predictions that met practical application requirements.
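The idea of rendering structured deployment data as an image before passing it to a vision-capable model can be illustrated with a toy sketch. The `Unit` type, the ASCII grid, and the coordinate scheme here are all invented for illustration; the paper does not describe how its maps were actually generated or transmitted.

```python
# Hypothetical illustration of the map-drawing step: structured unit
# positions are rendered into a picture (here, a crude ASCII grid as a
# stand-in for a real map image) that a multimodal model could analyse.

from dataclasses import dataclass

@dataclass
class Unit:
    name: str
    x: float  # normalised easting, 0.0 to 1.0
    y: float  # normalised northing, 0.0 to 1.0

def render_map(units, width: int = 40, height: int = 12) -> str:
    """Plot each unit's first initial on a width-by-height character grid."""
    grid = [["." for _ in range(width)] for _ in range(height)]
    for u in units:
        col = min(int(u.x * (width - 1)), width - 1)
        row = min(int(u.y * (height - 1)), height - 1)
        grid[row][col] = u.name[0]
    return "\n".join("".join(row) for row in grid)
```

The rendered map would then be sent to the commercial model alongside an analysis prompt, in place of (or in addition to) a purely textual description.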
Sun acknowledged in the paper that what his team disclosed was only the tip of the iceberg of this ambitious project. Some important experiments, such as how military and commercial models can learn from past failures and mutually acquire new knowledge and skills, were not detailed in the paper.
But a Beijing-based computer scientist warned that while the military application of AI was inevitable, it warranted extreme caution.
The present generation of large language models is more powerful and sophisticated than ever, posing potential risks if given unrestricted access to military networks and confidential equipment knowledge, said the scientist, who requested not to be named due to the sensitivity of the issue.
“We must tread carefully. Otherwise, the scenario depicted in the Terminator movies may really come true,” he said.