More than 600 employees at Google have signed an open letter urging the company to reject a potential agreement with the US Department of Defense that could allow its artificial intelligence systems to be used in classified military operations, raising fresh concerns over the role of AI in warfare and surveillance.
The letter, addressed to Google chief executive Sundar Pichai, warns that AI technologies could be deployed in ways that cause serious harm, including mass surveillance and the development of autonomous weapons systems. It states that employees want artificial intelligence to be used for public benefit rather than in “inhumane or extremely harmful ways,” extending their concerns beyond lethal autonomous weapons to broader civil liberties risks.
Signatories include staff from across Google DeepMind, Google Cloud, and other divisions, as well as more than 20 senior managers and executives. The statement comes as Google negotiates with the Pentagon over the possible use of its Gemini AI model in classified defence environments.
One employee involved in organising the letter said the lack of transparency surrounding classified military applications makes oversight extremely difficult. They warned that without public accountability, AI systems could be used for activities such as profiling individuals or targeting civilians, with no meaningful safeguards in place.
The dispute highlights growing tensions between major technology companies and defence agencies over how advanced AI tools should be governed. Similar concerns have surfaced across the industry, including a high-profile dispute involving AI company Anthropic, which previously challenged Pentagon requests for unrestricted access to its systems. Anthropic’s leadership argued that certain military uses of AI could undermine democratic principles and exceed safe technological limits.
Following that disagreement, US political figures reportedly moved to restrict government use of some AI tools, underscoring the sensitivity of the issue.
Within Google, employees say the company has discussed contractual restrictions that would prevent its AI systems from being used for domestic mass surveillance or for autonomous weapons without human oversight. However, according to staff involved in the discussions, the Pentagon has pushed for broader “all lawful uses” language, arguing that it needs flexibility in operational settings. Employees have raised concerns that such wording could weaken practical safeguards.
The latest letter also echoes earlier internal protests at Google, including a 2018 campaign that led the company to withdraw from Project Maven, a Pentagon programme that used AI to analyse drone footage. That episode sparked a wider debate inside Silicon Valley about the ethical limits of defence-related AI work.
The employees behind the current letter say their position remains clear: Google should not be involved in building warfare technologies. They are calling for a formal policy ensuring that the company and its contractors do not develop systems intended for military use in combat or surveillance operations.
