Google’s AI Principles Undergo Substantive Changes
Google has made one of the most significant changes to its AI principles since first publishing them in 2018. As spotted by The Washington Post, the search giant has edited the document to remove pledges that it would not "design or deploy" AI tools for use in weapons or surveillance technology.
Changes to the AI Principles
The previous version of the AI principles included a section titled "applications we will not pursue," which is no longer present in the current version of the document. Instead, there’s now a section titled "responsible development and deployment." Google says it will implement "appropriate human oversight, due diligence, and feedback mechanisms to align with user goals, social responsibility, and widely accepted principles of international law and human rights."
Broader Commitment
The new language is a far broader commitment than the specific pledges the company made as recently as the end of last month, when the prior version of its AI principles was still live on its website. On weapons, for instance, the company previously said it would not design AI for use in "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people." As for AI surveillance tools, the company said it would not develop tech that violates "internationally accepted norms."
Response from Google
When asked for comment, a Google spokesperson pointed Engadget to a blog post the company published on Thursday. In it, DeepMind CEO Demis Hassabis and James Manyika, senior vice president of research, labs, technology and society at Google, say AI’s emergence as a "general-purpose technology" necessitated a policy change.
New Commitment
"We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights. And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security," the two wrote. "… Guided by our AI Principles, we will continue to focus on AI research and applications that align with our mission, our scientific focus, and our areas of expertise, and stay consistent with widely accepted principles of international law and human rights — always evaluating specific work by carefully assessing whether the benefits substantially outweigh potential risks."
Background
When Google first published its AI principles in 2018, it did so in the aftermath of Project Maven, a controversial government contract that, had Google decided to renew it, would have seen the company provide AI software to the Department of Defense for analyzing drone footage. Dozens of Google employees quit the company in protest of the contract, with thousands more signing a petition in opposition.
Recent Developments
By 2021, however, Google began pursuing military contracts again, with what was reportedly an aggressive bid for the Pentagon's Joint Warfighting Cloud Capability contract. At the start of this year, The Washington Post reported that Google employees had repeatedly worked with Israel's Defense Ministry to expand the government's use of AI tools.
Conclusion
With this revision, Google has dropped its explicit pledges not to build AI for weapons or surveillance, replacing them with a broader commitment to "responsible development and deployment" overseen by humans and guided by international law and human rights.