In response to the Trump administration’s call for a national “AI Action Plan,” Google has followed OpenAI’s lead by releasing a policy proposal of its own. The tech giant advocates for lenient copyright restrictions on AI training, as well as “balanced” export controls that protect national security without hampering U.S. exports and global business operations.

According to Google, the United States needs to adopt an active international economic policy that promotes American values and supports AI innovation globally. The company believes AI policymaking has historically focused too heavily on risks while neglecting the costs that misguided regulation can impose on innovation, national competitiveness, and scientific leadership. That mindset is beginning to shift under the new administration, Google notes.

One of the more contentious aspects of Google’s proposal is its stance on the use of IP-protected material.

The company argues that exceptions for “fair use and text-and-data mining” are essential for AI development and related scientific innovation. Like OpenAI, Google wants to codify the right to train on publicly available data, including copyrighted data, with minimal restrictions.

Google contends that these exceptions enable the use of copyrighted, publicly available material for AI training without significantly affecting rightsholders. This approach also avoids the often lengthy and unpredictable negotiations with data holders that can occur during model development or scientific experimentation.

Google, which has reportedly trained a number of models on public, copyrighted data, is currently fighting lawsuits from data owners who accuse the company of failing to notify and compensate them before doing so. Whether the fair use doctrine shields AI developers from IP litigation remains unresolved in U.S. courts.

Google’s proposal also takes issue with certain export controls imposed by the Biden administration, which the company believes “may undermine economic competitiveness goals” by imposing disproportionate burdens on U.S. cloud service providers. This stance contrasts with statements from competitors like Microsoft, which has expressed confidence in its ability to comply with the rules.

The export rules in question aim to limit the availability of advanced AI chips in certain countries, but carve out exemptions for trusted businesses seeking large clusters of chips.

In other parts of its proposal, Google advocates for sustained investment in domestic R&D, pushing back against recent federal efforts to reduce spending and eliminate grant awards. The company suggests that the government release datasets that could be useful for commercial AI training and allocate funding to “early-market R&D,” while ensuring that computing resources and models are widely available to scientists and institutions.

Google also urges the government to pass federal legislation on AI, including a comprehensive privacy and security framework, citing the chaotic regulatory environment created by the U.S.’ patchwork of state AI laws. With over 780 pending AI bills in the U.S. as of early 2025, the need for a unified framework is becoming increasingly pressing.

The company also cautions the U.S. government against imposing what it considers overly burdensome obligations on AI systems, such as holding developers liable for how their models are used. In many cases, Google argues, a model’s developer has limited visibility into, or control over, how the model is being used, and so should not be held responsible for misuse.

Historically, Google has opposed laws like California’s defeated SB 1047, which outlined precautions AI developers should take before releasing a model and specified cases in which developers might be held liable for model-induced harms.

According to Google, even when a developer provides a model directly to deployers, the deployers are often best positioned to understand the risks of downstream uses, implement effective risk management, and conduct post-market monitoring and logging.

Google’s proposal also criticizes disclosure requirements like those being considered by the EU as “overly broad.” It argues that the U.S. government should oppose transparency rules that require divulging trade secrets, allow competitors to duplicate products, or compromise national security by handing adversaries a roadmap for circumventing protections or jailbreaking models.

A growing number of countries and states have passed laws requiring AI developers to disclose more about how their systems work. California’s AB 2013, for example, mandates that companies developing AI systems publish a high-level summary of the datasets they used to train those systems. In the EU, to comply with the AI Act, companies will have to provide detailed instructions on their models’ operation, limitations, and risks.

