The Risks of Artificial General Intelligence
Former Google CEO Eric Schmidt and Scale AI founder Alexandr Wang have co-authored a paper titled "Superintelligence Strategy," which cautions against the US government initiating a Manhattan Project for Artificial General Intelligence (AGI). They argue that such a project could lead to a potentially uncontrollable situation, with adversaries retaliating or sabotaging each other's efforts as countries compete to develop the most advanced AI capabilities. Instead, the authors suggest that the US focus on methods, such as cyberattacks, for disabling threatening AI projects.

The Dangers of an AI Arms Race
Schmidt and Wang are strong advocates for the potential benefits of AI in areas like drug development and workplace efficiency. However, they are concerned that governments are prioritizing the development of AI for defense purposes, leading to a potentially catastrophic arms race. They believe that international agreements, similar to those governing nuclear weapons, should be established to regulate the development of AI. The authors are worried that the pursuit of AI-powered killing machines will ultimately lead to devastating consequences, similar to the dangers posed by nuclear weapons.

The Role of Silicon Valley in AI Development
Interestingly, both Schmidt and Wang are involved in developing AI products for the defense sector. Schmidt’s company, White Stork, is working on autonomous drone technologies, while Wang’s Scale AI has signed a contract with the Department of Defense to create AI "agents" for military planning and operations. This highlights the complex relationships between Silicon Valley, the military, and the development of AI. As companies like Anduril, founded by Palmer Luckey, supply AI-powered drones to countries like Ukraine, the lines between defense and offense become increasingly blurred.

The Military-Industrial Complex
The military-industrial complex has a vested interest in promoting kinetic warfare, even when it may not be morally justifiable. Other countries have their own military-industrial complexes, and the US feels pressured to maintain its own to stay competitive. However, this can lead to a cycle of violence and harm to innocent people. As Anduril's recent ad campaign suggests, working for the military-industrial complex is being rebranded as a countercultural movement.

The Risks of AI-Assisted Decision Making
Schmidt and Wang argue that humans should always be involved in AI-assisted decision-making processes. However, recent reports have shown that the Israeli military is already relying on faulty AI programs to make life-or-death decisions. The use of drones and AI-powered warfare raises concerns about the potential for mistakes and unintended consequences. Image recognition AI, in particular, is notorious for its inaccuracies, and the development of killer drones could lead to imprecise targeting and harm to civilians.

The Assumptions and Limitations of AI
The paper by Schmidt and Wang assumes that AI will soon become "superintelligent," capable of performing tasks as well as or better than humans. However, this assumption may be overstated, as current AI models are still prone to errors and unpredictable behavior. Companies like OpenAI, led by Sam Altman, have been criticized for making exaggerated claims about the risks of AI, which some see as an attempt to influence policy and gain power.

The Future of AI Development
As President Trump drops Biden-era guidelines around AI safety and pushes for the US to become a dominant force in AI, Schmidt and Wang's warnings may fall on deaf ears. The Congressional commission's proposal for a Manhattan Project for AI, which the authors warn against, may gain traction. If countries like China retaliate by degrading models or attacking physical infrastructure, the consequences could be severe. In this context, sabotaging AI projects as a defensive measure may become a viable option. Ultimately, it remains unclear how the world can come to an agreement to stop the development of these potentially destructive technologies.
