
AI Action Summit in Paris

Right after the conclusion of the AI Action Summit in Paris, Anthropic’s co-founder and CEO Dario Amodei called the event a “missed opportunity.” In a statement released on Tuesday, he emphasized that “greater focus and urgency is needed on several topics given the pace at which the technology is progressing.”

The Event and its Purpose

The AI company held a developer-focused event in Paris in partnership with French startup Dust, and TechCrunch had the opportunity to interview Amodei on stage. At the event, he explained his line of thought and defended a third path between pure optimism and pure criticism when it comes to AI innovation and governance.

The Need for Greater Focus

“I used to be a neuroscientist, where I basically looked inside real brains for a living. And now, we’re looking inside artificial brains for a living. So we will, over the next few months, have some exciting advances in the area of interpretability — where we’re really starting to understand how the models operate,” Amodei said.

The Pace of Progress

“It’s a race; it’s a race between making the models more powerful, which is incredibly fast for us and incredibly fast for others — you can’t really slow down, right? … Our understanding has to keep up with our ability to build things. I think that’s the only way,” Amodei acknowledged.

Shift in AI Governance Discussion

Since the first AI safety summit at Bletchley Park in the U.K., the tone of the discussion around AI governance has changed significantly, partly due to the current geopolitical landscape.

Emphasis on Opportunity

U.S. Vice President JD Vance said at the AI Action Summit on Tuesday, “I’m here to talk about AI opportunity.” Notably, Amodei is trying to avoid pitting safety against opportunity. In fact, he believes an increased focus on safety is an opportunity.

Avoiding Antagonization

“At the original summit, the U.K. Bletchley Summit, there were a lot of discussions on testing and measurement for various risks. And I don’t think these things slowed down the technology very much at all. If anything, doing this kind of measurement has helped us better understand our models, which in the end, helps us produce better models,” Amodei explained.

Continued Focus on Safety

Every time Amodei puts some emphasis on safety, he also likes to remind everyone that Anthropic is still very much focused on building frontier AI models.

Balancing Opportunity and Safety

Amodei said, “I don’t want to do anything to reduce the promise. We’re providing models every day that people can build on and that are used to do amazing things. And we definitely should not stop doing that.”

Avoiding Annoyance

When people are talking a lot about the risks, Amodei gets somewhat annoyed: “Oh, man, no one’s really done a good job of really laying out how great this technology could be.”

DeepSeek’s Training Costs

When the conversation shifted to Chinese LLM-maker DeepSeek’s recent models, Amodei downplayed the technical achievements and said he felt like the public reaction was “inorganic.”

Public Reaction to DeepSeek

“My reaction was very little. We had seen V3, which is the base model for DeepSeek R1, back in December. And that was an impressive model,” Amodei explained. “The model that was released in December was on this kind of very normal cost reduction curve that we’ve seen in our models and other models.”

Geopolitical Concerns

What was notable, Amodei said, is that the model wasn’t coming out of the “three or four frontier labs” based in the U.S. He listed Google, OpenAI and Anthropic as some of the frontier labs that generally push the envelope with new model releases. “That was a matter of geopolitical concern to me. I never wanted authoritarian governments to dominate this technology,” he said.

DeepSeek’s Training Costs: Misconception

As for DeepSeek’s supposed training costs, Amodei dismissed the idea that training DeepSeek V3 was 100x cheaper than training comparable models in the U.S.: “I think [it] is just not accurate and not based on facts.”

Upcoming Claude Models with Reasoning

While Amodei didn’t announce any new model at Wednesday’s event, he teased some of the company’s upcoming releases — and yes, they include reasoning capabilities.

Claude Models: Advancements

Amodei explained that Anthropic is generally focused on making its own take on reasoning models that are better differentiated. “We worry about making sure we have enough capacity, that the models get smarter, and we worry about safety things,” he said.

Model Selection Conundrum

One of the issues that Anthropic is trying to solve is the model selection conundrum. If you have a ChatGPT Plus account, for instance, it can be difficult to know which model you should pick in the model selection pop-up for your next message.

Balancing Accuracy, Speed, and Costs

The same is true for developers using large language model (LLM) APIs in their own applications: they want to balance accuracy, response speed and cost.

Smooth Transition

“We should have a smoother transition from that to pre-trained models — rather than ‘here’s thing A and here’s thing B,’” Amodei said, emphasizing that Anthropic really wants to move things in that direction.

AI Opportunity

As large AI companies like Anthropic continue to release better models, Amodei believes it will open up some great opportunities to disrupt the large businesses of the world in every industry.

Success Stories

“We’re working with some pharma companies to use Claude to write clinical studies, and they’ve been able to reduce the time it takes to write the clinical study report from 12 weeks to three days,” Amodei said. He concluded: “There’s going to be — basically — a renaissance of disruptive innovation in the AI application space. And we want to help it, we want to support it all.”

Full Coverage

Read our full coverage of the Artificial Intelligence Action Summit in Paris.
