
FutureHouse, a nonprofit organization backed by Eric Schmidt, has launched its first major product, a platform and API with AI-powered tools designed to support scientific work, as part of its goal to build an “AI scientist” within the next decade.

AI research tools for science are a highly competitive field, with numerous startups, some with substantial VC funding, racing to build them. Tech giants such as Google have also signaled enthusiasm for AI in science, unveiling the “AI co-scientist,” a system designed to help scientists generate hypotheses and experimental research plans.

The CEOs of OpenAI and Anthropic have stated that AI tools have the potential to significantly accelerate scientific discovery, particularly in fields like medicine. However, many researchers remain skeptical about the ability of AI to guide the scientific process due to its unreliability and limitations in areas such as out-of-the-box problem-solving.

FutureHouse has released four AI tools: Crow, Falcon, Owl, and Phoenix. Each supports a different aspect of scientific research: Crow searches the scientific literature and answers questions about it, Falcon conducts deeper literature searches, Owl checks whether an experiment has been done before, and Phoenix helps plan chemistry experiments.

According to FutureHouse, its AI tools have access to a vast corpus of high-quality open-access papers and specialized scientific tools, which sets them apart from other AI systems. The company also emphasizes the transparent reasoning and multi-stage process used by its AI tools to consider each source in more depth. By combining these tools, scientists can potentially accelerate the pace of scientific discovery.

Notably, FutureHouse has yet to achieve a scientific breakthrough or make a novel discovery with its AI tools, underscoring how difficult building an effective “AI scientist” remains.

The development of an “AI scientist” is complicated by the need to anticipate and address numerous confounding factors. While AI may be useful in areas that require broad exploration, its ability to solve complex problems and lead to genuine breakthroughs is still uncertain.


To date, the results from AI systems designed for science have been underwhelming. For example, in 2023, Google reported that one of its AIs had helped synthesize around 40 new materials, but an outside analysis found that none of these materials were actually new.

The technical shortcomings and risks associated with AI, such as its tendency to hallucinate, also raise concerns among scientists about the reliability of AI for serious scientific work.

FutureHouse acknowledges that its AI tools, particularly Phoenix, may make mistakes and is encouraging users to provide feedback to improve the tools. The company’s approach is centered around rapid iteration and continuous improvement.

“We are releasing [this] now in the spirit of rapid iteration,” the company writes in its blog post. “Please provide feedback as you use it.”



