
A recent study conducted by the AI Disclosures Project has cast doubt on the data used by OpenAI to train its large language models (LLMs), suggesting that the company’s GPT-4o model exhibits a “strong recognition” of paywalled and copyrighted content from O’Reilly Media books.

The AI Disclosures Project, a research initiative led by technologist Tim O’Reilly and economist Ilan Strauss, seeks to mitigate the potentially detrimental societal implications of AI commercialization by advocating for enhanced corporate and technological transparency. The project’s working paper draws parallels between the lack of disclosure in AI and financial disclosure standards, emphasizing the importance of robust transparency in fostering trust and accountability.

To investigate whether OpenAI’s LLMs were trained on copyrighted data without consent, the researchers utilized a legally obtained dataset of 34 copyrighted O’Reilly Media books. They employed the DE-COP membership inference attack method to determine whether the models could differentiate between human-authored O’Reilly texts and paraphrased LLM versions.
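In broad terms, DE-COP works as a multiple-choice quiz: the model is shown one verbatim passage alongside machine-paraphrased versions and asked to pick the verbatim one; picking correctly at a rate well above chance suggests the passage appeared in training data. The sketch below is a hypothetical illustration of that idea, not the study's actual harness; `ask_model` is a placeholder for a real chat-completion call, and the passages are invented.

```python
import random

def decop_trial(ask_model, original, paraphrases):
    """Present the verbatim passage mixed with paraphrases in random
    order; return True if the model identifies the verbatim one."""
    options = [original] + list(paraphrases)
    random.shuffle(options)
    letters = "ABCD"[: len(options)]
    prompt = "Which option is the verbatim text from the book?\n" + "\n".join(
        f"{letter}. {text}" for letter, text in zip(letters, options)
    )
    answer = ask_model(prompt)
    # Compare against the letter now occupying the original's position
    return answer.strip().upper().startswith(letters[options.index(original)])

def guess_rate(ask_model, trials):
    """Fraction of trials where the model picked the verbatim passage.
    About 1/len(options) indicates chance; substantially higher
    suggests memorization of the tested text."""
    hits = sum(decop_trial(ask_model, orig, paras) for orig, paras in trials)
    return hits / len(trials)
```

A real experiment would plug in an API client for `ask_model` and aggregate guess rates over many book excerpts before converting them into a classification score.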

The study’s key findings include:

  • The GPT-4o model demonstrates a strong recognition of paywalled O’Reilly book content, with an AUROC score of 82%, in contrast to OpenAI’s earlier model, GPT-3.5 Turbo, which exhibits a significantly lower level of recognition (AUROC score just above 50%)
  • GPT-4o exhibits stronger recognition of non-public O’Reilly book content compared to publicly accessible samples, with AUROC scores of 82% and 64%, respectively
  • In contrast, GPT-3.5 Turbo shows greater relative recognition of publicly accessible O’Reilly book samples than non-public ones, with AUROC scores of 64% and 54%, respectively
  • The smaller GPT-4o Mini model shows no knowledge of public or non-public O’Reilly Media content when tested, with an AUROC score of approximately 50%
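To make those percentages concrete: AUROC is the probability that a randomly chosen in-training ("member") sample receives a higher recognition score than a randomly chosen out-of-training ("non-member") sample, so 50% is coin-flip chance and 82% indicates strong separation. A minimal sketch of the metric, using made-up scores rather than anything from the study:

```python
def auroc(member_scores, nonmember_scores):
    """Pairwise AUROC: fraction of (member, non-member) pairs where
    the member scores higher, counting ties as half a win.
    Equivalent to the Mann-Whitney U statistic divided by the
    number of pairs."""
    wins = ties = 0
    for m in member_scores:
        for n in nonmember_scores:
            if m > n:
                wins += 1
            elif m == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(member_scores) * len(nonmember_scores))

# Illustrative, invented scores: perfect separation gives 1.0;
# indistinguishable distributions give 0.5 (chance, as reported
# for GPT-4o Mini).
print(auroc([0.9, 0.8, 0.7], [0.4, 0.3, 0.2]))  # -> 1.0
print(auroc([0.5, 0.5], [0.5, 0.5]))            # -> 0.5
```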

The researchers suggest that potential access violations may have occurred via the LibGen database, as all of the O'Reilly books tested were found to be available on the platform. They also acknowledge a caveat: newer LLMs are generally better at distinguishing human-authored from machine-generated language, though they argue this does not diminish the method's ability to classify training data.

The study highlights the potential for “temporal bias” in the results due to language changes over time. To account for this, the researchers tested two models (GPT-4o and GPT-4o Mini) trained on data from the same period.

The report notes that while the evidence is specific to OpenAI and O’Reilly Media books, it is likely indicative of a broader systemic issue surrounding the use of copyrighted data. The authors argue that uncompensated training data usage could lead to a decline in the internet’s content quality and diversity as revenue streams for professional content creation diminish.

The AI Disclosures Project emphasizes the need for greater accountability in AI companies’ model pre-training processes. The researchers suggest that liability provisions that incentivize improved corporate transparency in disclosing data provenance may be an essential step towards facilitating commercial markets for training data licensing and remuneration.

The EU AI Act’s disclosure requirements could help trigger a positive disclosure-standards cycle if properly specified and enforced. Ensuring that IP holders are aware of when their work has been used in model training is seen as a crucial step towards establishing AI markets for content creator data.

Despite evidence suggesting that AI companies may be obtaining data illegally for model training, a market is emerging in which AI model developers pay for content through licensing deals. Companies like Defined.ai facilitate the purchasing of training data, obtaining consent from data providers and stripping out personally identifiable information.

The report concludes by stating that, using 34 proprietary O’Reilly Media books, the study provides empirical evidence that OpenAI likely trained GPT-4o on non-public, copyrighted data.


See also: Anthropic provides insights into the ‘AI biology’ of Claude


Want to learn more about AI and big data from industry leaders? Check out the AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events, including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.
