Here’s a question that can send a generative AI company into a spin: “What content has been used to train your models?” While some opt to dodge the question, and others bullishly brazen it out, the question of whether an AI company has scraped content for its own business purposes without permission is a thorny one.
At best, you’re likely to get a mealy-mouthed answer about “curated datasets”, and at worst, a polemic about how everything on the internet is essentially fair game.
Now a document obtained by 404media appears to show that part of the data used to train Runway’s latest AI video generation tool, Gen-3, may have come from thousands of YouTube channels, including those of popular media companies such as Pixar, Netflix, Disney and Sony.
While 404media doesn’t go into detail about how the document was obtained, and couldn’t verify that every video mentioned within was used to train Gen-3, it potentially offers an insight into the sort of practices an AI company might use to scrape copyrighted material to train its models.
A former Runway employee spoke to 404media about the methodology involved. The 14 spreadsheets contained within the leaked document are said to feature terms like “beach” or “rain”, with the names of Runway employees next to them.
According to the source, these were the employees tasked with finding videos or channels related to each keyword; they would then use a YouTube video downloader tool via a proxy to scrape the content from the site without being blocked by Google.
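To give a sense of how simple such a pipeline can be, here is a minimal sketch of a keyword-driven downloader. It uses the open-source yt-dlp library purely as a stand-in; the actual tool, proxy address and search terms used by Runway are not named in the leaked document, so everything below is an assumption for illustration only.

```python
# Hypothetical sketch of a keyword-driven download pipeline.
# yt-dlp, the proxy URL and the search terms are all placeholders,
# not details taken from the leaked document.
import yt_dlp

SEARCH_TERMS = ["beach", "rain"]          # keywords like those in the spreadsheets
PROXY_URL = "http://proxy.example:8080"   # hypothetical proxy to avoid IP blocks

ydl_opts = {
    "proxy": PROXY_URL,                   # route requests through the proxy
    "format": "mp4/best",                 # prefer a single mp4 file where available
    "outtmpl": "downloads/%(title)s.%(ext)s",
}

with yt_dlp.YoutubeDL(ydl_opts) as ydl:
    for term in SEARCH_TERMS:
        # "ytsearch5:" fetches the top five YouTube search results per keyword
        ydl.download([f"ytsearch5:{term}"])
```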
It’s not just YouTube content that looks to have been scraped, either. One spreadsheet contains 14 links to non-YouTube sources, including a link to a website dedicated to streaming popular cartoons and animated movies that has thousands of copyright complaints logged against it.
Essentially, pirated media looks to have been at least under consideration for training data, if not directly scraped and used.
404media went one step further and used Gen-3 to generate video with prompts built from the keywords found in the spreadsheets, producing clips that looked very much in the style of the associated content.
Runway is itself part-funded by Google, among others, so scraping content from creators on Google’s own platform without permission, if true, is likely to land it in significant hot water, never mind the potential wider legal repercussions.
Still, whatever the provenance of its training data, the model itself appears to have issues. Ars Technica recently tried creating some videos with Gen-3 Alpha, and it gave a cat a pair of human hands. I’m not sure what content was used to train that particular version of the model, but no matter the methodology, it could do with some work one way or the other.