Ray Seilie recently weighed in on major developments unfolding in AI copyright litigation for IndieWire, NPR, and Slate. In IndieWire’s article about Disney and NBCUniversal’s lawsuit against AI firm Midjourney, which could set a precedent for how AI firms train their models, Ray emphasizes that complete alignment between major studios and creatives is rare in the entertainment industry, making the case all the more consequential for artists.
“It’s going to be an important case that’ll affect the rights held by almost all creatives, regardless of how large they are,” Ray tells IndieWire. “It’s a rare alliance in the legal industry, or the entertainment legal industry, where you see studios actually doing something that artists are 100 percent behind.”
In NPR, Ray analyzes recent rulings involving Meta’s and Anthropic’s AI training practices. In a high-profile San Francisco case against Meta (Richard Kadrey, et al. v. Meta Platforms, Inc.), the district judge found that Meta’s use of written materials to train its models was “transformative” and fell under fair use. That same week, another San Francisco district judge, overseeing a suit brought against Anthropic by a group of authors (Bartz v. Anthropic), found that the authors had not provided enough evidence of market dilution resulting from the AI’s training practices and that the use of their written works likewise fell under fair use.
“These rulings are going to help tech companies and copyright holders to see where judges and courts are likely to go in the future,” Ray shares with NPR. “I think they can be seen as a victory for the AI community writ large because they create a precedent suggesting that AI companies can use legally obtained material to train their models.”
In Slate, Ray discusses Bartz v. Anthropic further, explaining that while fair use does not allow AI chatbots to generate responses that exactly copy or substantially resemble the authors’ works in their training data, companies may still train their models on those works so long as they use filters that prevent such output.
“The Anthropic LLM implements filters so that if you have a user who asks for basically an entire work, the LLM is not going to give them that,” Ray notes in Slate.