cross-posted from: https://programming.dev/post/8121669
Japan determines copyright doesn’t apply to LLM/ML training data.
On a global scale, Japan’s move adds a twist to the regulation debate. Current discussions have focused on a “rogue nation” scenario where a less developed country might disregard a global framework to gain an advantage. But with Japan, we see a different dynamic. The world’s third-largest economy is saying it won’t hinder AI research and development. Plus, it’s prepared to leverage this new technology to compete directly with the West.
I am going to live in the sea.
www.biia.com/japan-goes-all-in-copyright-doesnt-apply-to-ai-training/
I think this is a difficult concept to tackle, but the main argument I see about using existing works as ‘training data’ is the idea that ‘everything is a remix’.
I, as a human, can paint an exact copy of a work by Picasso or any other artist. This is not illegal and I have no need of a license to do this. I definitely don’t need a license to paint something ‘in the style of Picasso’, and I can definitely sell it with my own name on it.
But the question is, what about when a computer does the same thing? What is the difference? Speed? Scale? Anyone can view a picture of the Mona Lisa at any time and make their own painting of it. You can’t use the image of the Mona Lisa without accreditation and licensing, but what about a recreation of the Mona Lisa?
I’m not really arguing pro-AI here, although it may sound like it. I’ve just heard the ‘licensing’ argument many times and I’d really like to hear what the difference between a human copying and a computer copying is, if someone knows more about the law.
Um - your examples are so old the copyright expired centuries ago. Of course you can copy them. And you can absolutely use an image of the Mona Lisa without accreditation or licensing.
Painting and selling an exact copy of a recent work, such as a Banksy, is a crime.
… however making an exact copy of Banksy for personal use, or to learn, or to teach other people, or copying the style… that’s all perfectly legal.
I don’t think this is a black and white issue. Using AI to copy something might be a crime. You absolutely can use it to infringe on copyright. The real question is who’s at fault? I would argue the person who asked the AI to create the copy is at fault - not the company running the servers.
deleted
Huh? What does being non-profit have to do with it? Private companies are allowed to learn from copyrighted work. Microsoft and Apple, for example, look at each other’s software and copy ideas (not code, just ideas) all the time. The fact that Linux is non-profit doesn’t give them any additional rights or protection.
They’re not gatekeeping LLMs though; there are publicly available models and data sets.
If it’s publicly available, why didn’t Microsoft just download and use it rather than paying them for a partnership?
(And where at?)
IIRC they only open-sourced some old stuff.
Stable Diffusion is open source. You can run a local instance with freely available models and training data, query it yourself, and generate your own outputs (a rough sketch of a local setup is below).
https://stability.ai/
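For illustration, here’s a minimal sketch of what running it locally can look like, using the Hugging Face diffusers library (my assumption - there are other front-ends too); the checkpoint name and prompt are only placeholders, so substitute whatever publicly released model you actually have access to:

```python
# Minimal local Stable Diffusion sketch using the Hugging Face `diffusers` library.
# Assumes Python with `diffusers`, `transformers`, and `torch` installed, plus a CUDA GPU.
import torch
from diffusers import StableDiffusionPipeline

# The model ID is just an example checkpoint; swap in any Stable Diffusion
# weights you have locally or can download.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # use .to("cpu") if you have no GPU, but expect it to be slow

# Generation happens entirely on your own machine - no API calls, no hosted service.
image = pipe("a watercolor painting of a lighthouse at dusk").images[0]
image.save("output.png")
```

Once the weights are downloaded, nothing in that snippet talks to a remote service; the whole pipeline runs on your own hardware.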
Thanks for your response. I realize I muddied the waters on my question by mentioning exact copies.
My real question is based on the ‘everything is a remix’ idea. I can create a work ‘in the style of Banksy’ and sell it. The US copyright and trademark laws state that a work only has to be 10% differentiated from the original in order to be legal to use, so creating a piece of work that ‘looks like it could have been created by Banksy, but was not created by Banksy’ is legal.
So since most AI does not create exact copies, this is where I find the licensing argument possibly weak. I really haven’t seen AI like Midjourney creating exact replicas of works - but admittedly, I am not following every single piece of art created on Midjourney, or Stable Diffusion, or DALL-E, or any of the other platforms, and I’m not enough of an expert in trademark law to answer these questions.
I’m pretty sure the law doesn’t say that. In the Blurred Lines copyright case, for example, the overlap was far less than 10% - probably less than 1% - and it was still unclear whether it was infringement. It took five years of lawsuits to reach a conclusion, and a split one at that: the first court found it infringing, and on appeal a panel of judges upheld that finding only by a divided vote. So even after five years there wasn’t a clear, unanimous answer on whether or not it was copyright infringement.
Copyright is incredibly complex and unclear.
I don’t have a source to cite, but I did read an article in which a bad-faith actor deliberately used AI to copy images directly, and while the results weren’t exact replicas, they were reasonable facsimiles of the originals - close enough that if a human had created them without AI, it would have been blatant copyright infringement, despite not being quite identical.
I wish I had the examples on hand to show, but it was months ago, and unfortunately I have neither the skills nor the time to dig it up.
To be at fault, the user would have to know that the AI creation they distributed infringes copyright. How can you tell? Is everyone supposed to do months of research just to be vaguely sure it isn’t too much like someone else’s work?
Even if you had an AI trained on only public domain assets you could still end up putting in the words that generate something copyrighted.
Companies created a random copyright infringement tool for users to randomly infringe copyright.
The same way you can tell if you repainted a Banksy yourself. If you don’t realize it and monetize it anyway, you’re liable for a copyright lawsuit regardless of how you created the piece in question.
And if no one can detect similarities beyond influences, then it isn’t infringing anything.
You may recognize a Banksy, but to someone else it’s as if I said they ought to know their work looks like one from ‘Coinsey’ - who?
This is exacerbated when people can create creative works via AI while having even less knowledge than peers who know how to DIY. A potentially life-ruining lawsuit is a bad way to find out you can’t monetize something.
If only there was some way to find out prior to selling stuff as if you made it. If only. Darn it!
I don’t understand. If I make something myself, that doesn’t mean I’m not infringing on someone else’s work.
Point: regardless of HOW it was made, the process of figuring out whether it infringes on something is the same. It’s still not always easy, and due to the shittiness of current IP laws, even long-time professional artists sometimes make mistakes.
In the end it’s just about money.
I am only familiar with SEGA owning a software patent on Crazy Taxi’s “arrow above the car points where to go” because my interest in creating games happened to lead me to an article saying so.
That seems related to HOW my works are made, to me. I know of no other way to find that out.
Your example is a dude who paints unsolicited on other people’s property. What kind of copyright does a ghost have?
A surprising amount, though it would potentially be quite difficult to prove.
I should paint some shit on your house and then sue you for displaying it.
Here’s the thing… generative AI had a plagiarism/remix phase, and it raised some serious questions about copyright.
It lasted for a matter of weeks.
We’re all still stuck on it, but go to civit.ai.
Play with it. Look at what people are creating.
If you’re not convinced, put up a bounty for something extremely specific.
Art has changed. There’s no putting it back in the bottle; this is the tiniest leading edge of the singularity.
Just a small warning: I played around with civit.ai, tried to make some images, and also wanted to try some NSFW ones. Be really careful what you prompt - I accidentally generated images with very young people that I never intended.