One of the cofounders of partizle.com, a Lemmy instance primarily for nerds and techies.
Into Python, travel, computers, craft beer, whatever
I don’t even understand what the theory is. Plastic is plastic. What does it matter if it’s attached to the bottle?
Well, that’s always been the case with Skid Row, though it might be debatable which came first – the homeless encampments or the aid agencies. And for that matter, there were Hoovervilles in the Great Depression. In any city in America, there are transients milling around the shelters, which is why there’s so much NIMBYism over developing new shelters.
But what’s going on in California probably has more to do with the fact that LA and San Francisco tend to be very tolerant of the homeless encampments and provide generous aid, thus inducing demand. The homeless population is soaring across America for various reasons, but California is a desirable place to be homeless: better aid, better climate, softer police, etc.
Maybe California’s big cities really are more humane and generous, but at this point it’s to the detriment of livability in those places.
It sort of depends on where you are, but in San Francisco and Los Angeles, the homeless problem is noticeably worse than almost anywhere else in America. It’s bad.
An ex of mine lives in a pretty posh part of LA (Crestview). She works constantly and really hard to afford to live there. Now there are people literally shooting heroin on the street outside her home, and to take her toddler to play at the park, she’s basically walking around the bodies of people who are high or sleeping.
I mean, I’m as anti-drug war as they come, but that’s no way to live and the police really should clear it out. Even in the poorer parts of most other cities, that’s not something you see.
Yeah, that’s basically it.
But I think what’s getting overlooked in this conversation is that it probably doesn’t matter whether it’s AI or not. Either new content is derivative or it isn’t. That’s true whether you wrote it or an AI wrote it.
If I created a web app that took samples from songs created by Metallica, Britney Spears, Backstreet Boys, Snoop Dogg, Slayer, Eminem, Mozart, Beethoven, and hundreds of other different musicians, and allowed users to mix all these samples together into new songs, without getting a license to use these samples, the RIAA would sue the pants off of me faster than you could say “unlicensed reproduction.”
The RIAA is indeed a litigious organization, and they tend to use their phalanx of lawyers to grind anyone who does anything creative or new into submission.
But sampling is generally considered fair use.
And if the algorithm you used actually listened to tens of thousands of hours of music, and fed existing patterns into a system that creates new patterns, well, you’d be doing the same thing anyone who goes from listening to music to writing music does. The first song ever written by humans was probably plagiarized from a bird.
It wouldn’t matter, because derivative works require permission. But I don’t think anyone’s really made a compelling case that OpenAI is actually making directly derivative work.
The stronger argument is that LLMs are making transformative work, which is normally fair use, but should still require some form of compensation given the scale of it.
Her lawsuit doesn’t say that. It says,
when ChatGPT is prompted, ChatGPT generates summaries of Plaintiffs’ copyrighted works—something only possible if ChatGPT was trained on Plaintiffs’ copyrighted works
That’s an absurd claim. ChatGPT has surely read hundreds, perhaps thousands of reviews of her book. It can summarize it just like I can summarize Othello, even though I’ve never seen the play.
I haven’t been able to reproduce that, and at least so far, I haven’t seen any very compelling screenshots of it that actually match. Usually it just generates text, but that text doesn’t actually match.
If you say “AI read my book and output a similar story, so you owe me money,” then how is that different from “Joe read my book and wrote a similar story, so you owe me money”?
You’re bounded by the limits of your flesh. AI is not. The $12 you spent buying a book at Barnes & Noble was based on the economy of scarcity that your human abilities constrain you to.
It’s hard to say that the value proposition is the same for human vs AI.
A better comparison would probably be sampling. Sampling is fair use in most of the world, though there are mixed judgments. I think most reasonable people would consider the output of ChatGPT to be transformative use, which is considered fair use.
No, it isn’t. There are enumerated rights a copyright grants the holder a monopoly over. They are reproduction, derivative works, public performances, public displays, distribution, and digital transmission.
Commercial vs non-commercial has nothing to do with it, nor does field of endeavor. And aside from the granted monopoly, no other rights are granted. A copyright does not let you decide how your work is used once sold.
I don’t know where you guys get these ideas.
The published summary is open to fair use by web crawlers. That was settled in Perfect 10 v. Amazon.
Derivative and transformative are quite different though.
I very much agree.
The thing is, copyright isn’t really well-suited to the task, because copyright concerns itself with who gets to, well, make copies. Training an AI model isn’t really making a copy of that work. It’s transformative.
Should there be some kind of new model of remuneration for creators? Probably. But it should be a compulsory licensing model.
If I gave a worker a pirated link to several books and scientific papers in the field, and asked them to synthesize an overview/summary of what they read and publish it, I’d get my ass sued. I have to buy the books and the scientific papers.
Well, if OpenAI knowingly used pirated work, that’s one thing. It seems pretty unlikely and certainly hasn’t been proven anywhere.
Of course, they could have done so unknowingly. For example, if John C Pirate published the transcripts of every movie since 1980 on his website, and OpenAI merely crawled his website (in the same way Google does), it’s hard to make the case that they’re really at fault any more than Google would be.
Yes. I do. And I’m right.
There is already a business model for compensating authors: it is called buying the book. If the AI trainers are pirating books, then yeah - sue them.
That’s part of the allegation, but it’s unsubstantiated. It isn’t entirely coherent.
You meet them online, but they’re a vocal minority. Especially since a smaller phone means a smaller battery and a worse camera system, two of the things consumers consistently rank as top priorities.