Each of the tokens that programs like ChatGPT are trained on and informed by represents a tiny, tiny piece of material that someone else created. And those authors are not credited for it, paid for it or asked permission for its use. In a sense, these machine-learning bots are actually the most advanced form of a chop shop: They use material without their creators’ permission, cut it into parts so small that no one can trace them and then repurpose them to form new products.
That’s true of anything anyone reads. If I turn around and personally use the same argument you just made, you’re not being compensated. If I repeat it on social media and it goes viral, you’re still not compensated.
One place where we might agree: a single service, or a collection of similar services, being able to replicate the entire population of humanity reading, digesting, and redistributing these concepts and ideas is a bit different from anything in the pre-ML part of human history. You would need some singular individual with a photographic memory and the ability to answer questions from millions of people simultaneously - which clearly is not going to happen. So we just haven’t had to deal with it yet. I agree that is an important distinction.
But the original premise - that some agent can take my words, distill them into something else, and use that distillation to form a model of the world to be retransmitted - is identical to humans communicating and thinking. I’m a big proponent of sharing information, so to me this is a net positive - in fact, it turbocharges my hopes for an “information wants to be free” society.
But if you want to keep your knowledge, wisdom, and communications under lock and key and charge for entry, this is absolutely the opposite of what you want. You’re allowed to want what you want! But I’m allowed to want what I want too, which is why I won’t be joining team ban-computational-modeling.
Absolutely. And I am joining team ‘computational modeling’ too. I don’t reject this. What I’m saying is that we might need different economic and legal models to make sure that everyone can take advantage of this new tech, rather than just a few.
To give an example: if a tech company uses billions of pieces of data for free to train its model but then claims copyright on the result, it would certainly increase inequality. A lot of photographers or writers wouldn’t earn much money anymore, as their work could simply be ‘created’ by some AI.
So I’m not joining the ‘ban-computational-modeling’ team either; I just want to see that it becomes a technology for everyone. Otherwise we will just see a few more tech billionaires while the mass of people pays the bill, as has happened so often in human history.