AI models can produce poems that rate well on certain ‘metrics’. But the event of reading poetry is not one in which we arrive at standardised outcomes.
Yes, technically you are correct.
But current technology will not do that. LLMs are not going to get much better; we need more complex setups.
Current LLMs generate poems that people prefer to human-written poetry. Current image generators win art contests. They don’t need to get better to produce more appealing art than humans. Maybe not every time, and maybe the people writing the prompts and filtering the results are an essential part of producing quality output, but there’s no extra trick needed for people to find the results aesthetically appealing.
It’s like the old ‘a million monkeys on a million typewriters will eventually write Shakespeare’.
This research didn’t use a million poems; it used 5 human poems and 5 generated poems. The 5 generated poems were simply the first 5 produced; the researchers did not have a human curate them from a larger pool.
It’s “infinite monkeys on infinite typewriters” because a million would be far too small a sample size to expect Shakespeare. The monkeys aren’t trying to make anything; they’re just randomly hitting keys. For Shakespeare to come out, there would likely need to be more monkeys than there are atoms in the universe. Meanwhile, we’re getting something people enjoy from AI right now. No need to approach infinity. It’s not what most people wanted AI to be used for, but it’s succeeding at it, and current models have only been around for a few years. This isn’t random chance happening upon something we like - this is a pattern-recognizing machine getting progressively better at recognizing the patterns we enjoy.
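For anyone curious about the rough arithmetic behind the “more monkeys than atoms” claim, here’s a back-of-envelope sketch. The numbers are my own assumptions (a 27-key typewriter and roughly 130,000 characters in Hamlet), not anything from the study:

```python
from math import log10

# Back-of-envelope: odds of a purely random typist producing Hamlet.
# Assumptions (mine): 27 equally likely keys (26 letters + space),
# and Hamlet at roughly 130,000 characters.
keys = 27
hamlet_chars = 130_000

# P(one random 130,000-character string is exactly Hamlet) = (1/27)^130000.
# Work in log10 so the probability doesn't underflow to zero.
log10_p = -hamlet_chars * log10(keys)

print(f"P(a single attempt is Hamlet) ~ 10^{log10_p:,.0f}")   # ~ 10^-186,000
print(f"Expected attempts needed      ~ 10^{-log10_p:,.0f}")  # ~ 10^186,000
```

Even one 40-character line already needs on the order of 10^57 random attempts, and the full play dwarfs the roughly 10^80 atoms in the observable universe, which is why the thought experiment needs “infinite” rather than “a million”.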
Yes, because ‘these monkeys’ have been reading all of the content humans have created; it’s not really a fair comparison to the infinite scale of pure randomness.
I would argue against the claim that pattern machines are getting better at recognizing patterns, but I don’t think it would change any minds.
Yes, I agree it’s a bad comparison. That’s why I said as much in my response to your comment that brought it up.
Though the current models have only been around for a few years, pattern-recognition programs have been around for a long time. The latest ones are just better models… because they keep getting better.
The monkeys are just random chance - if you don’t yet have Shakespeare, you’re no more likely to get it than when you started - but pattern recognition software is steadily improving. If it’s not at whatever benchmark you want it to hit, it’s at least closer than it was 10 years ago, and it will continue getting closer over time.
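To make that contrast concrete, here’s a tiny sketch under my own toy assumption that each random attempt succeeds independently with some fixed probability p:

```python
# The "no more likely than when you started" point: independent random
# trials are memoryless, so past failures don't improve the next attempt.
# Assumption (mine): each attempt succeeds independently with probability p.
p = 1e-9  # placeholder per-attempt success probability

for failures_so_far in (0, 1_000, 1_000_000_000):
    # Conditioned on every previous attempt failing, the next attempt
    # still succeeds with exactly probability p.
    print(f"{failures_so_far:>13,} failures so far -> P(next succeeds) = {p}")

# A trained model, by contrast, changes between attempts, so its hit rate
# can climb over time rather than staying flat.
```

Nothing about the monkeys’ past output feeds into their next keystroke; a model’s past training is exactly what shapes its next output.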