There is this 2023 study from Stanford which argues that AI models likely do not have emergent abilities LINK
And there is this 2020 study by OpenAI which found that the error rate is predictable from three factors (model size, dataset size, and compute), and that models cannot cross below that predicted line or approach a 0 error rate without exponentially increasing costs several iterations beyond current models, lending to the idea that they're predictable to a fault LINK
There is another paper by DeepMind in 2022 which concludes that even at infinite scale the loss can never drop below an irreducible error of 1.69 LINK
All of this lends support to the idea that AI lacks the same emergent behavior found in human language.
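The 2022 DeepMind figure is presumably the Chinchilla scaling-law fit (Hoffmann et al., 2022), which models pretraining loss as an irreducible constant plus power-law terms in parameter count and training tokens. Here's a minimal sketch in Python; the constants are the fitted values reported in that paper, and treating them as exact is an assumption on my part:

```python
# Chinchilla-style parametric scaling law (Hoffmann et al., 2022):
#   L(N, D) = E + A / N**alpha + B / D**beta
# E is the irreducible loss the post refers to (~1.69 nats);
# A, B, alpha, beta are the paper's reported fits (assumed here).
E, A, B = 1.69, 406.4, 410.7
alpha, beta = 0.34, 0.28

def loss(n_params: float, n_tokens: float) -> float:
    """Predicted pretraining loss for a model with n_params parameters
    trained on n_tokens tokens."""
    return E + A / n_params**alpha + B / n_tokens**beta

# Loss keeps falling with scale, but the power-law terms only shrink
# toward zero, so it can never cross below E, even at infinite scale.
for n in (1e8, 1e10, 1e12):
    print(f"N={n:.0e}: predicted loss {loss(n, 1e12):.3f}")
```

The point the post is making falls out of the formula's shape: both correction terms decay polynomially, so each additional decrement of loss costs exponentially more parameters and tokens, and the floor `E` is never crossed.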
Hey, would you have a reference for this? I’d love to read it. Does it apply to deep neural nets? And/or recurrent NNs?