It's a definition, but not an effective one, in the sense of something we can test for and recognize.
Can we list all cognitive tasks a human can do? To avoid testing a probably infinite list, we should instead try to understand which basic cognitive abilities of humans compose all the other cognitive abilities we have, if that's even possible.
Something like the equivalent of a Turing machine, but for human cognition. The Turing machine is based on a finite list of mechanisms, and it is considered the ultimate computer (in the classical sense of computing, but with potentially infinite memory). But we know too little about whether the limits of the Turing machine are also limits of human cognition.
But we know too little about whether the limits of the Turing machine are also limits of human cognition.
Erm, no. Humans can manually step interpreters of Turing-complete languages, so we're Turing-complete ourselves. There is no more powerful class of computation; we can compute any computable function, and our silicon computers can do it as well (given infinite time and scratch space, yada yada theoretical wibbles).
The question isn’t “whether”; the answer to that is “yes, of course”. The question is first and foremost “what” and then “how”, as in “is it fast and efficient enough”.
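To make the “manually step an interpreter” point concrete, here is a minimal sketch of a Turing machine stepper, small enough for a patient human to execute with pencil and paper. The bit-flipping rule table is just an illustrative example, nothing canonical:

```python
# Minimal Turing machine stepper. A human with paper can do exactly this:
# read the symbol under the head, look up the rule, write, move, repeat.
def run_tm(rules, tape, state="s", pos=0, blank="_", halt="h"):
    cells = dict(enumerate(tape))  # sparse dict stands in for an infinite tape
    while state != halt:
        sym = cells.get(pos, blank)
        write, move, state = rules[(state, sym)]  # (write, L/R, next state)
        cells[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Example rule table: flip every bit, halt at the first blank.
flip = {
    ("s", "0"): ("1", "R", "s"),
    ("s", "1"): ("0", "R", "s"),
    ("s", "_"): ("_", "R", "h"),
}
print(run_tm(flip, "0110"))  # -> 1001_
```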
No, you misread what I said. Of course humans are at least as powerful as a Turing machine, I'm not questioning that. What is unknown is whether Turing machines are as powerful as human cognition. Who says every brain operation is computable (in the classical sense)? Who is to say the brain doesn't take advantage of some weird physical phenomenon that isn't classically computable?
Who is to say the brain doesn't take advantage of some weird physical phenomenon that isn't classically computable?
Logic, from which follows the incompleteness theorem, reified in material reality as cause and effect. Instead of completeness you could throw out soundness (that is, throw out cause and effect) but now the physicists are after you because you made them fend off even more Boltzmann brains. There is theory on hypercomputation but all it really boils down to is “if incomputable inputs are allowed, then we can compute the incomputable”. It should be called reasoning modulo oracles.
Or, put bluntly: Claiming that brains are legit hypercomputers amounts to saying that humanity is supernatural, as in aphysical. Even if that were the case, what would hinder an AI from harnessing the same supernatural phenomenon? The gods?
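To sketch what I mean by “reasoning modulo oracles” (this is an illustration, not any standard API; the oracle parameter is the hypothetical part):

```python
# Hypercomputation results are conditional: *given* a magic subroutine for
# an uncomputable set, other things become decidable relative to it. The
# oracle is the hypothetical ingredient; the rest is ordinary computation.
def with_halting_oracle(oracle_halts):
    """oracle_halts(p, x) is ASSUMED to answer "does p halt on x?" --
    no total, correct implementation of it can exist."""
    def halts_on_all(p, inputs):
        # Relative to the oracle, halting on a finite list of inputs
        # is just a finite conjunction.
        return all(oracle_halts(p, x) for x in inputs)
    return halts_on_all
```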
You say an incompleteness theorem implies that brains are computable? Then you consider the possibility of them being hypercomputers? What is this?
I'm not saying brains are hypercomputers, just that we don't know whether that's the case.
If you think that would be “supernatural”, OK, I don't mind. And I don't object to the possibility of eventually having AI on hypercomputers. All I said is that the plain old Turing machine wouldn't be an adequate model for human cognitive capacity in this scenario.
You say an incompleteness theorem implies that brains are computable?
No, I’m saying that incompleteness implies that either cause and effect does not exist, or there exist incomputable functions. That follows from considering the universe, or its collection of laws, as a logical system; all such systems are bound by the incompleteness theorem once they reach a certain expressivity.
All I said is that the plain old Turing machine wouldn't be an adequate model for human cognitive capacity in this scenario.
Adequate in which sense? Architecturally, of course not, and neither would be lambda calculus or other common models. I’m not talking about specific abstract machines, though, but Turing-completeness, that is, the property of the set of all abstract machines that are as computationally powerful as Turing machines, and can all simulate each other. Those are a dime a gazillion.
Or, see it this way: Imagine a perfect, virtual representation of a human brain stored on an ordinary computer. That computer is powerful enough to simulate all physical laws relevant to the functioning of a human brain… it might take a million years to simulate a second of brain time, but so be it. Such a system would be AGI (for ethically dubious values of “artificial”). That is why I say the “whether” is not the question: We know it is possible. We’ve in fact done it for simpler organisms. The question is how to do it with reasonable efficiency, and that requires an understanding of how the brain does the computations it does so we can mold it directly into silicon instead of going via several steps of one machine simulating another machine, each time incurring simulation overhead from architectural mismatch.
OK. So nothing you said backs the claim that “logic” implies the brain cannot be using some uncomputable physical phenomenon, and so be itself uncomputable.
I’m not sure about what you mean by “cause and effect” existing. Does it mean that the universe follows a set of laws?
If cause and effect exists, then the disjunction you say is implied by the incompleteness theorem entails that there are uncomputable functions, which I take to mean that there are uncomputable oracles in the physical world.
But I still find your use of incompleteness suspicious. We take the set of laws governing the universe and turn it into a formal system. How? Does the resulting formal system really meet all conditions of the incompleteness theorem? Expressivity is just one of many conditions. Even then, the incompleteness theorem says we can’t effectively axiomatize the system… so what?
Adequate in which sense?
I don't mean just architecturally: the Turing machine wouldn't be adequate to model the brain in the sense that the brain, in that hypothetical scenario, would be a hypercomputer, and so by definition could not be simulated by a Turing machine. As simple as that. My statement there was almost a tautology.
entails that there are uncomputable functions, which I take to mean that there are uncomputable oracles in the physical world.
It means that there are functions that are not computable. You cannot, for example, write a program that decides, in finite time, whether an arbitrary program halts on a particular input. If you doubt that, have an easy-going explanation of the proof by diagonalisation.
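Here is that diagonalisation compressed into a sketch; `halts` is the hypothetical decider we assume for contradiction, so the snippet is an argument, not a working program:

```python
def halts(p, x) -> bool:
    """ASSUMED total, correct halting decider -- it exists only for the
    sake of contradiction; no real implementation is possible."""
    raise NotImplementedError

def diag(p):
    # Do the opposite of whatever `halts` predicts for p run on itself.
    if halts(p, p):
        while True:      # p(p) predicted to halt? Then loop forever.
            pass
    return None          # p(p) predicted to loop? Then halt at once.

# Now consider diag(diag):
#   halts(diag, diag) == True  => diag(diag) loops  => `halts` was wrong.
#   halts(diag, diag) == False => diag(diag) halts  => `halts` was wrong.
# Either way the assumed decider fails on some input, so it cannot exist.
```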
We take the set of laws governing the universe and turn it into a formal system. How?
Ask a physicist, that’s their department, not mine. Also, I’d argue that the universe itself is a formal system, and lots of physicists would agree; they’re on the whole computability and complexity theory train. They may or may not agree with the claim that computer science is more fundamental than physics; we’re still working on that one.
Does the resulting formal system really meet all conditions of the incompleteness theorem?
Easily, because it will have to express the natural numbers. Have a Veritasium video on the whole thing. The two results (incompleteness and incomputability) are fundamentally linked.
the Turing machine wouldn't be adequate to model the brain in the sense that the brain, in that hypothetical scenario, would be a hypercomputer,
If the brain is a hypercomputer then, as already said, you’re not talking physics any more, you’re in the realms of ex falso quodlibet.
Hypercomputers are just as impossible as a village barber who shaves everyone in the village who does not shave themselves: if the barber shaves himself, then he doesn’t shave himself, and if he doesn’t shave himself, then he shaves himself. Try to imagine a universe in which that’s not a paradox; that’s the kind of universe you’re claiming we’re living in when you’re claiming that hypercomputers exist.
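You can even check the finite version mechanically. A brute force over every possible shaving arrangement in a toy three-person village (the village size is arbitrary) finds no barber, because the condition contradicts itself at x = b:

```python
from itertools import product

# Enumerate all 2**9 possible "shaves" relations on three people and look
# for a barber b with: shaves(b, x) iff not shaves(x, x), for every x.
people = range(3)
found = False
for bits in product([False, True], repeat=len(people) ** 2):
    shaves = {(x, y): bits[x * len(people) + y] for x in people for y in people}
    for b in people:
        # At x == b this demands shaves(b, b) == not shaves(b, b),
        # which no assignment can satisfy.
        if all(shaves[(b, x)] == (not shaves[(x, x)]) for x in people):
            found = True
print("barber found:", found)  # -> barber found: False
```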
You mention a lot of theory that does exist, but your arguments make no sense. You might want to study the incompleteness theorems more in depth before continuing to cite them like that. The book Gödel's Proof by Nagel and Newman is a good start for going beyond these YouTube expositions.
I wonder if we’ll get something like NP-complete for AGI, as in a set of problems that humans can solve, or that common problems can be reduced or converted to.
As with many things, it’s hard to pinpoint the exact moment when narrow AI or pre-AGI transitions into true AGI. However, the definition is clear enough that we can confidently look at something like ChatGPT and say it’s not AGI - nor is it anywhere close. There’s likely a gray area between narrow AI and true AGI where it’s difficult to judge whether what we have qualifies, but once we truly reach AGI, I think it will be undeniable.
I doubt it will remain at “human level” for long. Even if it were no more intelligent than humans, it would still process information millions of times faster, possess near-infinite memory, and have access to all existing information. A system like this would almost certainly be so obviously superintelligent that there would be no question about whether it qualifies as AGI.
I think this is similar to the discussion about when a fetus becomes a person. It may not be possible to pinpoint a specific moment, but we can still look at an embryo and confidently say that it’s not a person, just as we can look at a newborn baby and say that it definitely is. In this analogy, the embryo is ChatGPT, and the baby is AGI.