03-19-2018, 01:17 PM | #101
Join Date: Jun 2005
Location: Lawrence, KS
Re: No AI/No Supercomputers: Complexity Limits?
In the first place, if the brain is a physical system, it is describable by various physical theories, and ultimately (so far as we know) by quantum mechanics. Quantum mechanics uses arithmetic, and arithmetic is a formal system, and in fact is the prototype of a formal system that contains undecidable propositions (I believe there are very simple formal systems for which such problems are not an issue). Therefore there are undecidable propositions about the brain.

But the concept of a "formal system" and that of a "computer" both ultimately derive from the effort to characterize what can be shown logically, by providing rigorous models of what a human logician is capable of. Therefore, to the best of our knowledge, anything that a human being can prove, or decide logically, can be decided by a formal system; if something can't be decided by a formal system, then a human being can't decide it logically.

My personal model (here we enter the realm of speculation) is that when a human being "decides" something, what they are doing is projecting a possible action, and its future consequences; assessing their desirability (or "utility," though I'm skeptical about the actual existence of utility); and then either accepting them as sufficiently desirable, or going back, choosing a different course of action, and doing the same process, iteratively. This is clearly a self-referential process, and therefore can give rise to the same kinds of paradox that Gödel dealt with.

And in fact, there seem to be situations where people CANNOT make decisions, where in effect they oscillate back and forth between two options, finding each unacceptable (like the series 1, -1, 1, -1, ...), or even enter a divergent series of possible future outcomes. The phenomena of "free will," such as "I can't predict the future" and "I can always do the opposite of what you predicted I would do," seem to arise from this very property of self-referentiality, and from language making us capable of it.
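The project/assess/revise loop described above can be sketched in code. This is a speculative toy model only: the names `propose`, `utility`, and `threshold` are illustrative inventions, not drawn from any real cognitive architecture, and the iteration cap stands in for the "oscillation" case where no option is ever acceptable.

```python
import random

def decide(propose, utility, threshold, max_iters=100):
    """Toy model of the iterative decision loop described above.

    propose:   callable returning a candidate action
    utility:   callable scoring an action's projected consequences
    threshold: minimum score at which a candidate is accepted

    Returns the accepted action, or None if the loop cycles past
    max_iters without settling -- the indecision/oscillation case.
    """
    for _ in range(max_iters):
        action = propose()
        if utility(action) >= threshold:
            return action   # accept: projected outcome is good enough
        # otherwise go back, choose another course of action, repeat
    return None             # never settled on anything: indecision

# Example: look for a number whose square is at least 50.
rng = random.Random(0)
choice = decide(lambda: rng.randint(1, 10), lambda a: a * a, 50)
```

The `max_iters` guard is exactly what keeps this sketch from diverging the way the post describes real deliberation can.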
__________________
Bill Stoddard
I don't think we're in Oz any more.
03-19-2018, 01:32 PM | #102
Join Date: Feb 2005
Location: Berkeley, CA
|
Re: No AI/No Supercomputers: Complexity Limits?
That's only a problem if the proposition is expressed within the same system that is doing the deciding.
03-19-2018, 03:16 PM | #103
Join Date: Jun 2006
|
Re: No AI/No Supercomputers: Complexity Limits?
I don't think that's true. Minds don't actually *solve* those sorts of problems; they simply stop working on them. I can write that behavior into a computer program perfectly well: pick a maximum amount of resources to devote to the problem before you start, and stop when you solve it or they run out.
__________________
-- MA Lloyd
03-19-2018, 05:32 PM | #104
Join Date: Feb 2012
|
Re: No AI/No Supercomputers: Complexity Limits?
The mind can both encompass the whole mathematics and at the same time decide on the truth value of any unprovable proposition, so it cannot be reproduced by a formal system. This makes your argument invalid.
03-19-2018, 05:42 PM | #105
Join Date: Feb 2005
Location: Berkeley, CA
|
Re: No AI/No Supercomputers: Complexity Limits?
The mind, being finite, cannot encompass a system that includes itself.
03-19-2018, 05:51 PM | #106
Join Date: Jun 2005
Location: Lawrence, KS
|
Re: No AI/No Supercomputers: Complexity Limits?
And also, what do you mean by "the whole mathematics"? Back when I was working on GURPS Who's Who, I read the claim that John von Neumann was the last human being to understand the entirety of mathematics; and mathematics has grown exponentially since his death. So I'm not sure if there is any entity now existing that can do what you describe.
__________________
Bill Stoddard
I don't think we're in Oz any more.

Last edited by whswhs; 03-19-2018 at 06:09 PM.
03-19-2018, 05:52 PM | #107
Join Date: Feb 2012
|
Re: No AI/No Supercomputers: Complexity Limits?
@Anthony
1. Exactly. You need to be outside the system x, and of course that vantage point is itself another system (x). The system (x) has the same problem, so you need a further system ((x)) to make statements about it. This is a recursive problem that yields infinite metalevels, as you will always need a further system.

2. That's very interesting; I'd be glad if you could elaborate (when you have the time and inclination).

3. Then maybe we will never be able to create a mind. If we ever do, it will be through a revolution in science that we can't yet imagine.
03-19-2018, 05:58 PM | #108
Join Date: Feb 2005
Location: Berkeley, CA
|
Re: No AI/No Supercomputers: Complexity Limits?
You're making the assumption that we need to be able to fully comprehend a mind to create one. We don't, as long as we aren't trying to contain it within a human's memory.
03-19-2018, 06:04 PM | #109
Join Date: Feb 2012
|
Re: No AI/No Supercomputers: Complexity Limits?
Not really. Sorry if I have been unclear. We can decide logically whether an undecidable proposition is true or false, something that a formal system can't.
Of course, by "formal system" I am referring not only to a specific formal axiomatic system, but to any set of formal systems. We can add layers to solve the problem from outside the system, but the problem is identical in the new metasystem, so we need further levels ad infinitum. To compact the levels we can design a further system which generates (that is, contains) all the infinite levels, but then we are back at the starting point: there will be unprovable propositions, and we will need further systems without end.
03-19-2018, 06:05 PM | #110
Join Date: Sep 2007
|
Re: No AI/No Supercomputers: Complexity Limits?
An "undecidable" proposition isn't universally undecidable. It's only undecidable within the context of a specific formal system. And it's trivially easy to create a formal system that can decide that undecidable proposition: you just add it to the list of axioms for your new system. As you said, you just arbitrarily decide that the proposition is true. (Or false, if you prefer.) Quite easily done with most AI systems; a program is itself just data, after all.

The pitfall is exactly the same as when your supposedly superior human mind makes an arbitrary assumption: you may well create other inconsistencies and problems you'll discover later on. Maybe those bother you more than the original undecidable problem, or maybe not. The minds of humans are riddled with inconsistencies and contradictions, yet they mostly struggle on anyway -- as do buggy computer programs.

AIs aren't going to be subject to the Captain Kirk attack just because they're programs. You're not going to confound them simply by asking them to calculate pi to the last digit or by throwing the Epimenides paradox at them. Much like humans, they'll just inspect the problem and say "huh, that looks like a paradox" or "gee, that'll take longer than I want to waste on it."

Much like humans, that ability doesn't mean every possible problem becomes decidable. Just because you can decide a problem that isn't decidable in some other system doesn't mean you've reached a higher level of cognition. It just means you're a slightly different formal system -- with undecidable problems of your own, not necessarily the same ones as other formal systems', nor ones arranged in a nested hierarchy.
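The "add it as an axiom" move can be shown with a deliberately tiny model. This is a toy, not a real proof system: the proposition names and the modus-ponens-only "logic" are illustrative assumptions.

```python
def closure(axioms, rules):
    """Everything derivable from `axioms` using implication `rules`
    (a list of (premise, conclusion) pairs), by repeated modus ponens."""
    known = set(axioms)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

axioms = {"A"}
rules = [("A", "B"), ("G", "H")]

theorems = closure(axioms, rules)
# "G" is undecidable in this system: neither "G" nor "not G" is derivable.
undecidable = "G" not in theorems and "not G" not in theorems

# New system with "G" adopted as an axiom: "G" (and hence "H") is now decided.
extended = closure(axioms | {"G"}, rules)
```

The extended system settles "G" by fiat, exactly as described above, and it inherits the same structure, so it necessarily has unreachable propositions of its own.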