Old 03-19-2018, 01:17 PM   #101
whswhs
 
Join Date: Jun 2005
Location: Lawrence, KS
Default Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by Ji ji View Post
A brain's characteristics can be translated into a formal system.
At least one mental capacity can't be translated into a formal system, since our mind can decide propositions that would pose a halting problem for a formal system.
I don't think that's valid.

In the first place, if the brain is a physical system, it is describable by various physical theories, and ultimately (so far as we know) by quantum mechanics. Quantum mechanics uses arithmetic, and arithmetic is a formal system, and in fact is the prototype of a formal system that contains undecidable propositions (I believe there are very simple formal systems for which such problems are not an issue). Therefore there are undecidable propositions about the brain.

But the concept of a "formal system" and that of a "computer" both ultimately derive from the effort to characterize what can be shown logically, by providing rigorous models of what a human logician is capable of. Therefore, to the best of our knowledge, anything that a human being can prove, or decide logically, can be decided by a formal system; if something can't be decided by a formal system, then a human being can't decide it logically.

My personal model (here we enter the realm of speculation) is that when a human being "decides" something, what they are doing is projecting a possible action, and its future consequences; assessing their desirability (or "utility," though I'm skeptical about the actual existence of utility); and then either accepting them as sufficiently desirable, or going back, choosing a different course of action, and doing the same process, iteratively. This is clearly a self-referential process, and therefore can give rise to the same kinds of paradox that Gödel dealt with. And in fact, there seem to be situations where people CANNOT make decisions, where in effect they oscillate back and forth between two options, finding each unacceptable (like the series 1, -1, 1, -1, ...), or even enter a divergent series of possible future outcomes. The phenomena of "free will," such as "I can't predict the future" and "I can always do the opposite of what you predicted I would do," seem to arise from this very property of self-referentiality, and from language making us capable of it.
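To make the speculation concrete, here is a minimal sketch of that iterative decision model in code. Everything here (the options, the evaluation function, the oscillation check) is invented for illustration; it only shows how a self-referential accept-or-reconsider loop can fail to terminate with a choice, like the 1, -1, 1, -1 oscillation described above.

```python
# Illustrative sketch of the iterative decision model described above.
# All names and values are invented for the example.

def decide(options, evaluate, max_rounds=100):
    """Repeatedly project each option's outcome and accept the first one
    judged good enough; give up if the deliberation starts to cycle."""
    seen = []
    current = 0
    for _ in range(max_rounds):
        if evaluate(options[current]) >= 0:   # "sufficiently desirable"
            return options[current]
        seen.append(current)
        current = (current + 1) % len(options)
        # detect oscillation between the same two candidates
        if len(seen) >= 4 and seen[-4:-2] == seen[-2:]:
            return None   # stuck alternating, like 1, -1, 1, -1, ...
    return None

# Two options that each look unacceptable: the loop oscillates and bails out.
print(decide(["stay", "go"], lambda o: -1))   # None
```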
__________________
Bill Stoddard

I don't think we're in Oz any more.
whswhs is online now   Reply With Quote
Old 03-19-2018, 01:32 PM   #102
Anthony
 
Join Date: Feb 2005
Location: Berkeley, CA
Default Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by Ji ji View Post
A brain's characteristics can be translated into a formal system.
At least one mental capacity can't be translated into a formal system, since our mind can decide propositions that would pose a halting problem for a formal system.
That's only a problem if the proposition is expressed within the same system as is doing the deciding.
__________________
My GURPS site and Blog.
Anthony is offline   Reply With Quote
Old 03-19-2018, 03:16 PM   #103
malloyd
 
Join Date: Jun 2006
Default Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by Ji ji View Post
A brain's characteristics can be translated into a formal system.
At least one mental capacity can't be translated into a formal system, since our mind can decide propositions that would pose a halting problem for a formal system.
I don't think that's true. Minds don't actually *solve* those sorts of problems, they simply stop working on them. I can write that behavior into a computer program perfectly well - pick a maximum amount of resources to be devoted to this problem before you start and stop when you solve it or they run out.
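That "stop when the budget runs out" behavior is easy to sketch. The names below are invented for the example; the point is only that the program as a whole always halts, even when the underlying computation never would.

```python
# A minimal sketch of the behavior described above: devote a fixed budget
# of steps to a problem, then stop whether or not it was solved.

def run_with_budget(step, state, is_done, max_steps=10_000):
    """Advance the computation until it finishes or the budget runs out.
    Returns ("solved", state) on success, or (None, state) on giving up."""
    for _ in range(max_steps):
        if is_done(state):
            return ("solved", state)
        state = step(state)
    return (None, state)   # no answer, but we still halted

# A "computation" that never finishes: we give up after the budget,
# so the program as a whole always halts.
result, _ = run_with_budget(lambda s: s + 1, 0, lambda s: False, max_steps=100)
print(result)   # None
```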
__________________
--
MA Lloyd
malloyd is online now   Reply With Quote
Old 03-19-2018, 05:32 PM   #104
Ji ji
 
Ji ji's Avatar
 
Join Date: Feb 2012
Default Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by Anthony View Post
That's only a problem if the proposition is expressed within the same system as is doing the deciding.
Yes. The only solution to the computability problem is an infinite tower of metalevels, and that is not viable with finite resources; that is, it is not possible to reduce the mind to a formal system.

Quote:
Originally Posted by malloyd View Post
I don't think that's true. Minds don't actually *solve* those sorts of problems, they simply stop working on them. I can write that behavior into a computer program perfectly well - pick a maximum amount of resources to be devoted to this problem before you start and stop when you solve it or they run out.
That’s a good try, but it doesn’t solve the problem. Once such a program is built, there is at least one input that brings no halt.

Quote:
Originally Posted by whswhs View Post
I don't think that's valid.

In the first place, if the brain is a physical system, it is describable by various physical theories, and ultimately (so far as we know) by quantum mechanics. Quantum mechanics uses arithmetic, and arithmetic is a formal system, and in fact is the prototype of a formal system that contains undecidable propositions (I believe there are very simple formal systems for which such problems are not an issue). Therefore there are undecidable propositions about the brain.

But the concept of a "formal system" and that of a "computer" both ultimately derive from the effort to characterize what can be shown logically, by providing rigorous models of what a human logician is capable of. Therefore, to the best of our knowledge, anything that a human being can prove, or decide logically, can be decided by a formal system; if something can't be decided by a formal system, then a human being can't decide it logically.

My personal model (here we enter the realm of speculation) is that when a human being "decides" something, what they are doing is projecting a possible action, and its future consequences; assessing their desirability (or "utility," though I'm skeptical about the actual existence of utility); and then either accepting them as sufficiently desirable, or going back, choosing a different course of action, and doing the same process, iteratively. This is clearly a self-referential process, and therefore can give rise to the same kinds of paradox that Gödel dealt with. And in fact, there seem to be situations where people CANNOT make decisions, where in effect they oscillate back and forth between two options, finding each unacceptable (like the series 1, -1, 1, -1, ...), or even enter a divergent series of possible future outcomes. The phenomena of "free will," such as "I can't predict the future" and "I can always do the opposite of what you predicted I would do," seem to arise from this very property of self-referentiality, and from language making us capable of it.
Actually we can decide that a proposition is true even if it is unprovable, something a formal system can’t do for itself but only for a subsystem. To be more precise: some formal systems can prove every theorem formalizable from their axioms, but such systems cannot axiomatize the whole of mathematics. When we design a set of axioms that can encompass the whole of mathematics (possibly by means of subsets of axioms), we get at least one undecidable proposition.

The mind can both encompass the whole of mathematics and at the same time decide the truth value of any unprovable proposition, so it cannot be reproduced by a formal system.

This makes your argument invalid.
Ji ji is offline   Reply With Quote
Old 03-19-2018, 05:42 PM   #105
Anthony
 
Join Date: Feb 2005
Location: Berkeley, CA
Default Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by Ji ji View Post
Yes. The only solution to the computability problem is an infinite tower of metalevels, and that is not viable with finite resources; that is, it is not possible to reduce the mind to a formal system.
Incorrect. You can make statements about a closed system from outside the system without any special requirements.
Quote:
Originally Posted by Ji ji View Post
That’s a good try, but it doesn’t solve the problem. Once such a program is built, there is at least one input that brings no halt.
No, there's at least one input bringing no answer. Not the same thing.
Quote:
Originally Posted by Ji ji View Post
The mind can both encompass the whole of mathematics and at the same time decide the truth value of any unprovable proposition, so it cannot be reproduced by a formal system.
The mind, being finite, cannot encompass a system that includes itself.
__________________
My GURPS site and Blog.
Anthony is offline   Reply With Quote
Old 03-19-2018, 05:51 PM   #106
whswhs
 
Join Date: Jun 2005
Location: Lawrence, KS
Default Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by Ji ji View Post
The mind can both encompass the whole of mathematics and at the same time decide the truth value of any unprovable proposition, so it cannot be reproduced by a formal system.

This makes your argument invalid.
You are ignoring my qualification "decide it logically."

And also, what do you mean by "the whole mathematics"? Back when I was working on GURPS Who's Who, I read the claim that John von Neumann was the last human being to understand the entirety of mathematics; and mathematics has grown exponentially since his death. So I'm not sure if there is any entity now existing that can do what you describe.
__________________
Bill Stoddard

I don't think we're in Oz any more.

Last edited by whswhs; 03-19-2018 at 06:09 PM.
whswhs is online now   Reply With Quote
Old 03-19-2018, 05:52 PM   #107
Ji ji
 
Ji ji's Avatar
 
Join Date: Feb 2012
Default Re: No AI/No Supercomputers: Complexity Limits?

@Anthony

1. Exactly. You need to be outside the system x, and of course that vantage point is itself another system, (x). The system (x) has the same problem, so you need a further system ((x)) to make statements about it. This recursion yields infinite metalevels, since you will always need yet another system.

2. That’s very interesting; I would be glad if you could elaborate (when you have the time and inclination).

3. Then perhaps we will never be able to create a mind. If we ever do, it will be through a revolution in science that we can’t yet imagine.
Ji ji is offline   Reply With Quote
Old 03-19-2018, 05:58 PM   #108
Anthony
 
Join Date: Feb 2005
Location: Berkeley, CA
Default Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by Ji ji View Post
@Anthony

1. Exactly. You need to be outside the system x, and of course that vantage point is itself another system, (x). The system (x) has the same problem, so you need a further system ((x)) to make statements about it. This recursion yields infinite metalevels, since you will always need yet another system.
Only if you care about the external levels being able to answer questions about themselves. There are plenty of questions a human cannot answer.
Quote:
Originally Posted by Ji ji View Post
3. Then perhaps we will never be able to create a mind. If we ever do, it will be through a revolution in science that we can’t yet imagine.
You're making the assumption that we need to be able to fully comprehend a mind to create one. We don't, as long as we aren't trying to contain it within a human's memory.
__________________
My GURPS site and Blog.
Anthony is offline   Reply With Quote
Old 03-19-2018, 06:04 PM   #109
Ji ji
 
Ji ji's Avatar
 
Join Date: Feb 2012
Default Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by whswhs View Post
You are ignoring my qualification "decide it logically."
Not really; sorry if I have been unclear. We can logically decide whether an undecidable proposition is true or false, something a formal system cannot do.

Of course by “formal system” I mean not only a specific formal axiomatic system, but any set of formal systems. We can add layers to solve the problem from outside the system, but the same problem recurs in the new metasystem, so we need further levels ad infinitum. To compact the levels, we could design a further system that generates (that is, contains) all the infinite levels, but then we would be back at the starting point: there would be an unprovable proposition, and we would again need further systems, to infinity.
Ji ji is offline   Reply With Quote
Old 03-19-2018, 06:05 PM   #110
Anaraxes
 
Join Date: Sep 2007
Default Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by Ji ji View Post
Actually we can decide that a proposition is true even if it is unprovable
But you can't do so logically, by proof. Certainly you can simply assume that a proposition is true, arbitrarily. But then, so could a formal system built to do so.

An "undecidable" proposition isn't universally undecidable. It's only undecidable within the context of a specific formal system. And it's trivially easy to create a formal system that can decide that undecidable problem: you just add it to the list of axioms for your new system. As you said, you just arbitrarily decide that the proposition is true. (Or false, if you prefer.) Quite easily done with most AI systems. A program is itself just data, after all.
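That "just add it as an axiom" move can be shown with a toy derivation engine. The facts and rules below are invented for the example; the point is only that a statement underivable in one system becomes trivially derivable once a new system adopts it as an axiom.

```python
# Toy illustration: a statement undecidable in one "system" is decidable
# in a new system that simply takes it as an axiom.

def derivable(goal, axioms, rules):
    """Forward-chain over Horn rules (premises, conclusion) until fixpoint."""
    known = set(axioms)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)
                changed = True
    return goal in known

axioms = {"A"}
rules = [(("A",), "B")]          # from A infer B

print(derivable("C", axioms, rules))            # False: C underivable here
print(derivable("C", axioms | {"C"}, rules))    # True: new system, C is an axiom
```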

The pitfall is exactly the same as if your supposedly superior human mind makes an arbitrary assumption. You quite possibly will create other inconsistencies and problems you'll discover later on. Maybe those bother you more than the original undecidable problem, or maybe not. The minds of humans are riddled with inconsistencies and contradictions, yet they mostly struggle on anyway -- as do buggy computer programs.

AIs aren't going to be subject to the Captain Kirk attack just because they're programs. You're not going to confound them simply by asking them to calculate pi to the last digit or throwing the Epimenides paradox at them. Much like humans, they'll just inspect the problem and say "huh, that looks like a paradox" or "gee, that'll take longer than I want to waste doing that".

Much like humans, that ability doesn't mean that every possible problem becomes decidable. Deciding a problem that's undecidable in some other system doesn't mean you've reached a higher level of cognition. It just means you're a slightly different formal system, with undecidable problems of your own, not necessarily the same ones as other formal systems, nor ones arranged in a nested hierarchy.
Anaraxes is online now   Reply With Quote