Old 03-17-2018, 01:11 PM   #61
whswhs
 
Join Date: Jun 2005
Location: Lawrence, KS
Default Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by VonKatzen View Post
Yes, though not entirely out of the question, because true intelligence often has multiple applications. Sir Francis Burton was a poet, fiction writer, sword fighter and adventurer (as well as a world-class attention whore). Without true intelligence, though, that kind of adaptability is almost ruled out.
Is this the guy who translated A Thousand and One Nights? I had thought his given name was "Richard."
__________________
Bill Stoddard

I don't think we're in Oz any more.
whswhs is offline   Reply With Quote
Old 03-17-2018, 01:16 PM   #62
Anthony
 
Join Date: Feb 2005
Location: Berkeley, CA
Default Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by VonKatzen View Post
Yes, though not entirely out of the question, because true intelligence often has multiple applications. Sir Francis Burton was a poet, fiction writer, sword fighter and adventurer (as well as a world-class attention whore). Without true intelligence, though, that kind of adaptability is almost ruled out.
Actually, it would be a bunch easier on a computer, and AI would probably be a hindrance. Computers are way better at task switching than humans.
__________________
My GURPS site and Blog.
Anthony is online now   Reply With Quote
Old 03-17-2018, 03:32 PM   #63
VonKatzen
Banned
 
Join Date: Mar 2018
Default Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by whswhs View Post
Is this the guy who translated A Thousand and One Nights? I had thought his given name was "Richard."
Richard Francis Burton.
Quote:
Originally Posted by Anthony View Post
Actually, it would be a bunch easier on a computer, and AI would probably be a hindrance. Computers are way better at task switching than humans.
Multi-tasking is basically impossible for humans, but that may not be as true for an AI. You could possibly have linked mini-minds (a compartmentalized mind, in GURPS terms) that can split and coordinate tasks.
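As a rough sketch of how such splitting and coordination might work (Python assumed here; the compartment function, the chunking scheme, and the square-summing workload are purely illustrative):

[code]
# A coordinator splits a large task into chunks, hands each chunk to a
# separate "compartment" (a worker process), and merges the partial
# results - a toy version of linked mini-minds splitting and
# coordinating tasks.
from multiprocessing import Pool

def compartment(chunk):
    # stand-in for one mini-mind working on its share of the problem
    return sum(x * x for x in chunk)

def coordinate(task, compartments=4):
    # split the task, farm the pieces out, then combine the answers
    size = max(1, len(task) // compartments)
    chunks = [task[i:i + size] for i in range(0, len(task), size)]
    with Pool(processes=compartments) as pool:
        partial_results = pool.map(compartment, chunks)
    return sum(partial_results)

if __name__ == "__main__":
    print(coordinate(list(range(10000))))
[/code]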
VonKatzen is offline   Reply With Quote
Old 03-17-2018, 04:23 PM   #64
Ulzgoroth
 
Join Date: Jul 2008
Default Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by VonKatzen View Post
Pseudo-AI is another conversation worth having - but I think the more pseudo-intelligence you want a computer to have the more you have to specialize it. I can believe a good computer killing machine, or a good computer telescope data analyst, but one that does both stretches credibility.
This is nonsense. If you can do each, doing both is utterly trivial - you just have both programs and switch between them depending on which you currently need. (Or run them in parallel if they aren't competing for the same I/O channels and you've got the processor and memory capacity.)
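To make the point concrete, here is a minimal sketch (Python assumed; the two "programs" are placeholder functions standing in for the specialized software, not real targeting or astronomy code):

[code]
# Two specialized programs can either be switched between on demand or
# run in parallel when they don't contend for the same I/O channels.
from concurrent.futures import ThreadPoolExecutor

def targeting_routine(data):
    # placeholder for the specialized "killing machine" program
    return f"targeting solution for {data}"

def telescope_analysis(data):
    # placeholder for the specialized telescope data analyst
    return f"spectral analysis of {data}"

def run_switched(mode, data):
    # switch between the programs depending on which you currently need
    programs = {"targeting": targeting_routine, "astronomy": telescope_analysis}
    return programs[mode](data)

def run_parallel(a, b):
    # or run both at once, given the processor and memory capacity
    with ThreadPoolExecutor(max_workers=2) as pool:
        first = pool.submit(targeting_routine, a)
        second = pool.submit(telescope_analysis, b)
        return first.result(), second.result()

if __name__ == "__main__":
    print(run_switched("targeting", "contact alpha"))
    print(run_parallel("contact beta", "image frame 42"))
[/code]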
__________________
I don't know any 3e, so there is no chance that I am talking about 3e rules by accident.
Ulzgoroth is offline   Reply With Quote
Old 03-17-2018, 05:38 PM   #65
Ji ji
 
Ji ji's Avatar
 
Join Date: Feb 2012
Default Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by whswhs View Post
I think my position may actually be almost the reverse. Just as a fire is a physical process whose characteristic activities are to burn things and radiate heat and light, a brain is a physical system whose characteristic activity is to make a body capable of consciousness and agency (and to metabolize while doing so, of course). A digital model of a brain would not be getting sensory inputs from a human body, or giving motor outputs to its muscles. Of course I suppose you could come up with analog-to-digital converters that would give it the equivalent of human sensory input, or create a simulated human body in a simulated physical world for it to inhabit.

But I really had in mind two distinct differences. There is the difference between, say, a digital model of a hummingbird and a living bird, which is that one simulates flight and the other actually flies. But there is also the difference between a living bird and an airplane, which is that though both fly a balance of lift, thrust, weight, and drag, they generate the thrust and lift by different mechanisms that give them different capabilities. Even gliding birds don't hold their wings completely rigid, or generate thrust with propellers.
Then I ask you: how would you define a system that has sound enough input-output behaviour and an interface appropriate to the environment? I am thinking of Joi again. She doesn't pretend to be a real person; she is even "aware" that she is an AI - that is, the algorithm processes this information as a sentient being would - and of course she doesn't have a body. Yet her behaviour perfectly matches a human being's, and she can interact with real human beings through the interface.
We could even use a speaker as the only interface. We do it every day with phones, and I never wonder whether I am speaking with a person through the phone or with a simulation outputting through the phone speaker.

Quote:
Originally Posted by VonKatzen View Post
Verstehen certainly is a qualitative issue, which I consider true intelligence in general to be. However, I think that the entire physical domain has qualitative factors, and part of the mistake of Science! is to look at everything through a statistical lens, i.e. 'math is the language of the Universe' - obviously electrons are much more than math, as math has no electrical charge. As to whether they can be solved, I think in principle yes (that is, it is a logically consistent phenomenon which could potentially be described in coherent terms), though whether it will be in practice is another matter. And this is my main complaint with computer AI: it treats consciousness as a math problem, when it's really an engineering problem - and engineering deals with qualitative problems as well as quantitative ones.

And I do not believe that a 'simulation of a mind' can really be functional without some treatment of these qualitative factors. Actual logical comprehension is required to understand and solve problems, and the only reason computers are even useful is because you have a conscious being to interpret what amounts to micro-lightning into meaningful results. Otherwise they don't even really do math - they just jiggle atoms. A computer is only as useful as its user.

Computers can be arranged so that they automatically perform tasks which human beings would be required to otherwise, but they're not doing anything intentional or analytic. Rather the intention and analysis puts them in place. All they are is a set of 'gears', and are no more 'solving problems' or 'learning' than a clock is 'keeping time'. As I've said before, this is not to say that AI can't be built, just that it needs to be built as a thinking-machine and is not merely an extension of computation. Computation can be done by beer cans hanging from strings, but you still need a designer to set them up and an end user to interpret the results.

I think that a true AI (which is not merely a brain-in-a-jar) would be a good analogue to the bird v. airplane. A thinking machine could and probably would be as different from human minds as birds are from airplanes - still following the same basic physical and logical laws, but employing very different mechanics to get to the result. And, just like the airplane, the artificial AI may be vastly more impressive in some dimensions than the biological mind (if not quite as deft in some scales). This is where the 'alien intelligence' factor comes in. An AI might be able to figure out where all the nukes on the planet are hidden in ten hours of analysis, but still not understand why you want to take a girl on a date. You might ask it to build you a hotel and wind up with a labyrinth no one can navigate but which does nonetheless make a certain amount of sense regarding construction costs and safety. Without the evolutionary and social conditioning (which depend so much on our mental hardware) it may be very hard to impress on it considerations we simply take for granted, even if it can do things that we never could. Assuming we had a relatively controllable AI you may well require a lot of human oversight still just to get the results you want - which has its own dangers: the AI might make decisions that are beyond your IQ, and when you 'edit' its results you end up with a disaster you're too dull to have seen. Some friends of the family 'redesigned' an engine to improve performance and it caught fire. The original engineer obviously knew something they hadn't even taken into account.

Philosophical wankery aside, this poses some problems from a role-playing perspective. It's hard to play a character that's significantly smarter than you, but what about a character that's significantly smarter than you, completely amoral, and doesn't know or care what a 'family' even is? Realistic AI is like a realistic alien: nobody really knows what it would be like, but it would be rather surprising if it was like Clark Kent.

If all you want AI to be is a computer-generated personal assistant (like Star Trek) then you don't need to worry about it. But if you're trying to emulate it in a believable way it becomes extremely difficult.
These are very good arguments. Of course, science as the collective effort to understand phenomena is a different concept from the scientific method. There is a lot of confusion on the topic; sorry if I contributed to it. I was referring to the scientific method when I talked about the dominion of science.

So I can say: we certainly can study qualitative phenomena. Yet brain physiology and functioning are currently approached with the scientific method, by means of mathematical relations between measurements. We need further steps in epistemology before we'll be able to directly relate brain and mind, because - following John Lucas's argument - our mind is capable of actions that cannot be reduced (in the technical sense) to computation.

I agree that we could discover, in our future, some physical aspect of brain functioning that cannot be reduced to computation. I am thinking of intra-neuron processing, relations between quantum mechanics and chirality, and so on. Still, we can at least hypothesize that it's possible to create a "good enough" simulation of a sentient mind by means of computation. We don't really know; we have to choose in our sci-fi speculations. The idea of a perfect simulation of a mind has a sound logic, and the whole argument is very appealing: an artificial mind, behaving as a true person, while no sentient being knows whether it's alive or just seems so.

By the way it’s the main theme of an old GURPS adventure: “Loving the Deads”.

Last edited by Ji ji; 03-17-2018 at 05:56 PM.
Ji ji is offline   Reply With Quote
Old 03-17-2018, 06:05 PM   #66
whswhs
 
Join Date: Jun 2005
Location: Lawrence, KS
Default Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by Ji ji View Post
Then I ask you: how would you define a system that has sound enough input-output behaviour and an interface appropriate to the environment? I am thinking of Joi again. She doesn't pretend to be a real person; she is even "aware" that she is an AI - that is, the algorithm processes this information as a sentient being would - and of course she doesn't have a body. Yet her behaviour perfectly matches a human being's, and she can interact with real human beings through the interface.
We could even use a speaker as the only interface. We do it every day with phones, and I never wonder whether I am speaking with a person through the phone or with a simulation outputting through the phone speaker. Again, the ontological issue of causality (or emergence, or supervenience, etc.) is about the algorithm causing a mind vs. simulating one.
I don't know the source material; I haven't seen the film in question. So I can't really give an informed answer.

My basic view is that the Searle Chinese room argument, and the Turing test, are invalid as statements of what it is to be "human." They seem to focus entirely on language. I don't think that typed or written or even spoken inputs and outputs are sufficient for humanity or for sapience.

Consider the Turing machine. Can I say to it, "How did you like the chicken masala we had for dinner?" or "doesn't that singer have a great voice?" or "beautiful weather, isn't it?" Can it comment on the color of my eyes, or whether the room is too hot or too cold? A human being, if not disabled in specific ways, would be able to perceive and think about a vast range of things of that sort, and discuss them, without having been prepared to do so in advance; the discussion would be based on their awareness of the world and of their existence in it, their embodiment, and their language would be a way of expressing and focusing that awareness. For example, C can say to me, "He wants your attention," and I can look around and see that our cat has come up to where I'm sitting and flopped onto his back on the floor next to me—and a purely symbol manipulating engine, even if it could notionally pass the Turing test or do the Searle trick, could not do such things. (Setting aside the idea that its inputs come from a simulated human body in a simulated physical world.)

In other words, as a human being, and more broadly a sapient one, I have intentionality: I can direct my awareness to physical entities, and use language to refer to them. My language is not just a self-contained system of symbols: It contains words that refer to the world, such as "I," "you," "they," "here," "there," "now," "then," and "thus," and such words can guide another person's attention to a common feature of the environment. This is a big part of the primary use of language, which is face to face communication. Exchanging messages over the Internet, or by teletype, as Turing imagined, is a specialized secondary use of language.

And I can tell that another person, or a nonhuman animal without speech, is conscious by seeing them move their body in a way that directs their attention to stimuli that provide interesting information. Which also points at the kind of things I would take as providing evidence that a machine was conscious.

I hope this is some help.
__________________
Bill Stoddard

I don't think we're in Oz any more.
whswhs is offline   Reply With Quote
Old 03-17-2018, 06:38 PM   #67
whswhs
 
Join Date: Jun 2005
Location: Lawrence, KS
Default Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by Ji ji View Post
So I can say: we totally can study qualitative phenomena. Yet, brain physiology and functioning are currently approached with scientific method, by means of mathematical relations between measures. We totally need further steps in epistemology before we’ll be able to directly relate brain and mind, because - following the argument of John Lucas - our mind is capable of actions that cannot be reduced (in the technical sense) to computation.
Well, on one hand, the brain does things that cannot be reduced to computation, too. It's an important source of chemical substances that influence human physiology and behavior, and it has intimate relations with the pituitary, which is the central performer in the human endocrine system.

It also outputs electrical signals to the muscles and such that don't take a binary on/off form. In fact no neurons do binary on/off; they do pulses with different frequencies. Of course you can model this mathematically, but then you can model the stresses and flows in the Earth's crust mathematically, and that doesn't mean that the Earth is a huge computer; it means that mathematics is science's characteristic tool.
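As a toy illustration of that rate-coding point (Python assumed; the sigmoid gain function and its parameters are arbitrary stand-ins, not real neuron physiology), one can model a neuron's output as a continuous firing rate rather than a binary value:

[code]
# Map an input signal to a firing rate in spikes per second: the output
# varies continuously with the input instead of being simply on or off.
import math

def firing_rate(input_current, max_rate=100.0, threshold=1.0, gain=2.0):
    # saturating (sigmoid) response: weak inputs give a low rate,
    # strong inputs approach the maximum rate
    return max_rate / (1.0 + math.exp(-gain * (input_current - threshold)))

for current in (0.0, 0.5, 1.0, 2.0, 4.0):
    print(f"input {current:4.1f} -> {firing_rate(current):6.1f} Hz")
[/code]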

I also think that talking about the relationship between the mind and the brain is a misleading phrasing. It's like talking about the relationship between the legs and running, as if running were a separate entity that somehow entered into an interaction with the legs.
__________________
Bill Stoddard

I don't think we're in Oz any more.
whswhs is offline   Reply With Quote
Old 03-17-2018, 06:38 PM   #68
Anthony
 
Join Date: Feb 2005
Location: Berkeley, CA
Default Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by whswhs View Post
My basic view is that the Searle Chinese room argument, and the Turing test, are invalid as statements of what it is to be "human." Consider the Turing machine. Can I say to it, "How did you like the chicken masala we had for dinner?" or "doesn't that singer have a great voice?" or "beautiful weather, isn't it?" Can it comment on the color of my eyes, or whether the room is too hot or too cold?
Assuming you mean Turing test rather than Turing machine, you can't ask the remote end of the test those questions (whether it's a machine or a human), because you aren't in the same location and those questions depend on a shared location (though it would be legitimate to ask about a video of the same things). In a broader sense, the Turing Test is basically a claim that acting intelligent and being intelligent are the same thing.
__________________
My GURPS site and Blog.
Anthony is online now   Reply With Quote
Old 03-17-2018, 06:45 PM   #69
Ulzgoroth
 
Join Date: Jul 2008
Default Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by whswhs View Post
Consider the Turing machine. Can I say to it, "How did you like the chicken masala we had for dinner?" or "doesn't that singer have a great voice?" or "beautiful weather, isn't it?" Can it comment on the color of my eyes, or whether the room is too hot or too cold? A human being, if not disabled in specific ways, would be able to perceive and think about a vast range of things of that sort, and discuss them, without having been prepared to do so in advance; the discussion would be based on their awareness of the world and of their existence in it, their embodiment, and their language would be a way of expressing and focusing that awareness. For example, C can say to me, "He wants your attention," and I can look around and see that our cat has come up to where I'm sitting and flopped onto his back on the floor next to me—and a purely symbol manipulating engine, even if it could notionally pass the Turing test or do the Searle trick, could not do such things. (Setting aside the idea that its inputs come from a simulated human body in a simulated physical world.)
The bolded part seems to be a seriously unfair interpretation against the machine. A human being has been prepared for all of those things in advance. They've gathered the information and, for relatively common questions like the ones you mention, have probably acquired some specific cognitive frameworks to apply. For the temperature, they've been prepared over their entire evolutionary history!

An 'AI' that has to be given advance preparation for the specific question you're going to ask out of that list would, of course, not be performing at the level you expect of a human. But having been given advance preparation for all of them and using the appropriate parts...that would only be fair.

(You also seem to be drawing a bit of a bead on physical senses, which seems a bit odd, since those aren't all that hard for computers. They're not as conveniently discretized as language, so they're less convenient fodder for certain types of philosophy, and some of the processing hasn't been solved yet, but a modern computer with the right peripherals could sense the temperature of the room or the color of your eyes just fine.)
__________________
I don't know any 3e, so there is no chance that I am talking about 3e rules by accident.
Ulzgoroth is offline   Reply With Quote
Old 03-17-2018, 07:21 PM   #70
whswhs
 
Join Date: Jun 2005
Location: Lawrence, KS
Default Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by Ulzgoroth View Post
You also seem to be drawing a bit of a bead on physical senses, which seems a bit odd since those aren't all that hard for computers. They're not as conveniently descretized as language so as to be convenient fodder for certain types of philosophy, and some of the processing hasn't been solved yet, but a modern computer with the right peripherals could sense the temperature of the room or the color of your eyes just fine.
Sure, and if you can set one up to do that, I'd be prepared to agree that it was sapient. But the man in the Chinese Room has no sensory access to anything but little pieces of paper. And Turing stipulated no capabilities for passing the Turing test other than being able to exchange messages by teletype. I'm not making any claims about what computers can or cannot do in principle; I'm only saying that if they don't have semantics—the ability to relate information to the physical world, or intentionality—but only syntax—the ability to manipulate symbol strings—then they aren't sapient, any more than a programmable calculator is when it tells me that 2^10=1024.
__________________
Bill Stoddard

I don't think we're in Oz any more.
whswhs is offline   Reply With Quote