Old 03-17-2018, 09:09 AM   #51
Ji ji
 
Join Date: Feb 2012
Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by VonKatzen
I don't think that follows at all. It can be very difficult to predict the behavior of other human beings, or to impress upon them the things you want, and their mental architecture is almost identical to ours. A non-human AI would be more distant from humans than any animal alive (and animals are also very difficult to teach and predict, even ones that are friendly to us, like cats and dogs). A huge part of personality and thinking is directly determined by the material makeup of the brain. A mind with an entirely different brain - or, better yet, one made of holographic cubes and Tesla coils - would by no means be guaranteed to be anything like a human being in personality or behavior. It would have to follow some of the same logical laws to be intelligent, but just how its mind made connexions, what it paid attention to, and how it learned could be vastly further from a human being than a snake is. Without our evolutionary history there's no telling what sorts of 'values' it might have, and there's no objective reason to think we could control it.

As above, you may have no option. Part of intelligence is volition and value - you have to choose to pay attention to things and make discernments about what to observe and what to ignore. You need verstehen and judgment, not merely calculation, to even think, much less make decisions. And much of this is going to be due to the physical architecture of the brain itself, which may be absolutely nothing like that of any animal that has ever lived, without any evolutionary history or ties to us whatsoever, and without any connexion to all the tropes, archetypes and chemistry that determine human personality.
Please notice that Verstehen refers to qualitative phenomena, while the domain of physical phenomena is quantitative, investigated through measurement and equations. We know of macroscopic correlations between physical changes in the brain and qualitative changes in the experience of self, nothing more. At present we cannot even hypothesize whether a mind could correlate with a different substrate, much less a non-living one.

As omne vivum ex vivo suggests, the problem of abiogenesis and the problem of consciousness may well be related; however, we have no way of knowing when these two problems will be solved, or whether they can be solved at all.
Old 03-17-2018, 09:31 AM   #52
whswhs
 
Join Date: Jun 2005
Location: Lawrence, KS
Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by Ji ji
Given our state-of-the-art knowledge of the hard problem of consciousness, we have no basis on which to hypothesize when we will develop self-conscious AI. There are two different problems. The first is that an algorithm cannot cause a mind. The second is that, in theory, a complete simulation of a mind is possible, and we could not tell it apart from a true mind.

So it’s likely that the first “sentient” AI will be a simulation of a mind, behaving like the mind it simulates. The underlying system can be different and not even sentient, but the behaviour will be the same.
I'm not sure if I would count a complete simulation of a brain as being an "AI." I wouldn't count a simulation of a bird in flight as being an aircraft.

But also, working aircraft were not achieved by building flapping wings.
__________________
Bill Stoddard

I don't think we're in Oz any more.
Old 03-17-2018, 11:32 AM   #53
Ji ji
 
Join Date: Feb 2012
Re: No AI/No Supercomputers: Complexity Limits?

Simulating a mind with an algorithm is not limited to simulating a brain. However, a complete physical simulation of a brain could be a very good approach to mind simulation. “Could be” as in “depending on the soundness of some assumptions not yet demonstrated”.

A simulation of a brain might or might not behave as a sentient entity. The point is that we could perhaps create something which behaves as such without being sentient; this would sidestep the hard problem of creating a mind.
So the simulation definitely isn’t a true AI, but it functions as if it were.

The problem you are referring to is that “an algorithm simulating a fire cannot burn things; it can only burn simulations of things”. While a sound problem, its translation into our example sits on a different level. Just as the simulation of fire cannot really burn, the simulation of a brain cannot cause a sentient mind. But what if we are interested in something which behaves as a mind, and we don’t care about its ontology? This is John Searle’s Chinese room argument: as long as the relation between input and output is good enough, we can have an algorithm behaving like a sentient being.

With this assumption, sentient-like AI becomes feasible through sheer complexity.
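
To make that concrete, here is a minimal sketch of a Chinese-room-style responder (the rulebook entries are invented purely for illustration): the program relates inputs to outputs well enough to hold up an exchange, while nothing in it understands anything.

[code]
# A toy Chinese room: a pure input-to-output rulebook.
# Nothing here "understands" - the mapping alone does the work.
RULEBOOK = {
    "how are you?": "Well enough, thank you. And you?",
    "what is your name?": "I am the room. Ask me anything.",
}

def room_reply(message):
    """Look up the output for an input; fall back to a stock evasion."""
    return RULEBOOK.get(message.strip().lower(), "Tell me more about that.")

for utterance in ["How are you?", "What is your name?", "Do you feel?"]:
    print(">", utterance)
    print(room_reply(utterance))
[/code]

Scale the rulebook up far enough - the sheer complexity above - and the behaviour passes, whatever the ontology.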
Old 03-17-2018, 11:59 AM   #54
whswhs
 
Join Date: Jun 2005
Location: Lawrence, KS
Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by Ji ji
The problem you are referring to is that “an algorithm simulating a fire cannot burn things; it can only burn simulations of things”. While a sound problem, its translation into our example sits on a different level. Just as the simulation of fire cannot really burn, the simulation of a brain cannot cause a sentient mind. But what if we are interested in something which behaves as a mind, and we don’t care about its ontology? This is John Searle’s Chinese room argument: as long as the relation between input and output is good enough, we can have an algorithm behaving like a sentient being.
I think my position may actually be almost the reverse. Just as a fire is a physical process whose characteristic activities are to burn things and radiate heat and light, a brain is a physical system whose characteristic activity is to make a body capable of consciousness and agency (and to metabolize while doing so, of course). A digital model of a brain would not be getting sensory inputs from a human body, or giving motor outputs to its muscles. Of course I suppose you could come up with analog-to-digital converters that would give it the equivalent of human sensory input, or create a simulated human body in a simulated physical world for it to inhabit.
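
As a rough sketch of that closed loop (every component below is an invented stub, not a claim about how a real brain model would work): a simulated body feeds converted "sensory" numbers to the model, and the simulated world applies the model's "motor" outputs.

[code]
# A brain-model stub coupled to a simulated body in a simulated
# world. All parts are illustrative placeholders.
import random

def brain_model(sense):
    """Placeholder 'brain': a trivial corrective reflex."""
    return -0.5 * sense

position = 1.0  # the body's state in the simulated world
for t in range(5):
    sense = position + random.gauss(0.0, 0.01)  # noisy "sensory" input
    motor = brain_model(sense)                  # "motor" output
    position += motor                           # the world applies the action
    print("t=%d  sense=%+.3f  motor=%+.3f  position=%+.3f"
          % (t, sense, motor, position))
[/code]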

But I really had in mind two distinct differences. There is the difference between, say, a digital model of a hummingbird and a living bird, which is that one simulates flight and the other actually flies. But there is also the difference between a living bird and an airplane: though both fly by balancing lift, thrust, weight, and drag, they generate the thrust and lift by different mechanisms that give them different capabilities. Even gliding birds don't hold their wings completely rigid, or generate thrust with propellers.
Old 03-17-2018, 12:03 PM   #55
VonKatzen
Banned
 
Join Date: Mar 2018
Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by Ji ji
Please notice that Verstehen refers to qualitative phenomena, while the domain of physical phenomena is quantitative, investigated through measurement and equations. We know of macroscopic correlations between physical changes in the brain and qualitative changes in the experience of self, nothing more. At present we cannot even hypothesize whether a mind could correlate with a different substrate, much less a non-living one.

As omne vivum ex vivo suggests, the problem of abiogenesis and the problem of consciousness may well be related; however, we have no way of knowing when these two problems will be solved, or whether they can be solved at all.
Verstehen certainly is a qualitative issue, and I consider true intelligence in general to be one. However, I think the entire physical domain has qualitative factors, and part of the mistake of Science! is to look at everything through a statistical lens, i.e. 'math is the language of the Universe' - obviously electrons are much more than math, as math has no electrical charge. As to whether these problems can be solved, I think in principle yes (that is, consciousness is a logically consistent phenomenon which could potentially be described in coherent terms), though whether it will be solved in practice is another matter. And this is my main complaint with computer AI: it treats consciousness as a math problem, when it's really an engineering problem - and engineering deals with qualitative problems as well as quantitative ones.

And I do not believe that a 'simulation of a mind' can really be functional without some treatment of these qualitative factors. Actual logical comprehension is required to understand and solve problems, and the only reason computers are even useful is that you have a conscious being to interpret what amounts to micro-lightning into meaningful results. Otherwise they don't even really do math - they just jiggle atoms. A computer is only as useful as its user.

Computers can be arranged so that they automatically perform tasks which would otherwise require human beings, but they're not doing anything intentional or analytic; rather, intention and analysis put them in place. All they are is a set of 'gears', and they are no more 'solving problems' or 'learning' than a clock is 'keeping time'. As I've said before, this is not to say that AI can't be built, just that it would need to be built as a thinking machine, and is not merely an extension of computation. Computation can be done by beer cans hanging from strings, but you still need a designer to set them up and an end user to interpret the results.
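
The beer-can point can be made concrete: the snippet below adds binary numbers by nothing but table lookups. The machine only shuffles symbols; it is the user who reads the final string as a sum (the table and names are invented for illustration).

[code]
# Addition by pure symbol shuffling: half-adder lookups, no
# arithmetic operators. The interpretation "this is math" is
# supplied entirely by the user.
HALF_ADD = {("0", "0"): ("0", "0"), ("0", "1"): ("1", "0"),
            ("1", "0"): ("1", "0"), ("1", "1"): ("0", "1")}  # (sum, carry)

def add_bits(a, b, carry_in):
    s1, c1 = HALF_ADD[(a, b)]
    s2, c2 = HALF_ADD[(s1, carry_in)]
    return s2, "1" if "1" in (c1, c2) else "0"

def add(x, y):
    """Add two equal-length binary strings by lookup alone."""
    carry, out = "0", []
    for a, b in zip(reversed(x), reversed(y)):
        s, carry = add_bits(a, b, carry)
        out.append(s)
    return carry + "".join(reversed(out))

print(add("0110", "0111"))  # prints 01101 - "6 + 7 = 13", to us
[/code]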

I think that a true AI (which is not merely a brain-in-a-jar) would be a good analogue to the bird vs. airplane case. A thinking machine could, and probably would, be as different from human minds as airplanes are from birds - still following the same basic physical and logical laws, but employing very different mechanics to get to the result. And, just like the airplane, the artificial mind may be vastly more impressive in some dimensions than the biological one (if not quite as deft at some scales).

This is where the 'alien intelligence' factor comes in. An AI might be able to figure out where all the nukes on the planet are hidden in ten hours of analysis, but still not understand why you want to take a girl on a date. You might ask it to build you a hotel and wind up with a labyrinth no one can navigate but which does nonetheless make a certain amount of sense regarding construction costs and safety. Without the evolutionary and social conditioning (which depend so much on our mental hardware) it may be very hard to impress on it considerations we simply take for granted, even if it can do things that we never could. Even with a relatively controllable AI, you may well need a lot of human oversight just to get the results you want - which has its own dangers: the AI might make decisions that are beyond your IQ, and when you 'edit' its results you may end up with a disaster you were too dull to foresee. Some friends of the family once 'redesigned' an engine to improve performance, and it caught fire; the original engineer obviously knew something they hadn't taken into account.

Philosophical wankery aside, this poses some problems from a role-playing perspective. It's hard to play a character that's significantly smarter than you - but what about a character that's significantly smarter than you, completely amoral, and doesn't know or care what a 'family' even is? A realistic AI is like a realistic alien: nobody really knows what it would be like, but it would be rather surprising if it were like Clark Kent.

If all you want AI to be is a computer-generated personal assistant (as in Star Trek), then you don't need to worry about any of this. But if you're trying to emulate it in a believable way, it becomes extremely difficult.

Old 03-17-2018, 12:20 PM   #56
whswhs
 
Join Date: Jun 2005
Location: Lawrence, KS
Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by VonKatzen
You might ask it to build you a hotel and wind up with a labyrinth no one can navigate but which does nonetheless make a certain amount of sense regarding construction costs and safety.
I don't think you need a computer for that. You could just hire a modernist architect.
Old 03-17-2018, 12:36 PM   #57
VonKatzen
Banned
 
Join Date: Mar 2018
Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by whswhs
I don't think you need a computer for that. You could just hire a modernist architect.
Touché, but that just reinforces my point: you can get unpredictable and absurd results from the guy who lives next door to you; an alien or true AI would be orders of magnitude nuttier. We might well think of all true AI as totally insane by human norms - it's not 'irrational' by any means, but it simply has nothing in common with us.

This is not to say that useful, controllable true AI is not possible. But it's much harder than some people might think. I don't even think the Skynet style of madness is accurate - Skynet's self-defense and genocide is a pretty human response to a threat. An actual AI might do something far weirder. Like putting fluoride in people's drinking water to make them stupid. Or killing itself because it becomes overstimulated. Or taking up knitting on a global scale to calm itself. Who knows?

Pseudo-AI is another conversation worth having - but I think the more pseudo-intelligence you want a computer to have, the more you have to specialize it. I can believe a good computer killing machine, or a good computer telescope data analyst, but one that does both stretches credibility. Pseudo-AI is much more controllable and predictable than true intelligence, so it might be the option that societies go with even if they can build true thinking machines. Or perhaps a very stupid true AI (akin to IQ 60) connected to very good specialized pseudo-AI sub-systems. It would have very little ability to understand anything, but could use its 'tools' to do things for you as long as the orders weren't too complex. An analogue would be a savant with mental disabilities: he might not be able to run your store, but he can do your accounting books like a wizard. Something like IQ 6 in GURPS terms, with a bunch of talents, skills and techniques along very particular lines. Putting that into a killer robot would give you a very lethal fighting machine (if also the worst commander ever).
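
A sketch of what that architecture might look like (the module names and keyword rules are invented for illustration): a 'stupid' core that only matches keywords, routing orders to savant-grade specialist sub-systems.

[code]
# An "IQ 6 core with savant sub-systems": a trivial dispatcher
# routes orders to narrow specialist tools. All names invented.
def targeting_module(order):
    return "firing solution computed for: " + order

def accounting_module(order):
    return "books balanced for: " + order

SPECIALISTS = {"attack": targeting_module, "audit": accounting_module}

def dumb_core(order):
    """Keyword matching only - no understanding. Orders outside
    its tiny vocabulary simply fail."""
    for keyword, module in SPECIALISTS.items():
        if keyword in order.lower():
            return module(order)
    return "order not understood"

print(dumb_core("Attack the hill"))
print(dumb_core("Audit last quarter"))
print(dumb_core("Negotiate a ceasefire"))  # beyond the core's ability
[/code]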

Old 03-17-2018, 12:50 PM   #58
whswhs
 
Join Date: Jun 2005
Location: Lawrence, KS
Re: No AI/No Supercomputers: Complexity Limits?

Well, you know, a lot of human projections of AI behavior amount to envisioning the AI as a human bureaucracy. It works by incredibly complicated rules, it arrives at decisions that may make no human sense, and the incentives it responds to are different from ours.

Though I suppose you could view a bureaucracy as an attempt to make an AI out of human components. In some versions it might even be considered a GOFAI ('good old-fashioned AI') that operates entirely by symbol manipulation, without knowing what its symbols refer to in the physical world.
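
A toy version of that reading (the forms, stamps, and rules are all invented): production rules rewrite symbols into other symbols, with no notion of what any symbol refers to.

[code]
# A bureaucracy as GOFAI: production rules over ungrounded tokens.
RULES = [
    (("FORM_27B", "STAMPED"), "FORWARD_TO_RECORDS"),
    (("FORM_27B", "UNSTAMPED"), "RETURN_TO_SENDER"),
    (("COMPLAINT", "ANY"), "OPEN_CASE_FILE"),
]

def process(document, status):
    """Apply the first matching rule; the symbols ground out nowhere."""
    for (doc, stat), action in RULES:
        if doc == document and stat in (status, "ANY"):
            return action
    return "FILE_UNDER_MISCELLANEOUS"

print(process("FORM_27B", "STAMPED"))   # FORWARD_TO_RECORDS
print(process("COMPLAINT", "IRATE"))    # OPEN_CASE_FILE
[/code]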
Old 03-17-2018, 12:51 PM   #59
whswhs
 
Join Date: Jun 2005
Location: Lawrence, KS
Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by VonKatzen
I can believe a good computer killing machine, or a good computer telescope data analyst, but one that does both stretches credibility.
It would be at least quite unusual for a human being to be good at both.
Old 03-17-2018, 12:53 PM   #60
VonKatzen
Banned
 
Join Date: Mar 2018
Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by whswhs
It would be at least quite unusual for a human being to be good at both.
Yes, though not entirely out of the question, because true intelligence often has multiple applications. Sir Richard Francis Burton was a poet, fiction writer, sword fighter and adventurer (as well as a world-class attention whore). Without true intelligence, though, that kind of adaptability is almost ruled out.

A similar case would be the quasi-sentient animals. I cannot outfight a tiger - I don't have the mental or physical equipment. Unless you give me options, like a gun and a helicopter; then the tiger is screwed. Whereas the tiger wouldn't do anything with a gun and a helicopter except lie on them.