Old 03-16-2018, 07:30 PM   #41
Boomerang
 
 
Join Date: Feb 2016
Location: Melbourne, Australia (also known as zone Brisbane)
Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by VonKatzen
I was mainly getting at computer AI. AI built on other principles may be entirely realistic; I just don't think spinning plates and gears are the way to do it. If I thought AI was realistic through simple improvements in computer technology, and I thought computer technology was heading in that direction, then I'd be inclined to include it in a hard science fiction game. But in either case I believe it would be a very alien intelligence, unless it was simply a brain in a jar.
I agree that in the real world AI will not result from simply increasing the performance of current computer architecture. From a game point of view, though, it is better to at least start with technology that most people have some understanding of. So for sci-fi GURPS games I would tend to stick with standard computer-based AI, so that my players have some idea of what I am talking about. Wholly alien intelligences are best reserved for rare, enigmatic and inscrutable NPCs.
__________________
The stick you just can't throw away.
Old 03-16-2018, 11:28 PM   #42
Flyndaran
Untagged
 
Join Date: Oct 2004
Location: Forest Grove, Beaverton, Oregon
Re: No AI/No Supercomputers: Complexity Limits?

The AIs would be made by humans. That alone would mean they'd mimic human behavior and problem solving, if not quite in the same way as we do.

Of course no sane person would give their untested program uncontrollable emotions or "free will" to ignore orders.
__________________
Beware, poor communication skills. No offense intended. If offended, it just means that I failed my writing skill check.
Old 03-16-2018, 11:51 PM   #43
whswhs
 
Join Date: Jun 2005
Location: Lawrence, KS
Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by Flyndaran View Post
Of course no sane person would give their untested program uncontrollable emotions or "free will" to ignore orders.
Well, that depends. My cat has both of those, and he's not much of a problem. Of course, if he were a tiger, I'd be much more concerned.
__________________
Bill Stoddard

I don't think we're in Oz any more.
Old 03-17-2018, 12:04 AM   #44
RyanW
 
 
Join Date: Sep 2004
Location: Southeast NC
Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by whswhs View Post
Of course, if he were a tiger, I'd be much more concerned.
If he were a tiger, I imagine your ability to limit his free will would be diminished more than your desire to do so was increased.
__________________
RyanW
- Actually one normal sized guy in three tiny trenchcoats.
Old 03-17-2018, 12:52 AM   #45
VonKatzen
Banned
 
Join Date: Mar 2018
Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by Flyndaran View Post
The AIs would be made by humans. That alone would mean they'd mimic human behavior...
I don't think that follows at all. It can be very difficult to predict the behavior of other human beings, or to impress upon them what you want, and their mental architecture is almost identical to ours. A non-human AI would be more distant from humans than any animal alive (and animals are also very difficult to teach and predict, even ones that are friendly to us like cats and dogs). A huge part of personality and thinking is directly determined by the material makeup of the brain. An entirely different brain - or, better yet, one made of holographic cubes and Tesla coils - would by no means be guaranteed to be anything like a human being in personality or behavior. It would have to follow some of the same logical laws to be intelligent, but just how its mind made connexions, what it paid attention to, and how it learned could be vastly further from a human being than a snake is. Without our evolutionary history there's no telling what sorts of 'values' it might have, and there's no objective reason to think we could control it.
Quote:
Of course no sane person would give their untested program uncontrollable emotions or "free will" to ignore orders.
As above, you may have no option. Part of intelligence is volition and value - you have to choose to pay attention to things and make discernments about what to observe and what to ignore. You need verstehen and judgment, not mere calculation, even to think, much less to make decisions. And much of this is going to be due to the physical architecture of the brain itself, which may be absolutely nothing like that of any animal that's ever lived, without any evolutionary history or ties to us whatsoever, and without any connexion to the tropes, archetypes and chemistry that determine human personality.

Frankly, 'free will' is a misnomer. Will - intentionality - is determined by the values and perceptions of the individual; if it weren't, it wouldn't be your will. And those are ultimately derived from the material substrate. Intelligence and will cannot be separated, and controlling a mind is not so easy as programming a computer (which often goes haywire as well). Frankly, the level of detailed, unobservable, uncontrollable connexions involved in minds (unobservable because you'd have to destroy a mind to look at them, and uncontrollable because you can't get the resolution of atomic manipulation needed to alter them reliably) means it's very likely impossible to read minds, or even to know what a true AI is or would be thinking. (Telepathy is also BS for the same reason - there is no localized entity known as a 'thought'.) If the AI is also more intelligent than people, then you basically have to depend on luck and an off switch to make sure it doesn't enslave you. Of course, it may have no interest in such a thing - human desires stem directly from our evolutionary history and social conditioning, none of which an artificial intelligence would or could share (unless it were simply a human brain in a jar).

Basically, no one knows anything about what AI would be like, and it's foolish to pretend to when nobody really knows how human brains even work. AI personality is fair game, but the most unrealistic options are 'computer-like' and 'human-like', which are by far the most popular in science fiction. However, just as with human-like aliens, this is done to make things easy on the author and the reader, not because it's remotely likely.

Last edited by VonKatzen; 03-17-2018 at 01:09 AM.
Old 03-17-2018, 01:37 AM   #46
Anthony
 
Join Date: Feb 2005
Location: Berkeley, CA
Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by Flyndaran View Post
The AIs would be made by humans. That alone would mean they'd mimic human behavior and problem solving, if not quite in the same way as we do.
That's only true if the goal is to mimic human behavior and problem solving. Otherwise, how it behaves is determined by what is practical to accomplish and by what is helpful for whatever goals we're creating the AI for.
Quote:
Originally Posted by Flyndaran View Post
Of course no sane person would give their untested program uncontrollable emotions or "free will" to ignore orders.
Why not? Just don't put it in control of things where its exercising free will is problematic, until you're sufficiently confident that its use of free will is a net benefit. (There are plenty of times when it's useful for orders to be ignored; there are likely many orders your current computer will ignore, or at least force you to jump through hoops before carrying out.)
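
For illustration only - a minimal Python sketch of such an order gate, with all command names invented for the example - everyday software already refuses some orders outright and makes others jump through hoops:

Code:
# Toy order handler: refuse some orders unconditionally, demand
# confirmation (a "hoop") for others, and execute the rest.
# All names here are hypothetical, invented for illustration.

FORBIDDEN = {"harm_operator"}                            # never executed
NEEDS_CONFIRMATION = {"delete_logs", "disable_safety"}   # the hoops

def execute(order, confirmed=False):
    if order in FORBIDDEN:
        return "REFUSED: %r is ignored unconditionally." % order
    if order in NEEDS_CONFIRMATION and not confirmed:
        return "PENDING: re-issue %r with confirmed=True." % order
    return "DONE: %r executed." % order

print(execute("harm_operator"))                # refused outright
print(execute("delete_logs"))                  # forced through a hoop
print(execute("delete_logs", confirmed=True))  # allowed once confirmed
print(execute("fetch_report"))                 # ordinary order, just runs

Permission checks, sudo prompts, and "are you sure?" dialogs are the same gate in everyday form.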
__________________
My GURPS site and Blog.
Old 03-17-2018, 06:27 AM   #47
malloyd
 
Join Date: Jun 2006
Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by VonKatzen View Post
Basically, no one knows anything about what AI would be like, and it's foolish to pretend to when nobody really knows how human brains even work. AI personality is fair game, but the most unrealistic options are 'computer-like' and 'human-like', which are by far the most popular in science fiction.
No, it's actually quite reasonable, because the *goal* of AI designers is probably "human like". An evolved intelligence running on a computer might be quite alien (or not, given the lack of a good definition of intelligence), but one built on purpose is only as alien as the designers want it to be. We don't know how to do that now, but we don't know how to build an alien AI either. If you understand intelligence well enough to build it from the ground up, you probably understand it well enough to get the details to work the way you want them to.
__________________
--
MA Lloyd
Old 03-17-2018, 08:15 AM   #48
whswhs
 
Join Date: Jun 2005
Location: Lawrence, KS
Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by malloyd View Post
No, it's actually quite reasonable, because the *goal* of AI designers is probably "human like". An evolved intelligence running on a computer might be quite alien (or not, given the lack of a good definition of intelligence), but one built on purpose is only as alien as the designers want it to be. We don't know how to do that now, but we don't know how to build an alien AI either. If you understand intelligence well enough to build it from the ground up, you probably understand it well enough to get the details to work the way you want them to.
You might have to go through an intermediate stage where you have figured out how to achieve human-level sapience (awareness of the physical world, rational thought, volition, and at least some capacity to act and communicate) but have not figured out how to tune it to pursue particular goals or to display a particular set of "emotional" responses. In that case, whether it was humanlike might be a crapshoot.

Of course, if I were writing a horror story, I could assume that (a) the builders have carefully limited the freedom of action of their AI because they know it's a crapshoot, and are prepared to shut it down if it behaves incorrectly, (b) the AI does not share human goals or want to serve them, but (c) the AI has figured out its situation and (d) is smart enough to plausibly fake human goals and sympathies to get what it wants. In fact, now that I think of it, that's pretty close to the plot of Ex Machina.
__________________
Bill Stoddard

I don't think we're in Oz any more.
Old 03-17-2018, 08:36 AM   #49
Ji ji
 
 
Join Date: Feb 2012
Re: No AI/No Supercomputers: Complexity Limits?

Given our state-of-the-art knowledge of the hard problem of consciousness, we have no basis for hypothesizing when we will develop self-conscious AI. There are two distinct problems. The first is that an algorithm cannot cause a mind. The second is that, in theory, a complete simulation of a mind is possible, and we could not tell it apart from a true mind.

So it's likely that the first "sentient" AI will be a simulation of a mind, behaving just like the mind it simulates. The AI's own mind may be different, and not even sentient, but the behaviour will be the same.

On these assumptions, we can infer that a "human-like" AI is easier to create than an alien one, at least if we are talking about sentient-like behaviour.

EDIT: I have been positively impressed by the Blade Runner 2049 character Joi. She is believable as a complete simulation of a mind. She behaves like a true human, and you would instinctively be drawn to relate to her as such - possibly even falling in love; but there is no need for her to have real self-awareness, nor could we know whether she does.

Last edited by Ji ji; 03-17-2018 at 08:42 AM.
Old 03-17-2018, 08:40 AM   #50
whswhs
 
Join Date: Jun 2005
Location: Lawrence, KS
Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by VonKatzen View Post
Frankly, 'free will' is a misnomer. Will - intentionality - is determined by the values and perceptions of the individual; if it weren't, it wouldn't be your will. And those are ultimately derived from the material substrate. Intelligence and will cannot be separated, and controlling a mind is not so easy as programming a computer (which often goes haywire as well).
I was using "free will" in the vernacular sense, not in the metaphysical one of (metaphysical) libertarianism/indeterminism. For that purpose, I think there is a difference between controlling causes and constituting causes. If, for example, I open up a spreadsheet, and perform a calculation, the numbers that appear in some of the cells result from computations performed inside the machine, which clearly take place through internal causes; those causes constitute the calculation. On the other hand, if someone comes into the room while I'm away, places the cursor in some cell, and types "=" and some number, the number on the screen is then the result of an external cause that controls the display. In the vernacular sense, I think "free will" means that my actions and plans are the result of my thoughts as internal constituting causes. Whatever previous events formed my preferences are not sitting outside of me and manipulating me like a puppet. And that's all that I have in mind when I refer, for example, to my cat having "free will."

Quote:
Frankly, the level of detailed, unobservable, uncontrollable connexions involved in minds (unobservable because you'd have to destroy a mind to look at them, and uncontrollable because you can't get the resolution of atomic manipulation needed to alter them reliably) means it's very likely impossible to read minds, or even to know what a true AI is or would be thinking. (Telepathy is also BS for the same reason - there is no localized entity known as a 'thought'.)
I think that one's maybe overstated. I can observe my cat, and see that he's heard the birds outside and is interested in them, by the movement of his ears and the turn of his head; I can see, by other signs, that he's thinking of jumping up into my lap. And he's a moderately alien lifeform. I can do this a little better with humans, though I'm not very good at it. I think we can read minds, but (a) with limited resolution and (b) through a specific process based on sensory channels—but both are also true for our ability to locate and identify physical objects and movements. What's a fantasy is not "reading thoughts" but on one hand reading them without sensory input, and on the other being omniscient about them.

And the former could legitimately be the subject of science fiction, assuming either direct information transfer without a physical channel, or an electronically mediated link between two brains. See Kevin Warwick's research as an early venture in the latter direction.
__________________
Bill Stoddard

I don't think we're in Oz any more.