03-18-2018, 11:57 PM   #91
Johnny1A.2
Join Date: Feb 2007
Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by VonKatzen
Touché, but that just reinforces my point: you can get unpredictable and absurd results from the guy who lives next door to you; an alien or true AI would be orders of magnitude nuttier. We might well think of all true AI as being totally insane by human norms - it's not 'irrational' by any means, but it simply has nothing in common with us.
One theme of SF is the 'alien aliens' who really, seriously think differently than we do. This is not the most common form, because it's harder to tell a good story about them, or about humans interacting with them, but some have done it well.

But we should keep in mind that some humans think really differently than the majority of the species, too. Sometimes that is useful or beneficial, but often it's simply perceived as irrational madness. From which we can draw a disturbing but possibly accurate implication: that a really mentally alien alien might come across, to us, as a race of lunatics. They might perceive us similarly, of course, which wouldn't necessarily help.

Quote:

This is not to say that useful, controllable true AI is not possible. But it's much harder than some people might think. I don't even think the Skynet style of madness is accurate - Skynet's self-defense and genocide is a pretty human response to a threat. An actual AI might do something far weirder. Like start putting fluoride in people's drinking water to make them stupid. Or kill itself because it becomes overstimulated. Or take up knitting on a global scale to calm itself. Who knows?
Or it might do something almost comprehensible, or comprehensible in a way that offends our moral sense.

For a contrast to Skynet, consider its inspiration: Colossus, from the Forbin Project novels and movie. In both versions, an AI in control of the American and Soviet nuclear arsenals takes dictatorial control of the world.

Colossus, though, isn't Skynet; it's actually sort of benevolent. It's acting in what it logically perceives to be the long-term interest of the human race as a whole, in icily cold, pure-rationalist terms, which means it sometimes does things that humans would consider profoundly immoral and evil because they are the most efficient way to achieve its goal. It doesn't bother Colossus a bit to blow up a million people if that's the best way to send a message to the rest of the world, or arrange rapes as a psychology experiment, or take any number of likewise ruthless actions.

But Colossus is still a very good SFnal example of why you don't want free-willed machines.

Another is Jack Williamson's 'Humanoids', a race of robots that are definitely and solidly programmed to Do Good for humans...but who have perspective problems with the definition of 'Good'. In that case, Williamson was consciously writing from the POV of 'debunking' the Three Laws of Robotics.
__________________
HMS Overflow-For conversations off topic here.
03-19-2018, 12:06 AM   #92
Johnny1A.2
Join Date: Feb 2007
Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by Daigoro
Would you want an auto-drive vehicle that prevents you from pulling in front of a speeding truck or driving off a cliff?
Depends on just how much freedom of action, independent of my own wishes, it has, and how reliable its ability to perceive such threats is. The answer is not an easy 'yes'.

Here again, the primary responsibility for not pulling in front of that truck, or driving off that cliff, is the driver's, not the car's. It would probably make better sense to design the car to make sure that the driver is aware of the threat.
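
To make the contrast concrete, here is a minimal sketch of the two design philosophies, warn-the-driver versus override-the-driver. Everything in it (the Threat record, the thresholds, the function names) is invented for illustration, not taken from any real vehicle system:

[code]
# Hypothetical sketch: "inform the driver" vs. "overrule the driver".
# All names and thresholds are invented for illustration.

from dataclasses import dataclass

@dataclass
class Threat:
    kind: str               # e.g. "speeding truck", "cliff edge"
    seconds_to_impact: float
    confidence: float       # 0.0-1.0: how sure the sensors are

def warn_only(threat: Threat) -> str:
    """The driver keeps responsibility; the car only raises awareness."""
    if threat.confidence > 0.5:
        return f"ALERT: {threat.kind} in {threat.seconds_to_impact:.1f}s"
    return "no action"

def override(threat: Threat) -> str:
    """The car acts on its own judgment when it is confident enough."""
    if threat.confidence > 0.9 and threat.seconds_to_impact < 2.0:
        return "BRAKE: control taken from the driver"
    return warn_only(threat)

print(warn_only(Threat("speeding truck", 1.2, 0.95)))  # alert only
print(override(Threat("speeding truck", 1.2, 0.95)))   # car intervenes
[/code]

Note that the whole disagreement lives in those two thresholds: how reliable the machine's perception has to be, and how much freedom of action it gets, which is exactly the question above.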
__________________
HMS Overflow-For conversations off topic here.
03-19-2018, 12:21 AM   #93
whswhs
Join Date: Jun 2005
Location: Lawrence, KS
Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by Daigoro
Would you want an auto-drive vehicle that prevents you from pulling in front of a speeding truck or driving off a cliff?
Well, since I don't drive, my choice would be a human driver or a self-driving vehicle, and I certainly would not want either of those to drive off a cliff. Even if I wanted to commit suicide, jumping wouldn't be the method I'd choose.

But I wouldn't want one that would refuse to take me to, say, a conservative political event or a liquor store.
__________________
Bill Stoddard

I don't think we're in Oz any more.
03-19-2018, 12:27 AM   #94
Daigoro
Join Date: Dec 2006
Location: Meifumado
Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by Johnny1A.2
Here again, the primary responsibility for not pulling in front of that truck, or driving off that cliff, is the driver's, not the car's. It would probably make better sense to design the car to make sure that the driver is aware of the threat.
I'm not sure where this line of argument is going, but I would dispute this particular statement.

The responsibility is shared among quite a few stakeholders: society and its legal system; car manufacturers upholding their semi-autonomous cars' reputation for not being involved in accidents; truck drivers and other road users who don't want to be caught up in an accident caused by one driver's moment of suicidal ideation; a litigation system that allows hefty monetary punishments against manufacturers deemed responsible for not preventing preventable accidents; and so on.

The world is already full of little impediments to a normal person enacting any particular act of free will for the sake of their own and others' safety or interests. I'm sure your light switch, mentioned above, is connected to a circuit breaker, for example.
__________________
Collaborative Settings:
Cyberpunk: Duopoly Nation
Space Opera: Behind the King's Eclipse
And heaps of forum collabs, 30+ and counting!
03-19-2018, 12:39 AM   #95
VonKatzen
Banned
Join Date: Mar 2018
Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by Daigoro
The world is already full of little impediments to a normal person enacting any particular act of free will for the sake of their own and others' safety or interests. I'm sure your light switch, mentioned above, is connected to a circuit breaker, for example.
This is entirely a matter of legal custom and cultural preference, however. There is no reason that it should be so in an objective sense. Many historical and present societies, for example, have no regulations whatsoever related to building codes, food quality, or the discharge of firearms within city limits. Indeed, many people would question whether the term 'stakeholder' means anything at all beyond conventions for assigning control and responsibility. Almost anything people take for granted in the realm of customary and statutory law has been precisely reversed somewhere else at some time. Some places made the murder of certain people mandatory. Some places hold the buyers of prohibited goods responsible, others the sellers; in still others the goods aren't even illegal!

This slides into entire realms of philosophy and values. Some people don't care whether something affects other people, and in some places the government doesn't either. I would tend to take the view that almost everything relating to laws, customs, or values is in fact a subjective preference and exists only through social custom. Almost any of them could imaginably be altered or abolished entirely, not merely by individual abstention but by statute as well. It's quite easy to imagine a proactively social-Darwinist hyper-technical state where armed robbery is legal as a way to keep down population numbers and cull the undesirable weaklings who grow fat and useless in a life of ease. In such a society, building or vehicle safety may be entirely a matter of 'buyer beware': if you're too stupid and lazy to make sure something is safe before you use it, good riddance!

A powerful AI might take just this view, too! Rather than trying to rule the world, it destroys all the support systems and customs that keep the 'unfit herd animals' around!

And there are infinite variations on this. There's really no telling just how any society or mind might function, because it's all a matter of perspective and judgment, not objective facts or laws of nature. Anthony de Jasay once pointed out that any fact used as an argument for a political policy could just as easily be used to support the opposite policy. If you don't agree on premises, you won't get anywhere on conclusions.

Last edited by VonKatzen; 03-19-2018 at 12:48 AM.
03-19-2018, 12:46 AM   #96
Anthony
Join Date: Feb 2005
Location: Berkeley, CA
Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by Johnny1A.2
Here again, the primary responsibility for not pulling in front of that truck, or driving off that cliff, is the driver's, not the car's.
"I don't want to do X, so I will buy a system that prevents me from doing X" is in fact taking responsibility.
__________________
My GURPS site and Blog.
03-19-2018, 11:58 AM   #97
Ji ji
Join Date: Feb 2012
Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by whswhs
That's a persuasive bit of evidence against dualism, though perhaps not a conclusive one. But it seems that it's not "simulating" the brain; rather it's using the physical activity of the brain to detect and interpret the information that's flowing through it.

And it doesn't seem as if it's matching either the input of the brain or the output, let alone those of a whole human being. The blood flow through the brain is neither input nor output; it's simply a side effect of neural activity. And the image is neither input nor output; it's somewhere in between them. After all, as you describe it, the person being scanned is neither seeing the image (input) nor drawing it (output). So I don't think this is evidential for what you're claiming.
There is no dualism involved; this is an epistemological issue.
Brain characteristics are measurable, so they can be investigated by the scientific method.
Consciousness is not measurable, so it cannot be investigated by the scientific method, and other tools are required.

A brain's characteristics can be translated into a formal system.
At least one mental capacity can't be translated into a formal system, as our mind can decide propositions that would pose the halting problem for a formal system.

Again, I want to summarize my argument: an algorithm cannot cause a mind, just as it can't burn things, yet it can simulate the behaviour by brute force.
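
For anyone who hasn't met it, the halting-problem argument invoked here rests on a simple diagonalization. A minimal sketch follows; the halts oracle is hypothetical by construction, which is the whole point:

[code]
# Sketch of the classic halting-problem diagonalization.
# "halts" is a pretend oracle; Turing's argument shows no real one can exist.

def halts(program, arg) -> bool:
    """Hypothetical: returns True iff program(arg) eventually stops."""
    raise NotImplementedError("no algorithm can implement this in general")

def paradox(p):
    # Do the opposite of whatever the oracle predicts about p run on itself.
    if halts(p, p):
        while True:      # oracle said "halts", so loop forever
            pass
    return "halted"      # oracle said "loops", so halt immediately

# Ask what paradox(paradox) does: if halts() answers True, paradox loops;
# if it answers False, paradox halts. Either way the oracle is wrong,
# so no general halts() can be written as an algorithm.
[/code]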
03-19-2018, 12:26 PM   #98
Anthony
Join Date: Feb 2005
Location: Berkeley, CA
Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by Ji ji
Consciousness is not measurable, so it cannot be inquired by scientific method and other tools are required.
That sort of depends on what you mean by consciousness and measurable. Things like the Glasgow Coma Scale do make attempts to measure consciousness, though it wouldn't be particularly challenging to get a computer up to 12 on that scale.
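
For reference, the Glasgow scale sums three component scores (eye opening 1-4, verbal 1-5, motor 1-6, for a total of 3-15). Here is a rough sketch; how a computer's behaviour maps onto the components is purely my own illustrative assumption, not a clinical claim:

[code]
# Glasgow Coma Scale: total = eye (1-4) + verbal (1-5) + motor (1-6).
# Grading a computer on each component is an illustrative assumption.

def gcs(eye: int, verbal: int, motor: int) -> int:
    assert 1 <= eye <= 4 and 1 <= verbal <= 5 and 1 <= motor <= 6
    return eye + verbal + motor

# A chatbot that "wakes" to speech (eye ~3), holds fully oriented
# conversation (verbal 5), and gives localized responses to input (~4):
print(gcs(eye=3, verbal=5, motor=4))  # 12, the ballpark mentioned above
[/code]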
__________________
My GURPS site and Blog.
03-19-2018, 12:52 PM   #99
Ji ji
Join Date: Feb 2012
Re: No AI/No Supercomputers: Complexity Limits?

Measurement defines the scientific method, and is itself defined by the tools of measurement.

The Glasgow scale measures behaviours. In fact, a computer program could legitimately score as "conscious" if tested with the scale.
03-19-2018, 01:05 PM   #100
whswhs
Join Date: Jun 2005
Location: Lawrence, KS
Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by Ji ji
The measure defines the scientific method and is defined by the tools of measurement.

The Glasgow scale measures behaviours.
The overwhelming majority of scientific measurements use indirect methods. Classically this involved having some quantity produce a change in length: the displacement of weights along the arm of a steelyard, the height of mercury in a thermometer, the movement of a galvanometer needle along a curve. In modern instruments you typically get electric voltage produced or changed by a chemical reaction, thermoelectricity, photoelectricity, piezoelectricity, or other processes. You're inferring the existence and level of some variable physical thing from some other physical thing.

I'd also say that there are ways to measure consciousness AS consciousness. For example, the measure of how much value something has to a person is the most desirable thing they will give up to get it or avoid losing it. That's an ordinal rather than a cardinal scale, but ordinal scales are part of the theory of measurement.
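
As a sketch of that ordinal measurement (the goods and trades are invented examples), each observed trade "gave up A to get B" says only that B ranks above A; taking the transitive closure then yields a ranking with no distances between the ranks:

[code]
# Ordinal measurement of value via revealed trade-offs (invented examples).
# Each trade (gave_up, got) implies only "got ranks above gave_up".

trades = [("hour of leisure", "concert ticket"),
          ("concert ticket", "rare book"),
          ("sandwich", "hour of leisure")]

goods = {g for pair in trades for g in pair}
outranks = {g: set() for g in goods}
for gave_up, got in trades:
    outranks[got].add(gave_up)

# Transitive closure: if A outranks B and B outranks C, A outranks C.
changed = True
while changed:
    changed = False
    for g in goods:
        extra = set().union(*(outranks[h] for h in outranks[g])) - outranks[g]
        if extra:
            outranks[g] |= extra
            changed = True

# The result is a ranking (ordinal): no number says "how much more".
for g in sorted(goods, key=lambda g: len(outranks[g])):
    print(g, "outranks", sorted(outranks[g]))
[/code]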
__________________
Bill Stoddard

I don't think we're in Oz any more.

Last edited by whswhs; 03-19-2018 at 01:41 PM.