Old 08-03-2018, 01:17 PM   #21
Flyndaran
Untagged
 
Join Date: Oct 2004
Location: Forest Grove, Beaverton, Oregon
Default Re: Keeping humans relevant in the shadow of TL10 AI.

Quote:
Originally Posted by AlexanderHowl View Post
There is no proof that non-biological volitional intelligence is capable of existing except through superscience, it is just a hope of computer scientists, science fiction fans, and transhumans that it is not superscience.
State clearly what laws of physics require evolved meat bags to be literally the only way for intelligence to exist.
__________________
Beware, poor communication skills. No offense intended. If offended, it just means that I failed my writing skill check.
Old 08-03-2018, 01:54 PM   #22
AlexanderHowl
 
Join Date: Feb 2016
Default Re: Keeping humans relevant in the shadow of TL10 AI.

What laws of nature allow for the evolution of biological intelligence? I am unaware of any specific law of biology, chemistry, or physics that allows for any form of intelligence, but we have proof that biological intelligence occurs because we are discussing the phenomenon right now. However, there is no evidence of non-biological intelligence, despite nearly 13 billion years in which such an intelligence could have been developed by a biological intelligence within the Milky Way Galaxy and spread throughout it.

Biological intelligence seems to derive from two principles found even in single-celled organisms: adaptability and exploratory behavior, which are evident at every level of cellular life. Even within the human body, every cell with a nucleus manifests adaptability and exploratory behavior, at least during the initial formation of its cytoskeleton. Adaptability and exploratory behavior evolved because they are more efficient than programming every possible response into the genetic code. It is simply easier to program biological organisms to adapt and explore than it is to program them for the infinite number of situations that could possibly occur.

Since we are unlikely to be the only technologically advanced biological intelligence to have evolved in the Milky Way Galaxy over the previous 13 billion years, the fact that we have not been overwhelmed by non-biological intelligence suggests one of four plausible, though mutually exclusive, conclusions. First, non-biological intelligence is an impossibility and, at best, we can write pseudo-intelligent programs that produce inferior goods and services at a cheaper price than those produced by biological intelligence (as seen in our contemporary experience of automation). Second, non-biological intelligence is possible but incapable of reproduction, meaning that it can only evolve spontaneously from an adequate medium, such as a neural net (the explanation I tend to favor). Third, non-biological intelligence is possible and capable of reproduction, but something prevents it from dominating the entire galaxy, suggesting that any biological intelligence that creates non-biological intelligence destroys it once it becomes a threat. Fourth, non-biological intelligence is possible and capable of reproduction, destroys any biological intelligence that it encounters, and then goes into periods of stasis until new biological intelligence evolves, similar to the premise of Mass Effect.

Last edited by AlexanderHowl; 08-03-2018 at 01:58 PM.
Old 08-03-2018, 02:23 PM   #23
hal
 
Join Date: Aug 2004
Location: Buffalo, New York
Default Re: Keeping humans relevant in the shadow of TL10 AI.

Quote:
Originally Posted by Flyndaran View Post
State clearly what laws of physics require evolved meat bags to be literally the only way for intelligence to exist.
The other thing to note in this statement is that science is finding, more and more, that there are things that are not understood or haven't been mapped out properly. For example, the number of neurons in a brain was thought to be a given; now the thinking is that it isn't just the number of neurons per se, but the 3D structure involved along with the biochemistry. Even now, researchers are discovering that they don't quite understand why the pH (acidity) of a cell has a bearing on why some cancer cells respond differently than others.

In short? It is very difficult to emulate with machines, what biology is doing with cell structure.

There's more to it than that, but ultimately? True self-awareness may not be a viable concept for machines - or...

We may find out some trick that SIMULATES it to large degree so as to almost make no difference.

In the end? As I've pointed out in other campaigns to my one player...

The more mankind automates things, the more issues we're going to have where the old model of "economics" will not apply so well. If you can't get a job, you can't buy things. If you can't buy things, you can't participate in the economy as it is currently set up. If you build competitors at the economic level, those who were hurting under the current economic regime are going to rebel against the competitors. Even now there is talk about taxing corporations who go heavy into automation, largely because of the depressive effect it has on the overall economy.

One thing that should be somewhat worrisome is this:

Since the dawn of time, the rich have needed the poor to provide services. The medium of exchange between the rich who wanted a service or "thing" and those who provided it was coin, which is itself a symbolic form of barter. In the past, the rich couldn't get ANYTHING unless someone made it. Now? Today? We can have a LOT of things made that do not require a person to pay attention to their manufacture.

The more automation that creeps into the process, the fewer people will have "barter" rights to the goods the rich want or need.

The day may come when, as a result of the automation involved, population numbers decrease. Need a defender for your gated community? A robotic "sentry" that never sleeps, never asks for a raise, and never becomes inattentive would be ideal. Making it more ideal is the fact that it won't be something you can intimidate by saying "If you don't do what we want, we'll go after your family." It also won't have a problem firing on any given population as ordered by its owner.

So, what can the rich have with automation?

Security
Buildings created as necessary
Repairs made of infrastructure
Monitoring processes such as manufacturing or power generation
Creation of artifacts
Creation of services
Computerized brute force research (Molecular properties etc)

The list goes on and on, and the more society turns to AI's to accomplish this, the less meat bodies will get paid for those services.

Anyone remember the Time Traveller and the two stratified portions of society? It would be interesting to see what society becomes after automation and AIs become very common for anything and everything. Heck, even lawyers are now getting worried that their skills and services for boilerplate contracts and the like will be taken over by AI systems. Diagnostic AI systems are starting to come into their own - but each of these "AI" expert systems is essentially a stand-alone product, not the hitching of a database onto a "computerized" brain that can run multiple expert systems all at the same time, and we're currently at TL8 per GURPS 4e.

So, I'm with a few others here who don't believe we're going to see SAIs any time soon. Even if we COULD make them, I largely believe that the competition for resources within society will render the creation of large-scale SAI systems a moot point.

That's just my gut feeling...
Old 08-03-2018, 02:40 PM   #24
vicky_molokh
GURPS FAQ Keeper
 
vicky_molokh's Avatar
 
Join Date: Mar 2006
Location: Kyïv, Ukraine
Default Re: Keeping humans relevant in the shadow of TL10 AI.

Quote:
Originally Posted by evileeyore View Post
I've got a pretty solid line of where I draw "Intelligence"*... and a lot of animals have crossed it. I haven't found evidence yet of AI crossing it.


* Read as the steps from sentience to sapience and finally sophonts. There are a lot of animals I class as sapient. They lack only the ability to communicate effectively† with us to be considered 'sophonts'.

† There are animals I consider as sophonts as well.
An example would be translating from one language into another. It used to be treated as something assumed to require intelligence, and as one of the criteria of possessing it. But now that computers are finally able to do it to a useful extent, the goalposts have been moved.
__________________
Vicky 'Molokh', GURPS FAQ and uFAQ Keeper
Old 08-03-2018, 02:44 PM   #25
Flyndaran
Untagged
 
Join Date: Oct 2004
Location: Forest Grove, Beaverton, Oregon
Default Re: Keeping humans relevant in the shadow of TL10 AI.

My instinctive response to that was always that "true" intelligence requires successful problem solving in fully unfamiliar situations. Then I realized that I suck at that, yet I believe I'm intelligent as the word is used in this thread.
Old 08-03-2018, 03:32 PM   #26
David Johnston2
 
Join Date: Dec 2007
Default Re: Keeping humans relevant in the shadow of TL10 AI.

Quote:
Originally Posted by vicky_molokh View Post
An example would be translating from one language into another. It used to be treated as something assumed to require intelligence, and as one of the criteria of possessing it.
And there's a reason for that. This reason: https://www.youtube.com/watch?v=RdXjxlFM6Jo. This is the product of "translating" without understanding. Nobody assumed that you couldn't mechanize the process of looking up isolated words in a bilingual dictionary. You just can't have the result reliably convey the meaning of a sentence, because the machine doesn't understand what things really mean. And in closing I'd just like to say: Мій судно на повітряній подушці повно вугрів ("My hovercraft is full of eels").
Old 08-03-2018, 04:58 PM   #27
David L Pulver
AlienAbductee
 
David L Pulver's Avatar
 
Join Date: Aug 2004
Location: In the UFO
Default Re: Keeping humans relevant in the shadow of TL10 AI.

Quote:
Originally Posted by Michael Thayne View Post
As far as spaceship combat goes, if drone AI only has skill 11 while it's very possible to train humans up to skill 14, that's a non-trivial advantage, but it doesn't really change the fundamental problem with the combat side of Spaceships in that missiles are still ridiculously under-powered. If you're firing missiles at a SM+8 frigate, base skill 11 + 2 (from sAcc) + 8 (from target SM) + 4 (from proximity detonation) is effective skill 25, which virtually guarantees 10 hits.
I'm not sure what you mean by under-powered - your example seems to suggest they are scoring more hits than you find desirable.

I do suspect that, due to a confluence of factors and rules changes (such as the reduction in the minimum speed of missiles that was added to lower missile damage in later printings), ballistic attacks may presently be too easy.

Attempting to go through very early drafts of the rules and playtest notes is difficult, but I have a sneaking suspicion that this could reflect a conceptual error that may have over-rated the targeting modifiers for ballistic attacks (perhaps by accidentally including a targeting system bonus in addition to inherent attack bonuses, which shouldn't apply considering the type of guidance system that is assumed).

If that's the case, it might be realistic to increase the penalty on ballistic attacks.

If so, I'd experiment by changing it so that you have +0 to hit if you use proximity fuse and -4 if you do not, and then also further reduce the missiles (only) to (TL-12) for the lower size and (TL-11) for the higher size.
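For anyone who wants to eyeball the numbers, here is a rough sketch (my own arithmetic, not official rules text) of the modifier stack in the quoted example, with a plain 3d6 roll-under probability. It treats 17-18 as automatic failures and ignores critical-success/failure details and rapid-fire hit counting:

```python
from itertools import product

def effective_skill(base, sacc, target_sm, proximity_bonus):
    # Sum of the modifiers cited in the quoted example:
    # base skill 11, +2 sAcc, +8 target SM, +4 proximity detonation.
    return base + sacc + target_sm + proximity_bonus

def p_success(skill):
    # Probability that 3d6 rolls at or under effective skill.
    # Simplification: success is capped at 16 because 17-18 always fail.
    target = min(skill, 16)
    rolls = list(product(range(1, 7), repeat=3))
    return sum(1 for r in rolls if sum(r) <= target) / len(rolls)

skill = effective_skill(11, 2, 8, 4)
print(skill, round(p_success(skill), 4))  # 25 0.9815
```

The point the numbers make: once the stacked bonuses push effective skill far past 16, the roll itself stops mattering, which is why shifting the proximity-fuse modifier from +4 to +0 changes little unless the other bonuses shrink too.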

Please note that this is not official errata, but rather a suggestion for testing.
__________________
Is love like the bittersweet taste of marmalade on burnt toast?

Last edited by David L Pulver; 08-03-2018 at 05:05 PM.
Old 08-03-2018, 05:17 PM   #28
David L Pulver
AlienAbductee
 
David L Pulver's Avatar
 
Join Date: Aug 2004
Location: In the UFO
Default Re: Keeping humans relevant in the shadow of TL10 AI.

Quote:
Originally Posted by AlexanderHowl View Post
Fourth, that non-biological intelligence is possible and capable of reproduction, destroys any biological intelligence that it encounters, and then goes into periods of stasis when new biological intelligence evolves, similar to the premise of Mass Effect.
A fifth, and perhaps the simplest, option is that we are the first sapient biological intelligence to evolve in the universe, at least excluding anything stuck at the bottom of a Europa-style ice world, and that since biological intelligence is a prerequisite for non-biological intelligence, we just haven't gotten around to making one yet.

Although it certainly helps, I'm not sure you need sapient or even near-sapient AI to spread robot probes through the galaxy on billion-year time scales. Even if you think a few thousand years' refinement of current computer programming can't figure out how to set up an automated robot factory that can slowly build a copy of itself, there's always cyborg brains or genetic material/crews in suspended animation. So "no sapient AI" doesn't necessarily get one out of the Fermi Paradox...
Old 08-03-2018, 06:23 PM   #29
AlexanderHowl
 
Join Date: Feb 2016
Default Re: Keeping humans relevant in the shadow of TL10 AI.

Recent estimates put the number of alien civilizations currently in the Milky Way Galaxy at around 5,000 (https://www.astrobio.net/alien-life/...ons-are-there/). Since that would mean up to 80 million stars per civilization, it is unlikely that we will be meeting any of them anytime soon. If we extrapolate that our technological civilization, at 10,000 years old, is halfway through an average technological civilization's lifespan of 20,000 years, and that the first round of civilizations evolved 10 billion years ago, there have been an estimated 2.5 billion technological civilizations in the Milky Way Galaxy, meaning that there have probably been a lot of opportunities for volitional AI to evolve and spread.
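For the curious, the arithmetic behind those figures is just two divisions and a multiplication. Assuming roughly 400 billion stars in the Milky Way (the figure implied by 80 million stars per civilization), it can be checked as:

```python
# Back-of-the-envelope check of the estimates in the post above.
# Assumption: ~400 billion stars in the Milky Way, which is what
# "80 million stars per civilization" implies for 5,000 civilizations.
stars = 400e9
current_civs = 5_000
civ_lifespan_years = 20_000
galactic_window_years = 10e9  # first civilizations ~10 billion years ago

stars_per_civ = stars / current_civs                      # 80 million
generations = galactic_window_years / civ_lifespan_years  # 500,000
total_civs = current_civs * generations                   # 2.5 billion

print(f"{stars_per_civ:.0f}, {total_civs:.0f}")
```

Every input here is a soft assumption, of course; the calculation only shows the estimates in the post are internally consistent, not that they're right.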
Old 08-03-2018, 07:17 PM   #30
(E)
 
Join Date: Jul 2014
Location: New Zealand.
Default Re: Keeping humans relevant in the shadow of TL10 AI.

One idea that might work: what if AIs beyond a certain threshold required constant attention and stimulation to remain sane and/or functional? In some cases the support required might be quite involved. Depending on your setting assumptions, of course.
__________________
Waiting for inspiration to strike......
And spending too much time thinking about farming for RPGs
Contributor to Citadel at Nordvörn