
Go Back   Steve Jackson Games Forums > Roleplaying > GURPS

Old 03-13-2018, 02:29 AM   #1
VonKatzen
Banned
 
Join Date: Mar 2018
Default No AI/No Supercomputers: Complexity Limits?

Assuming that quantum computers and digital AI are not actually possible, what limits does that put on computer Complexity ratings and software functionality?

For example, dumb programs can be brute-forced to solve certain problems and cleverly arranged to solve others. But the more complex they get, the harder it is to get the desired results and the more layers of error become possible (because no human being can possibly keep track of all the sub-systems of the programs his program is running). Likewise, brute-forcing is limited if you can't build computers that are far faster than modern ones (which may in fact not be possible, Kurzweilian predictions aside).

My initial thoughts are to put some limits on maximum Complexity, as well as on what skills a computer can possibly emulate (with Familiarity penalties for trying to use a computer to do something complex it's not specifically designed to do).

It also limits the utility of massive data storage and retrieval capabilities. The IRL NSA has already run into the problem of having so much data that you can't possibly find what you want unless you already know what you're looking for. With trillions of terabytes of trivial data, it could become genuinely difficult to use the 'internet' to find anything useful.

This actually opens up some more utility for PCs in Ultra-Tech settings (they can't use technology to replace verstehen-based problems, only narrow technical ones).

A Few More Thoughts:
If raw power has limits and there is no true AI, then computers may be developed to be more elegant: more reliable, simpler, more specialized. They become easy-to-use or ultra-specialized tools instead of all-purpose rigs. This would also make hacking/controlling digital machines harder, as they'd be disconnected both physically and in terms of engineering.

Pyramid #37 has some alternate rules for computers that might suit this setting/assumption, too, since they don't really get into AI rules in the first place.

Last edited by VonKatzen; 03-13-2018 at 02:45 AM.
Old 03-13-2018, 05:02 AM   #2
Celti
 
Celti's Avatar
 
Join Date: Jun 2007
Location: USA, Arizona, Mesa
Default Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by VonKatzen View Post
Assuming that quantum computers and digital AI are not actually possible, what limits does that put on computer Complexity ratings and software functionality?
'AI' has always been a shifty definition; we'll hand you that one. Quantum computers are possible: the IBM QX is a 20-qubit demonstration quantum computer, and just last week Google demonstrated a 72-qubit quantum computer dubbed "Bristlecone." They are expensive, clunky, and limited at the moment — but they are provably buildable. Everything from here on out is a matter of refining the engineering (certainly not a simple task, but not an impossible one).
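A back-of-envelope sketch of why qubit counts like those matter: simulating n qubits classically means storing 2^n complex amplitudes, so 20 qubits is trivial while 72 is hopeless. The 16-bytes-per-amplitude figure is an assumption (a complex number stored as two double-precision floats):

```python
# Memory needed to hold a full n-qubit quantum state vector classically:
# 2**n complex amplitudes at an assumed 16 bytes each (two float64s).

def state_vector_bytes(n_qubits: int) -> int:
    """Bytes needed to store all 2**n amplitudes of an n-qubit state."""
    return (2 ** n_qubits) * 16

print(state_vector_bytes(20) // 2**20)  # 16 (MiB): trivial on a laptop
print(state_vector_bytes(72) // 2**40)  # 68719476736 (TiB): unbuildable
```

The exponential blow-up is the whole point: each added qubit doubles the classical simulation cost, which is why a 72-qubit device is qualitatively different from a 20-qubit one.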

Given that, a lot of your post can just be stepped past — quantum computers of sufficient capacity will be able to handle many of your described tasks just fine. Taking some of the rest in turn...

Quote:
Originally Posted by VonKatzen View Post
[...]but the more complex they get, the harder it is to get the desired results and the more layers of error become possible (because no human being can possibly keep track of all the sub-systems of the programs his program is running).[...]
An individual human need not and should not keep track of exactly what is running at every level for the vast majority of tasks — do you know exactly what your operating system is doing at every level below the browser you wrote your post in? How about within that browser, between the JavaScript engine, the CSS engine, the renderer, the compositor, and so on? At each layer of the system, developers design individual components that work with multiple components of the layer below to do their necessary tasks, and so on and so forth, until the end user, from a single outer layer, triggers a massive series of programs to run and produce their desired result.
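That layering can be sketched in a toy example. All names here are invented for illustration: each layer calls only the interface of the layer below, never its internals, so no single author needs the whole picture.

```python
# Toy sketch of layered abstraction: each layer uses only the interface
# of the layer below it, never its internals.

def physical_send(bits: str) -> str:
    """Lowest layer: pretend to push raw bits over a perfect wire."""
    return bits

def transport_send(payload: bytes) -> bytes:
    """Middle layer: built only on physical_send's interface."""
    bits = "".join(f"{b:08b}" for b in payload)
    received = physical_send(bits)
    return bytes(int(received[i:i + 8], 2) for i in range(0, len(received), 8))

def app_send(message: str) -> str:
    """Top layer: the 'end user' call that triggers the whole stack."""
    return transport_send(message.encode()).decode()

print(app_send("hello"))  # hello
```

The author of `app_send` only needs `transport_send`'s inputs and outputs; how the bits move underneath is somebody else's problem, which is exactly how real browser and OS stacks stay manageable.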

Quote:
Originally Posted by VonKatzen View Post
[...]Likewise, brute-forcing is limited if you can't build computers that are way faster than modern computers (which may in fact not be possible, Kurzweilian predictions aside).[...]
Quantum computing will make some of that brute-forcing much faster. As for the potential speed limitations of future machines: even disregarding the potential of a Kurzweilian singularity, which I will happily do, we have every reason to think we are not currently using the most capable computing architecture we could assemble at our own TL, and that other architectures (both in terms of circuit design and in terms of physical technological basis), possible both now and in the future, could be far faster. It may turn out that we can't get faster, but it is highly probable that we can.
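One concrete version of "much faster brute-forcing" is Grover's algorithm, which searches an unstructured space of N items in roughly (π/4)·√N oracle queries instead of up to N. A rough sketch of the query counts (approximation only, ignoring error correction and constant factors):

```python
import math

def classical_queries(n: int) -> int:
    """Worst-case oracle queries for unstructured search over n items."""
    return n

def grover_queries(n: int) -> int:
    """Approximate optimal Grover iteration count: ~(pi/4) * sqrt(n)."""
    return math.ceil((math.pi / 4) * math.sqrt(n))

n = 10**9  # a billion-item search space
print(classical_queries(n))  # 1000000000
print(grover_queries(n))     # 24837
```

A quadratic speedup, not an exponential one — which is why Grover helps brute-force search but doesn't trivialize every hard problem.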

Quote:
Originally Posted by VonKatzen View Post
[...]My initial thoughts are to put some limits on maximum Complexity as well as on what skills a computer can possibly emulate (with Familiarity penalties for trying to use a computer to do something complex it's not specifically designed to do).[...]
These are both frankly in place already. If you want to limit computational complexity, limit the TL of your computing technology — and if you want to do something with your computer you don't have the software for, then, well, you're going to be making Computer Programming rolls at pretty arbitrary penalties to make that software.

Quote:
Originally Posted by VonKatzen View Post
[...]If raw power has limits and there is no true AI then computers may be developed to be more elegant - more reliable, simpler, more specialized. They become easy-to-use or ultra-specialized tools instead of all-purpose rigs. This would also make hacking/controlling digital machines harder, as they'd be disconnected both physically and in terms of engineering.[...]
This is entirely cultural. Our modern western Terran humanity fairly consistently disregards ultra-specialised devices in favour of general-purpose computers whenever and wherever possible. The Vilani Imperium of Traveller does not. You can draw similar parallels to either throughout fiction.

Quote:
Originally Posted by VonKatzen View Post
Pyramid #37 has some alternate rules for computers that might suit this setting/assumption, too, since they don't really get into AI rules in the first place.
Regardless of any of the above, I recommend using the rules in that article (Thinking Machines, p. 16), as in my opinion they offer both more consistency and verisimilitude at any TL. They will suit with or without quantum computing (and in fact offer guidelines for at what point a computer of a given size and complexity is flatly impossible without involving quantum computing), and you can use or disregard AI or single-purpose machines as you see fit.
Old 03-13-2018, 06:51 AM   #3
malloyd
 
Join Date: Jun 2006
Default Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by VonKatzen View Post
Assuming that quantum computers and digital AI are not actually possible, what limits does that put on computer Complexity ratings and software functionality?
None, really. Nothing particularly prevents you from building computers the size of planets. For computers of the same size as we now build, we've only got a little more than one order of magnitude (one Complexity step) left on the chips before some of the circuit elements would need to be less than one atom wide, which isn't physically reasonable. But there is stuff outside the chips that could be shrunk too, and you could probably redesign chips for more interconnectivity, which would increase Complexity without changing their size. Still, we probably aren't more than three Complexity steps away from the absolute physical limits.
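That "one order of magnitude left" claim can be sanity-checked with assumed figures: take ~14 nm logic features (roughly the 2018 state of the art) and ~1 nm as a crude floor before circuit elements approach the width of individual atoms.

```python
import math

# Sanity check of "a little more than one order of magnitude left on the
# chips". Both figures are assumptions for illustration, not process data.

def orders_of_magnitude_left(feature_nm: float, limit_nm: float) -> float:
    """How many factors of ten of linear shrink remain."""
    return math.log10(feature_nm / limit_nm)

print(round(orders_of_magnitude_left(14.0, 1.0), 2))  # 1.15
```

About one decade of linear shrink, consistent with the post's estimate; the remaining Complexity steps would have to come from packaging, interconnect, and architecture rather than raw feature size.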

Note, however, that there is no particular requirement for computers to be digital, and we *know* there is a physical implementation of intelligence that doesn't require a huge amount of hardware - it manages to run on human brains, after all.
__________________
--
MA Lloyd
Old 03-13-2018, 06:59 AM   #4
The Colonel
 
The Colonel's Avatar
 
Join Date: Jul 2006
Default Re: No AI/No Supercomputers: Complexity Limits?

Isn't the ability to dump heat a significant factor in the size of computers?
Old 03-13-2018, 07:18 AM   #5
AlexanderHowl
 
Join Date: Feb 2016
Default Re: No AI/No Supercomputers: Complexity Limits?

Only if you are using contemporary computer architecture. In any case, it is not total heat but heat per unit volume that is the problem. If you use a fluidic computer (where computation is done through an exchange of fluids rather than an exchange of electrons), you will never need to worry about heat. (It will also be a million times larger for the same Complexity, but there are always trade-offs.)
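The heat-per-volume point is easy to put in numbers. The figures below are purely illustrative: the same total wattage spread over a million times the volume is a million times less dense, hence vastly easier to cool.

```python
# Illustrative only: the cooling constraint is dissipated power per unit
# volume, not total power.

def heat_density(power_watts: float, volume_cm3: float) -> float:
    """Dissipated power per cubic centimetre."""
    return power_watts / volume_cm3

chip = heat_density(100, 1)          # 100 W in ~1 cm^3 of electronics
fluidic = heat_density(100, 10**6)   # same power, a million times the volume
print(chip / fluidic)  # 1000000.0
```

This is the trade-off in the post made explicit: the fluidic machine buys its thermal headroom entirely with bulk.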
Old 03-13-2018, 10:22 AM   #6
weby
 
weby's Avatar
 
Join Date: Oct 2008
Default Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by VonKatzen View Post
Assuming that quantum computers and digital AI are not actually possible what limit does that put on computer Complexity ratings and software functionality?
That is an interesting alternate history from the one toward which we are heading.

The short answer is: yes, there will be limits compared to a future where ASI (artificial super-intelligence) is a thing.

Quote:
Originally Posted by VonKatzen View Post
For example, dumb programs can be brute-forced to solve certain problems and cleverly arranged to solve others. But the more complex they get, the harder it is to get the desired results and the more layers of error become possible (because no human being can possibly keep track of all the sub-systems of the programs his program is running). Likewise, brute-forcing is limited if you can't build computers that are far faster than modern ones (which may in fact not be possible, Kurzweilian predictions aside).
People do not really keep track of all the layers in current systems. Most large programs are built from a lot of ready-made parts; programming everything from scratch is simply not possible on any realistic time scale.

Project management only deals with the big picture, and even the programmers tasked with linking to a certain library or other external resource normally have no clue how the external thing works beyond its inputs and outputs.

The reason we are able to build more and more complex computer systems today is that there are more and more advanced building blocks. That will not change.

So the building blocks of the future will likely be at a much higher abstraction level than today's, the same way today's building blocks are much higher level than those of 20 years ago.

Quote:
My initial thoughts are to put some limits on maximum Complexity as well as on what skills a computer can possibly emulate (with Familiarity penalties for trying to use a computer to do something complex it's not specifically designed to do).
If you stop AI development at, say, the year-2000 level, then indeed computers cannot emulate a lot of skills, since those require quite a lot of judgement calls. In the years since, computers have started emulating more and more skills, and doing so better and better.

At that level, computers can mostly only deal with really clear-cut things. For a robot to move an object, the exact object and the path to move it along must be described exactly; for a calculation, you need to define the parameters exactly; and so on.

I would not see that as a Complexity problem, but it is definitely a limit on what skills they can perform.

Quote:
It also limits the utility of massive data storage and retrieval capabilities. The IRL NSA has already run into the problem of having so much data that you can't possibly find what you want unless you already know what you're looking for. With trillions of terabytes of trivial data, it could become genuinely difficult to use the 'internet' to find anything useful.
If you take away the ANI (artificial narrow intelligence) assistance that we have now, searching for things will indeed become very hard.

The massive-data problem is actually several related problems: data storage and transfer speed, indexing methodology and efficiency, data interconnects, and distilling the important data from the unimportant.

AI assistance really only helps with the last one. CERN, for example, could not deal with its data without throwing most of it away, so it has an ANI do the filtering before anything is saved, keeping only potentially interesting data.
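That filter-before-store pattern can be sketched as a streaming predicate applied at ingest time. The "interesting" condition below is an invented stand-in for a real trigger rule:

```python
# Sketch of filter-before-store: decide what is interesting at ingest
# time and discard the rest, rather than saving everything and hoping to
# search it later. The trigger condition here is invented.

def ingest(events, interesting, store):
    """Append only events passing the predicate; count both outcomes."""
    kept = dropped = 0
    for event in events:
        if interesting(event):
            store.append(event)
            kept += 1
        else:
            dropped += 1
    return kept, dropped

store = []
events = ({"id": i, "energy": i % 100} for i in range(10_000))
print(ingest(events, lambda e: e["energy"] > 95, store))  # (400, 9600)
```

Note that the predicate encodes domain judgement up front; without some such intelligence at the gate, the storage and indexing problems come back in full force.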


Quote:
This actually opens up some more utility for PCs in Ultra-Tech settings (they can't use technology to replace verstehen-based problems, only narrow technical ones).
Indeed; that is why my sci-fi setting had big problems with ASI in the distant past, so anything beyond an ANI is forbidden.

Quote:
A Few More Thoughts:
If raw power has limits and there is no true AI then computers may be developed to be more elegant - more reliable, simpler, more specialized. They become easy-to-use or ultra-specialized tools instead of all-purpose rigs. This would also make hacking/controlling digital machines harder, as they'd be disconnected both physically and in terms of engineering.
That is a design choice.

We used to have separate machines for different tasks, but moved to general-purpose ones, because getting a new program is so much simpler than getting a new machine to do a new thing.

Having or not having AI will not really matter for that. It comes down to cultural choices.


Quote:
Pyramid #37 has some alternate rules for computers that might suit this setting/assumption, too, since they don't really get into AI rules in the first place.
Those rules are much better than the base rules whether or not you use AI.
__________________
--
GURPS spaceship unofficial errata and thoughts: https://gsuc.roto.nu/
Old 03-13-2018, 10:28 AM   #7
weby
 
weby's Avatar
 
Join Date: Oct 2008
Default Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by malloyd View Post
Nothing particularly prevents you from building computers the size of planets.
With advanced enough engineering, true enough.

But current supercomputers face a set of large problems with interconnectivity, and thus do not reach their theoretical capabilities; the more you scale them up, the more drastic the problem becomes. So we need to solve those problems before we can build much larger computers than we do today.
__________________
--
GURPS spaceship unofficial errata and thoughts: https://gsuc.roto.nu/
Old 03-13-2018, 11:36 AM   #8
Anthony
 
Join Date: Feb 2005
Location: Berkeley, CA
Default Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by malloyd View Post
Still we probably aren't more than 3 complexity steps away from the absolute physical limits.
Well, unless we can figure out a better way of exploiting the third dimension. Circuits are still surface features of chips; work out a way to layer them vertically and there's a whole bunch of growth potential.
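The arithmetic behind that is purely geometric: if circuits are essentially 2D, stacking layers multiplies device count for the same footprint. The numbers below are illustrative, not real process figures.

```python
# If circuits are (roughly) surface features, stacking L layers
# multiplies device count for the same die footprint. Illustrative only.

def devices(area_mm2: float, per_mm2: float, layers: int = 1) -> float:
    """Device count for a die of given area, density, and layer count."""
    return area_mm2 * per_mm2 * layers

flat = devices(100, 10**8)                 # a single-layer die
stacked = devices(100, 10**8, layers=100)  # hypothetical 100-layer stack
print(stacked / flat)  # 100.0
```

A hundred layers would buy two more "orders of magnitude" of density without shrinking a single feature — though real stacks would run straight into the heat-density problem raised earlier in the thread.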
__________________
My GURPS site and Blog.
Old 03-13-2018, 09:49 PM   #9
VonKatzen
Banned
 
Join Date: Mar 2018
Default Re: No AI/No Supercomputers: Complexity Limits?

The problem with AI is that intelligence is physical architecture, not software (at least that's the view of Searle, and one I agree with). That's not to say AI is impossible, but it will not be a digital machine, and when it is built it will operate substantially differently from humans (unless it's a brain) and require entirely different approaches to teaching than programming provides.

As for 'quantum computers', while the principle works, the engineering of a functionally useful one may be impossible. Error rates, signal-to-noise ratio, and various problems with quantum mechanics may keep them from being anything other than a novelty. As I'm taking a very conservative approach, I'd prefer to err on the side of 'doesn't actually work', especially since many mathematicians and quantum physicists seem to believe they won't.

I've taken the same approach with power cells. Vastly more efficient and powerful large-scale generators, basically fission plants, are certainly possible. But the chemical science behind small batteries may be essentially limited to improvements in reliability and rechargeability rather than drastic increases in power. This in itself helps rule out flying powered armor and high-powered laser guns. (There are also other grounds to believe that these would not actually be that useful compared to much cheaper and more rugged extensions of existing technology.)

Most of the technology we use today is an extrapolation of developments from the 1890s to the 1950s. Despite the Science! community selling us a bill of goods, I think it's worth asking whether this is the same botched fantasy that told us jetpacks would be practical and widespread by 1973. The growth of the 19th and 20th centuries may only look so 'fantastic' because it had been bottlenecked by economic and political issues for thousands of years. We may actually be reaching the zenith of technology already in the 21st century, with further improvements coming mainly in biotechnology and in reductions in the cost of existing machinery. This may have large-scale practical effects on warfare, travel, and daily life, but it's at least plausible that there will in fact never be any nanotechnological engineering swarms, or computers that are anything more than extremely fast, small, and expensive versions of a set of dominoes (which is basically what all electronic computers are). Though people apply the term 'AI' to some existing systems, those are in fact mindless jerry-rigging of this domino game; as anyone who's ever tried to get software to do anything 'out of the box' can tell you, it is completely without intelligence. 'Pseudo-intelligence' is probably a better term for what we have, and it's severely limited in its abilities because it understands literally nothing. All modern software depends on users and developers making detailed, tedious adjustments for its invariable and hilarious screwups.

Keep in mind that corporations and developers always oversell their products, and grant-funded researchers always claim that their new hobby 'might lead to a cure for cancer'. IBM has a horrific track record of keeping its promises, so I'd just as soon assume that at least 50% of what they say is either advertising or speculation dressed up as science. When I see a commercially available quantum computer that actually does normal computer stuff in a useful way I might change my mind, but not until then. The same goes for pseudo-AI: so far, digital 'intelligence' like Wolfram Alpha makes mistakes that a small child would never make. It's dumb as a box of rocks; it is essentially the world's most expensive box of rocks.

I do think that bioengineering and materials science have a very realistic and rosy future, so I'm more lenient there. Stronger, lighter, generally more useful macro-machines, and the ability to select for and re-engineer certain traits in human beings, are making very serious progress that, so far, much of the rest of Science! can only daydream about. For this reason I've ruled that some commercial space travel and settlement is plausible, though I think (for economic reasons: literally every factory and usable resource is already on Earth, which is far superior to any artificial habitat and much easier to get to) that it will mostly be limited to NEO and Lunar projects. Especially since I don't have super-cells at the micro level, anything that goes a long way needs a fission reactor, and fission reactors are heavy and potentially quite dangerous. That means no personal space cars, and no space inhabitants who weren't born there or don't have a very high-paying job - at least not unless you're a billionaire.

Examples:
https://www.technologyreview.com/s/6...-the-skeptics/

In most cases - and especially for this game - I am guided much more by curmudgeons like Winchell Chung than by the fantasist notions of Michio Kaku.

Last edited by VonKatzen; 03-13-2018 at 10:12 PM.
Old 03-13-2018, 10:07 PM   #10
Anthony
 
Join Date: Feb 2005
Location: Berkeley, CA
Default Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by VonKatzen View Post
The problem with AI is that intelligence is physical architecture, not software (at least that's the view of Searle, and one I agree with).
Searle is notable for the utter nonsense that comes out every time he talks about computers. There are really only two possibilities:
  1. The brain is a hypercomputer. In that case we can't run an AI on a conventional computer, but presumably we can build hypercomputational hardware and then run AI software on it. Note that, as far as anyone can tell, no system based on ordinary matter can possibly be a hypercomputer.
  2. The brain is not a hypercomputer. In that case, we can emulate it with software, though doing so efficiently may require special purpose hardware.
__________________
My GURPS site and Blog.