Steve Jackson Games Forums > Roleplaying > GURPS
Old 03-13-2018, 10:17 PM   #11
VonKatzen
Banned
 
Join Date: Mar 2018
Default Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by Anthony View Post
Searle is notable for the utter nonsense that comes out every time he talks about computers. There are really only two possibilities:
  1. The brain is a hypercomputer. In that case we can't run an AI on a conventional computer, but presumably we can build hypercomputational hardware and then run AI software on it. Note that, as far as anyone can tell, no system based on ordinary matter can possibly be a hypercomputer.
  2. The brain is not a hypercomputer. In that case, we can emulate it with software, though doing so efficiently may require special purpose hardware.
No, you can't emulate it, because it's an ontological product of actual interactions between material substances. You can't emulate a bomb and get an explosion. Knowing the law of gravity does not make you fall. This is exactly what's wrong with digital AI. Intelligence is not data, it is actual machinery. Data has no meaning whatsoever without an intelligence machine to interpret it. It's just lightning otherwise. Computers know nothing, and they will never know anything, because they're computers. They're domino sets made of electrons and silicon. You need a different architecture to get actual IQ. A computer has no more chance of sentience than a post-it note or a tractor.

You can't create understanding unless the machinery for understanding exists. This may be possible in ways other than meat and water, but it's not going to be a bunch of microprocessors. That's not to say that we couldn't design an AI that could interface much more easily with digital computers, or design computers to better interface with our own intelligence. But that's not the same thing as an intelligent computer.

As Searle says, the brain is a computer, but it's not just a computer. Computers are used by intelligence, they do not produce it.

How this affects my predictions for my TL9 world is that while AI may be 100% possible, the way we are currently going at it is (I believe) simply wrong from an engineering point of view, and if they keep going at it this way they're just spinning their wheels.

There are other issues with AI aside from this. Super-intelligent AI (true AI) may be impossible because super-intelligence is not possible. Focus and speed are often at cross purposes, and another axis is functionality. Though exceeding human capacity may be possible it may not be by much. A super-fast intelligent AI still needs to wait for things to happen, and it still needs to maintain focus and give commands. The actual upper limit of intelligence may be below GURPS IQ 20, no matter how well designed it is.

Futurism is 80% hokum and 20% speculation. It is entirely possible that the actual far future of humanity finds us living on Earth and in quasi-immortal cyborg bodies, and that's it. Beyond that I see as little evidence for the Singularity as I do the Rapture. And given that Science! has become something of the official religion of modern states I tend to believe it comes from the same psychological impulse, with just as little actual evidential support. Certainly most of its advocates understand physics and biology about as well as most other believers understand Thomistic theology.

Last edited by VonKatzen; 03-13-2018 at 10:50 PM.
Old 03-14-2018, 12:03 AM   #12
Johnny1A.2
 
Join Date: Feb 2007
Default Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by The Colonel View Post
Isn't the ability to dump heat a significant factor in the size of computers?
With current-day tech, yes. But some things could theoretically help: reversible computation, for example, could in principle yield a much cooler-running computer.

A more basic limit is signal propagation time. The farther apart the machine's physical components are, the farther a signal has to travel between component A and component B, and the slower the machine gets. The limiting factor (as far as we know) is the speed of light.

It might not sound like much, an extra microsecond for a signal to cover the distance, and for a single operation it isn't. If the computer is doing a billion such operations, though, that adds over sixteen minutes to the activity. A lot of processor design goes into keeping down the distances signals must travel.

A computer the size of the Earth would have components on one side 0.04 light-seconds from components on the other, so the fastest possible signal exchange between those points would take 0.04 seconds. For activities involving the whole machine, that implies fairly slow processing. I could imagine that such a computer might be very, very 'smart', but not very fast.
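As a sanity check, both figures above work out. A rough sketch (the Earth-diameter and light-speed values are standard approximations):

```python
# Back-of-the-envelope check of the two numbers above.
# Assumed constants: Earth's diameter ~12,742 km; light speed ~299,792 km/s.

EARTH_DIAMETER_KM = 12_742
LIGHT_SPEED_KM_S = 299_792

# An extra microsecond per operation, accumulated over a billion operations:
extra_seconds = 1e-6 * 1e9            # 1000 seconds
extra_minutes = extra_seconds / 60    # ~16.7 minutes

# One-way, straight-line signal time across an Earth-sized machine:
one_way_s = EARTH_DIAMETER_KM / LIGHT_SPEED_KM_S   # ~0.0425 s

print(f"{extra_minutes:.1f} min of accumulated delay")
print(f"{one_way_s:.4f} s one-way light lag")
```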
__________________
HMS Overflow-For conversations off topic here.
Old 03-14-2018, 02:27 AM   #13
Anthony
 
Join Date: Feb 2005
Location: Berkeley, CA
Default Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by VonKatzen View Post
No, you can't emulate it, because it's an ontological product of actual interactions between material substances.
That doesn't even make sense. Ontology is a philosophical construct; it doesn't have material existence.
Quote:
Originally Posted by VonKatzen View Post
You can't emulate a bomb and get an explosion.
That's because one of the outputs of the system is an explosion. There are some outputs of a person that need a body, but it's debatable if they're essential, and even if they are it just means your AI needs to be embodied.
Quote:
Originally Posted by VonKatzen View Post
You can't create understanding unless the machinery for understanding exists. This may be possible in ways other than meat and water, but it's not going to be bunch of microprocessors.
You base this assertion on what? Until you can usefully classify what consciousness is, you can't make reliable assertions about what sorts of systems can generate it.
__________________
My GURPS site and Blog.
Old 03-14-2018, 03:42 AM   #14
weby
 
weby's Avatar
 
Join Date: Oct 2008
Default Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by VonKatzen View Post
The problem with AI is that intelligence is physical architecture, not software (at least that's the view of Searle, and one that I agree with). That's not to say AI is impossible, but it will not be a digital machine, and when it is built it will operate substantially differently from humans (unless it's a brain) and require entirely different approaches to teaching than programming provides.
That Searle sounds like someone who does not understand what intelligence is. Intelligence is, at its base, the ability to learn and then apply what you have learned. It is not about how you do that.

We already have a lot of ANI (artificial narrow intelligence) around us today; it is in fact so common that we do not think of much of it as AI. Only when a system is publicized as being much better than humans at something (like beating humans at chess) do we take note, and then only briefly. (See https://en.wikipedia.org/wiki/AI_effect.)

You likely use several such every day and worldwide they are used in everything from image recognition to airline pricing to warehouse optimization and thousands of other things.

We already live in the age of ANI; we just do not call a lot of it AI, because "it is just technology and AI is magic".

Quote:
As far as 'quantum computers', while the principle works the engineering of a functionally useful one may be impossible. Error and signal-noise ratio, and various problems with quantum mechanics, may keep them from being anything other than a novelty. As I'm taking a very conservative approach I'd prefer to err on the side of 'doesn't actually work', especially since many mathematicians and quantum mechanics physicists seem to believe they won't.
Yes, quantum computers are not yet proven to be scalable. But you do not need quantum computing for AGI (artificial general intelligence).

Quote:
Most of the technology we use today is an extrapolation of stuff from the 1890s to the 1950s.
A lot of technology is, but a lot is not.

We have things like instant access to so much information that was unheard of just 30 years ago.

We also have amazing systems that automate, learn and improve in a way that was inconceivable to most people even 10 years ago.

As an example, one of our customers is a fairly small manufacturing company making steel parts. They have been our customer since 1994.

I can explain some of the technological changes that have happened in that one company during those years just as a single example.

One of the things they do is cut shapes from steel plate according to drawings.

In 1994 they were in the process of transitioning from traditional cutting, where someone steers the plasma jet by hand, to a robot doing it. Production planning had moved from cut paper pieces to a CAD program about a year earlier.

One of the problems with their work is that there are hundreds of grades and types of steel, and they come in widely varying thicknesses. They all thus have different thermal properties, so you have to cut them differently and plan different safety margins between pieces depending on the material.

Also, placing the various odd shapes on the plate requires rotating them and choosing the right piece of plate from the stock of half-cut plates.

They had a good layout planner. When he left the company four years later, the scrap percentage went up about 40%, then came down slowly over the following years as his replacements learned the job; about five years later they were back at roughly the same scrap percentage as before. So it took a human engineer about five years to learn the trade.

In 2012 they installed an automated planning system. It started with about a 20% higher scrap rate than the human planners, but could do the planning much faster and cost only about as much per year as an engineer. In less than three years it had reached the same efficiency as the humans, and by 2017, about five years after installation, its scrap rate was about 35% lower than the best humans could manage.

That is artificial narrow intelligence in action. Only it is not called AI by the company or by the manufacturer; instead it is "self optimizing". And yet it learned both faster and better than a human.
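The planning problem described above is essentially a nesting/cutting-stock optimization. A toy one-dimensional version (all part lengths and bar sizes are made up for illustration; the real problem is 2D nesting with material-dependent safety margins, which is far harder) shows how layout order alone changes the scrap rate:

```python
# Toy 1D cutting-stock sketch: pack part lengths into fixed-length bars.
# All numbers are hypothetical; nothing here comes from the actual company.

def scrap_percent(parts, bar_length, ordered=False):
    """First-fit packing; optionally sort parts longest-first (FFD)."""
    parts = sorted(parts, reverse=True) if ordered else list(parts)
    bars = []  # remaining free length in each opened bar
    for p in parts:
        for i, free in enumerate(bars):
            if free >= p:
                bars[i] = free - p  # part fits in an existing bar
                break
        else:
            bars.append(bar_length - p)  # open a new bar
    used = len(bars) * bar_length
    return 100 * (used - sum(parts)) / used

parts = [5, 4, 6, 4, 6, 5]
naive = scrap_percent(parts, bar_length=10)                  # 25.0% scrap
planned = scrap_percent(parts, bar_length=10, ordered=True)  # 0.0% scrap
print(naive, planned)  # a smarter ordering wastes less material
```

Even this trivial heuristic shows why a good (human or machine) planner is worth so much: the same parts cut in a different order can waste a quarter of the stock or none of it.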

Quote:
Despite the Science! community selling us a bill of goods I think it's worthwhile to think about whether this is the same botched fantasy that told us jetpacks would be practical and widespread by 1973.
Because of rapid technical growth, seeing 25 years into the future is not really possible. Seeing 5 years ahead is much more feasible.

Quote:
The growth of the 19th and 20th centuries may in fact only be so 'fantastic' because it was bottlenecked by economic and political issues for thousands of years. We may actually be reaching the zenith of technology already in the 21st century, with further improvements being mainly in biotechnology and a reduction in cost to existing machinery.
No, it is a result of using existing tools to build better tools and repeating. This is most easily visible in the computer industry where larger and more advanced "prefabs" are available every year than previous years. They are then combined to create even more complex systems and so on.

Quote:
This may have large scale practical effects in warfare, travel and daily life, but it's at least plausible that there in fact will never be any nanotechnological engineering swarms or computers that are anything more than extremely fast, small and expensive versions of a set of dominos (basically what all electronic computers are).
We already have nano-size motors and similar devices created in laboratories, but they are slow to make and require a lot of expensive equipment and time for a single one. That, though, is really just an engineering challenge.

Computers in 1950 were slow to make and required a lot of expensive equipment and time for a single one; today they still require expensive equipment, but that expensive equipment produces huge numbers of computers. That is engineering.

Quote:
Though people apply the term 'AI' to some existing systems it is in fact mindless jerry-rigging of this domino game, as anyone who's ever tried to get software to do anything 'out of the box' can tell you it is completely without any intelligence. 'Pseudo-intelligence' is probably a better term for what we have, and it's severely limited in its ability because it understands literally nothing. All modern software depends on a user and developer to make detailed and tedious adjustments for its invariable and hilarious screwups.
No, that was AI around the year 2000; back then "AI" was mostly still lists of if-thens. Today the systems are more and more self-learning.


Quote:
Keep in mind that corporations and developers always oversell their products, and grant-funded researchers always claim that their new hobby 'might lead to the cure for cancer'. IBM has a horrific track record on keeping its promises, so I'd just as soon assume that at least 50% of what they say is either advertising or speculation dressed up as science. When I see a commercially available quantum computer that actually does normal computer stuff in a useful way I might change my mind, but not until then. The same goes for pseudo-AI - so far digital 'intelligence' like Wolfram Alpha makes mistakes that a retarded child would never make. It's dumb as a box of rocks, and is essentially the world's most expensive box of rocks.
You have to remember that most inventions never lead anywhere. If IBM really were 50% accurate, they would be way better than everyone else. But in fact they are not.

Basically, progress is like throwing a lot of things at a wall. Eventually some stick and we have progress, but most just fall down. Given the huge number of throws we make today, though, progress as a whole is still high.


Quote:
Originally Posted by VonKatzen View Post
No, you can't emulate it, because it's an ontological product of actual interactions between material substances. You can't emulate a bomb and get an explosion. Knowing the law of gravity does not make you fall. This is exactly what's wrong with digital AI. Intelligence is not data, it is actual machinery. Data has no meaning whatsoever without an intelligence machine to interpret it. It's just lightning otherwise. Computers know nothing, and they will never know anything, because they're computers. They're domino sets made of electrons and silicon. You need a different architecture to get actual IQ. A computer has no more chance of sentience than a post-it note or a tractor.
That is wrong. Intelligence is the ability to learn and to apply what has been learned. It has nothing to do with the actual process of how that is done.


Note: because of length, I snipped a lot of the material about your game-world assumptions. Those are fine; after all, the GM sets the game-world parameters. My discussion is not about that, just about the claim that this is how the real world works. :)
__________________
--
GURPS spaceship unofficial errata and thoughts: https://gsuc.roto.nu/
Old 03-14-2018, 04:25 AM   #15
whswhs
 
Join Date: Jun 2005
Location: Lawrence, KS
Default Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by Anthony View Post
That doesn't even make sense. Ontology is a philosophical construct; it doesn't have material existence.
Ontology is a set of beliefs in a human mind. But those are specifically beliefs about what kinds of things exist. And many of the things that exist, have material existence, independent of the human mind.

This isn't really any different from saying that ecology is a set of observations and theories that exists in human minds (and media such as books and Web pages), but we can also talk about "the ecology of Greenland," meaning not what people think about the web of life in Greenland, or even what people in Greenland think about the web of life, but the actual web of life that exists in Greenland. Or saying that physics is a construct in the human mind, but physical reality is not.
__________________
Bill Stoddard

I don't think we're in Oz any more.
Old 03-14-2018, 04:56 AM   #16
whswhs
 
Join Date: Jun 2005
Location: Lawrence, KS
Default Re: No AI/No Supercomputers: Complexity Limits?

I think maybe "AI" and "supercomputer" are not the right terms. We have "artificial intelligence" in the real world; for example, my desktop has a program that uses AI to decide which incoming e-mail should be classified as junk. But it's an idiom; it doesn't mean the same thing as "intelligence" that is "artificial." Similarly, there have been a lot of "supercomputers," which are high-end computers that can be used to address difficult scientific problems, from the Cray-1 to the Blue Gene series. But they're not computers that transcend the concept of computation in some way. In fact the Cray-1's 80 MHz clock and few megabytes of memory fall well below current desktop models, so by raw computational power it would be unremarkable today (though its vector programming model was quite different).

A word means what it is used to refer to.

As far as Complexity is concerned, what do you take to be a measure of Complexity? People who talk about actual computers sometimes seem to be interested in how many "flops" they're capable of; and there's been a continuing upward trend there. Of course that's not the same as saying that we can write code that uses those flops for anything productive. But would you take "Complexity" as referring to the log of flops to some base, with some number of flops as the zero point of the scale? Or do you have something else in mind?
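One possible concrete reading of the flops-based scale proposed above (the base-10 choice, the one-flop zero point, and the Cray-1 peak figure of roughly 160 Mflops are illustrative assumptions, not anything from the GURPS rules):

```python
import math

# Hypothetical mapping of the kind suggested above: Complexity as the log
# of flops to some base, with some flops count as the zero point of the
# scale. Base 10 and a 1-flop zero point are arbitrary illustrative picks.

def complexity(flops, base=10, zero_point_flops=1):
    return math.log(flops / zero_point_flops, base)

print(complexity(160e6))  # Cray-1 peak, ~160 Mflops -> about 8.2
print(complexity(1e15))   # a petaflop machine      -> about 15.0
```

Under this reading the question for a no-supercomputers setting becomes simply where on that logarithmic scale the hard ceiling sits.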

Last edited by whswhs; 03-14-2018 at 05:02 AM.
Old 03-14-2018, 06:51 AM   #17
malloyd
 
Join Date: Jun 2006
Default Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by weby View Post
That Searle sounds like someone who does not understand what intelligence is. Intelligence is, at its base, the ability to learn and then apply what you have learned. It is not about how you do that.
Probably the fundamental problem of artificial intelligence is that *nobody* really knows what "intelligence" is. Learning algorithms are useful tools, but pretty clearly not intelligent in the same sense as people, or even the smarter animals. And we have no idea whether that needs a "yet", or whether it is a fundamental feature of them and they never will be.

Quote:
Yes, quantum computers are not proved yet to be scalable. But you do not need quantum computing for AGI(artificial general intelligence)
Most of the hype about quantum computation is nonsense - at its heart, all it is is a method of solving a particular class of problems quickly. It's an *important* class of problems that happens to be hard to solve with conventional hardware - partly but not completely overlapping the ones we'd label "massively parallel" - but it's not a magic process that will solve anything, won't tell us anything new about how to solve such problems (we already know how; it just takes too long), and won't even speed up all kinds of computations, just this particular class of them.

But it's most interesting to AI debates as an example of something that may or may not be important to building an actual AI - hardware that works *differently* than a conventional computer. I have a strong suspicion that information processing technology may be reaching the point where it's about to diversify into a bunch of things that are not computers in the sense we've used the word until now, but do some of the same kinds of stuff differently. This isn't unusual; to use your example technology, consider the different ways of cutting stuff. Digital computers started out as stone knives, and have been refined into really, really good stone knives, but that's still what they are. Quantum computers might count as saw blades, or maybe tool steel. But what we really want is somebody to invent plasma torches (or sonics, or hot wire plastic cutters, or acids....)
__________________
--
MA Lloyd
Old 03-14-2018, 08:00 AM   #18
AlexanderHowl
 
Join Date: Feb 2016
Default Re: No AI/No Supercomputers: Complexity Limits?

It is a good analogy, though I think that digital computers are bronze knives rather than stone knives (those would be mechanical computers). In a century, our descendants will probably think of our computers much the way we think of cars from the 1910s (primitive, quaint, slow, and lacking any real comfort). If digital intelligence is a possibility, it will probably use a radically different architecture than contemporary computing in order to mimic biological complexity (for example, optical computers using rotating polarity gates for ten-state computing, or something similarly different). Or we might just ditch digital entirely and convert to biological computers that use protein coding and viral transcription for computation. We really do not know.
Old 03-14-2018, 10:54 AM   #19
Johnny1A.2
 
Join Date: Feb 2007
Default Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by malloyd View Post

Quote:
Originally Posted by weby View Post
That Searle sounds like someone who does not understand what intelligence is. Intelligence is, at its base, the ability to learn and then apply what you have learned. It is not about how you do that.

Probably the fundamental problem of artificial intelligence is *nobody* really knows what "intelligence" is.
This^^^, assuming that by 'intelligence' we mean 'consciousness'.

Intelligence defined as learning and doing is certainly present in today's computers; it was present in a very limited form in ENIAC. But here's where language becomes a trap.

The phrase 'artificial intelligence' is a weasel term now. It's been redefined by the so-called professionals so many times that it can be used to mean almost anything.

But what it meant originally was artificial people, or at least artificial conscious beings embodied as hardware/software: beings that could originate new thoughts and ideas, have desires or intentions of their own, etc. That's what the professionals originally meant by it. It meant HAL 9000, Colossus, Skynet, R2-D2, V.I.N.CENT, R. Daneel Olivaw.

That's what Minsky et al meant by 'artificial intelligence', originally.

Over time, as it became clear that not only did the 'experts' have no idea what consciousness even was, but that there was no immediate prospect of it appearing in computers, the term 'artificial intelligence' was repeatedly redefined, until now it's mostly used to talk about 'machine learning' and improving sorting and comparison algorithms, often with some empty hype for the rubes along the way (like the infamous Watson on Jeopardy).

Quote:


Learning algorithms are useful tools, but pretty clearly not intelligent in the same sense as people, or even smarter animals. And we have no idea if that needs a "yet" or is a fundamental feature of them and they never will be.
Though some 'experts' still try to kid themselves on this point. They will make elaborate statements about why this or that produces output that could be called 'conscious', but when you look carefully it's still clear that they have no idea what 'conscious' even means, except empirically.
Old 03-14-2018, 11:11 AM   #20
Johnny1A.2
 
Join Date: Feb 2007
Default Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by weby View Post

Quote:
Originally Posted by VonKatzen View Post
Most of the technology we use today is an extrapolation of stuff from the 1890s to the 1950s.
A lot of technology is, but a lot is not.
The large majority is. There has been very little genuinely new technology over the last few decades; most of it is refinement of what we already have.
Some of the refinements have been fairly sharp, admittedly, but nothing basically new.

Quote:

We have things like instant access to so much information that was unheard of just 30 years ago.
Almost all of the underlying technology is based directly on things developed before 1970, and a lot of it before 1950. We've made it smaller, but it's still the same basic tech. Even integrated circuits are decades old.

Quote:

We also have amazing systems that automate, learn and improve in a way that was inconceivable to most people even 10 years ago.
Still based on pre-1970 tech.

Quote:

As an example, one of our customers is a fairly small manufacturing company making steel parts. They have been our customer since 1994.

I can explain some of the technological changes that have happened in that one company during those years just as a single example.

One of the things they do is cut shapes from steel plate according to drawings.

In 1994 they were in the process of transitioning from traditional cutting, where someone steers the plasma jet by hand, to a robot doing it. Production planning had moved from cut paper pieces to a CAD program about a year earlier.

One of the problems with their work is that there are hundreds of grades and types of steel, and they come in widely varying thicknesses. They all thus have different thermal properties, so you have to cut them differently and plan different safety margins between pieces depending on the material.

Also, placing the various odd shapes on the plate requires rotating them and choosing the right piece of plate from the stock of half-cut plates.

They had a good layout planner. When he left the company four years later, the scrap percentage went up about 40%, then came down slowly over the following years as his replacements learned the job; about five years later they were back at roughly the same scrap percentage as before. So it took a human engineer about five years to learn the trade.

In 2012 they installed an automated planning system. It started with about a 20% higher scrap rate than the human planners, but could do the planning much faster and cost only about as much per year as an engineer. In less than three years it had reached the same efficiency as the humans, and by 2017, about five years after installation, its scrap rate was about 35% lower than the best humans could manage.

That is artificial narrow intelligence in action. Only it is not called AI by the company or by the manufacturer; instead it is "self optimizing". And yet it learned both faster and better than a human.
But it manifestly is not AI in the original sense of the term, and everything you listed is based on decades-old concepts. Implementation has improved enormously, but no more. This is not a Vingean exponentiating curve, it's just the steep part of a typical S curve, and the result of businesses being motivated to spend a lot to refine a few existing technologies extensively.
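A quick numeric sketch of why the two curves are so easy to confuse (the parameters are arbitrary toy values, not a model of any real technology):

```python
import math

# The steep middle of a logistic (S) curve looks locally exponential.
# Year-over-year growth factors start out nearly constant, like a true
# exponential, then decay toward 1.0 as the curve flattens at its ceiling.

def logistic(t, ceiling=100.0):
    return ceiling / (1 + math.exp(-t))

vals = [logistic(t) for t in range(-6, 7)]
growth = [vals[i + 1] / vals[i] for i in range(len(vals) - 1)]

print([round(g, 2) for g in growth])  # early ~2.7 (like e^1), late ~1.0
```

From inside the steep stretch, the data alone cannot tell you whether you are on an exponential or an S curve; only the later flattening settles it.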

An example of a genuinely new technology is the telegraph. Or radio. Or the airplane. Yes, they used what came before, but they did something fundamentally new and different with it. We had a wild explosion of science and technology from the late 1600s to the early 1900s. It was historically exceptional, though comparable periods have happened before.


Quote:

We already have nano-size motors and similar devices created in laboratories, but they are slow to make and require a lot of expensive equipment and time for a single one. That, though, is really just an engineering challenge.
Which may or may not be met. Too soon to say, though we can be reasonably confident that Drexler is pitching snake oil.

Quote:

Basically, progress is like throwing a lot of things at a wall. Eventually some stick and we have progress, but most just fall down. Given the huge number of throws we make today, though, progress as a whole is still high.
Improvement on what we already have is high. The emergence of new stuff has slowed enormously.

Much of that improvement, too, is the result of a somewhat improbable concentration of resources. We kept Moore's Law going for decades in large part by pouring ever-vaster resources into production processes, newer and ever-more-expensive facilities, and finer and finer refinements to squeeze out the potential of silicon.

One reason some of the other substrates that used to be talked about a lot either never took off or remained niche is that we squeezed silicon so hard. It's a little like the internal combustion engine: almost everything about it is the better part of a century old, but we've refined and refined and refined it to the point where it's far more efficient and effective than most engineers would have considered likely in 1920. But it's not new.