
Go Back   Steve Jackson Games Forums > Roleplaying > GURPS

Old 03-17-2018, 08:02 PM   #71
Ulzgoroth
 
Join Date: Jul 2008
Default Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by whswhs View Post
Sure, and if you can set one up to do that, I'd be prepared to agree that it was sapient. But the man in the Chinese Room has no sensory access to anything but little pieces of paper. And Turing stipulated no capabilities for passing the Turing test other than being able to exchange messages by teletype. I'm not making any claims about what computers can or cannot do in principle; I'm only saying that if they don't have semantics - the ability to relate information to the physical world, or intentionality - but only syntax - the ability to manipulate symbol strings - then they aren't sapient, any more than a programmable calculator is when it tells me that 2^10 = 1024.
I'm pretty sure the point of the Chinese Room is that (so it purports anyway) you can build something indistinguishable from semantics out of syntax. You certainly can convert visual and temperature information into symbol strings that can be manipulated syntactically - that's what the proposed computer would be doing since computers don't do anything that isn't describable as syntactic manipulation.
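To make the "everything a computer does is describable as syntactic manipulation" point concrete, here is a toy sketch (my illustration, nothing from Searle or the posters above): sensor readings become uninterpreted symbol strings, and responses come from rule lookup alone. All names here are made up for the example.

```python
def encode(sensor: str, reading: float) -> str:
    """Turn a physical measurement into an uninterpreted symbol string."""
    return f"{sensor}:{reading:.1f}"

# Rewrite rules: pure string-in, string-out lookups. Nothing here "knows"
# what temperature or light is; it only matches symbols and emits symbols.
RULES = {
    "temp": lambda v: "SAY(it is hot)" if v > 30.0 else "SAY(it is cool)",
    "light": lambda v: "SAY(it is bright)" if v > 0.5 else "SAY(it is dark)",
}

def respond(symbol_string: str) -> str:
    """Pure symbol manipulation: split the string, match a rule, rewrite."""
    sensor, value = symbol_string.split(":")
    return RULES[sensor](float(value))

print(respond(encode("temp", 35.2)))   # SAY(it is hot)
print(respond(encode("light", 0.1)))   # SAY(it is dark)
```

Whether chaining enough of these lookups ever amounts to semantics is exactly the point in dispute; the sketch only shows that sensory input poses no barrier to syntactic treatment.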

...Of course, the more troubling thing is that you're making the difference between sapience and non-sapience come down to a set of working eyes.
__________________
I don't know any 3e, so there is no chance that I am talking about 3e rules by accident.
Ulzgoroth is offline   Reply With Quote
Old 03-17-2018, 08:07 PM   #72
Anthony
 
Join Date: Feb 2005
Location: Berkeley, CA
Default Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by whswhs View Post
I'm not making any claims about what computers can or cannot do in principle; I'm only saying that if they don't have semantics - the ability to relate information to the physical world, or intentionality - but only syntax - the ability to manipulate symbol strings - then they aren't sapient, any more than a programmable calculator is when it tells me that 2^10 = 1024.
Bear in mind that the Turing test is not claiming that -- the entity at the other end of the test might be human, after all, and if not, it might still have the ability to interact with the physical world. It's just that your only means of communication is a teletype machine.

It is, of course, legitimate to say that conversing with someone over a teletype doesn't give enough information to decide whether they're sentient, though we don't act that way (I've never communicated with you by any medium that's meaningfully more capable than a teletype, and I'm reasonably confident you're sentient).
__________________
My GURPS site and Blog.
Anthony is offline   Reply With Quote
Old 03-17-2018, 08:50 PM   #73
whswhs
 
Join Date: Jun 2005
Default Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by Ulzgoroth View Post
...Of course, the more troubling thing is that you're making the difference between sapience and non-sapience come down to a set of working eyes.
Unless that's just synecdoche, it's far too narrow. What I'm saying you need is embodiment, physical location, and sense perception. That's what gives you the ability to be conscious OF something. A blind and deaf person might still have that; I'm not saying that Helen Keller wasn't conscious. But I see consciousness as a relation between a physical entity and other physical entities.
__________________
Bill Stoddard

A human being should know how to live fast, die young, and leave a beautiful corpse. Specialization is for insects.
whswhs is offline   Reply With Quote
Old 03-18-2018, 12:16 AM   #74
Johnny1A.2
 
Join Date: Feb 2007
Default Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by VonKatzen View Post


I tend to think that the 'FTL travel is not possible' arguments are pretty solid, as solid as literally any physical theory so far developed.
So was phlogiston, at one time. Ditto Newtonian mechanics. Our current knowledge is always solid until something shows it to be wrong or incomplete, and there's no way to assess when or if that something will happen.

The most that can ever be scientifically said on such matters is: "As far as we know, it can't be done." It cannot be shown to be impossible.

Yeah, within the limits of our knowledge, Special and General Relativity are solid. Will that hold in the future? Nobody knows, and assigning a probability to its holding is meaningless. It's either true or not.

Quote:

I don't think that's quite true. For one thing, the technology to develop the Americas was quite similar to what was used to develop Europe. And lots of farmers made quite a good living, even in the early days. Poorer than Europe, yes - but it wasn't like it cost a billion dollars to get to North America or the West Indies.
True. One of the limiting factors on current space activities is that the Santa Maria was more technologically advanced, relative to its medium of activity, than our current spacecraft. That is, the Santa Maria was a more advanced sea vessel than our best spacecraft are as spaceships. Current spacecraft are more comparable to the 'hollowed out log' stage of sea travel.

But there's no reason to assume that improvement of spaceships has stopped at the current level, either, and some pretty good reasons to assume they will continue to improve over time.

Quote:

The scale of distances is also not remotely comparable. The distance between the continents and the distance between planets are several orders of magnitude greater.
Which establishes nothing, in itself.

Quote:

I agree with basically every word of this blog post from cybereconomics on why interstellar travel is impossible.
He doesn't really present any arguments. He lists the barriers and difficulties, and then says, "It's really hard and we can't see a payoff, so it's not gonna happen."

That's not an argument. If he were arguing about a specific timespan in the near future, he might have a case, but on an open-ended timescale it's meaningless.

Quote:

Solar system colonization is a far less spectacular engineering feat, but suffers from most of the same problems. I not only think it's technically unfeasible, I also think it's basically pointless. Eventually there may be some people further out in the solar system - as massive supply chains are gradually built up - but it won't be remotely like 'going to America'. America is not an uninhabitable hellscape, which every single extraterrestrial body is. America was possible to colonize by Siberian primitives on foot. Modern humans would die instantly anywhere else in the solar system, or in a few hours or days with the most advanced technology possible. And unlike the already sophisticated ship fleets and technology that had been developed for traveling across oceans (of which American colonization was merely an application), there exists no comparable technology to survive in and resupply over the much, much greater gaps in local space.
But there's no reason to think that will always be true. You're trying to use 'now' as the general standard, which is fallacious. Yes, the technological advancement we knew from the 17th through the 20th Centuries was historically exceptional...but not that exceptional. Comparable periods have existed before, and there's no reason to assume this was the last one.

Quote:

I also tend to believe that any society sufficiently advanced to overcome these difficulties would no longer be inhabited by human beings, and would probably not bother. Perhaps there may be robots spreading across the solar system like a steel cancer, but people? Nah. Really, if the Science! futurologists were accurate, they ought to conclude that biological mankind is going to be extinct, and not just because 'evil robots destroy us' but because hyper-advanced AI replaces humans in all functions and we become nothing but pets, incapable of competing with them, with all our agency and material resources stripped away.
Some of them do speculate about that. There's a whole school of thought that still expects the Computeraggedon; the optimists expect a techno-Rapture of the Geeks, the pessimists think we'll be rapidly replaced.

(I don't believe in the Computeraggedon and never have. Vinge's Singularity is mostly hype.)

Quote:

I think simple Darwinism would lead to that. It's also possible that many people would convert themselves into cyborgs indistinguishable from super-machines. If biological human beings exist into the higher 'TL' it will because a lot of stuff like nanomachines, super power sources, etc. are not possible or too expensive to bother with. I am by no means a convinced transhumanist, but if a lot of the science fiction technologies are possible then I think they lead there invariably.
Depends on which SF-like technologies happen. An old fallacy in SFnal speculation (and often among the general public) is 'if we can do X, of course we could also do Y'. This is a fallacy because X may be totally unrelated to Y in fact.

It's like somebody 100 years ago saying that if you could transplant hearts and have them beat, or install implants to let deaf people hear, of course you could do something simple like curing a chest cold...except of course it turns out that the latter is harder than either of the former, as well as quite different in nature.

But that 'looks' like a reasonable comparison, from the POV of 100 years ago.

Does strong AI imply 'uploading'? Maybe, but maybe not. We don't know enough to say. Are there limits to cyborgization that we have no idea about? No way to know until we get there. Some things people expect to be easy turn out to be hard, some things people (including experts) expect to be difficult turn out to be easy. And sometimes the experts turn out to be right.

"It's difficult to make predictions, esp. about the future."

Quote:

These things [ideology, religion, culture, etc.] certainly matter in terms of the contours of society, but there are physical and economic laws that don't change, regardless of where you go and who you are.
But how you interact with those laws is sensitively dependent on ideology and culture and religion and so forth. That's why things sometimes don't happen for long periods, even though they're possible, and then suddenly do, or why things that have gone on for centuries suddenly stop. Something 'soft' changed.

My favorite example is the construction of the irrigation systems that made mass agriculture possible in the American Southwest. From a strict economic POV, it never made sense. It happened anyway, in large part for non-rational emotional reasons.

Quote:


We have some pretty good ideas about the tension of space, as well as certain laws like thermodynamics. While it is in principle possible that some unknown force could bridge these gaps, it's still likely to have the exact effects I mentioned - releasing huge masses of heat energy merely as a side effect of being turned on. Also, any potential imaginary force that powerful would have to be consistent with all the other historical events ever recorded, which conform to the standard model of particle physics in almost every way we can test. I'm not ruling out an X-Force; I'm just saying that it's pretty unlikely, and if it exists it's still probably going to play by the same parameters of things like mass-energy. Despite some claims in Quantum Woo, the actual observable parameters of quantum mechanics do have results that are pretty strongly parallel to classical mathematics and Newtonian mechanics. Thus any new X-Force, like quantum mechanics, is almost certainly going to end up giving you results which almost exactly match particle physics in almost all metrics. And as thermodynamics is more fundamental than any of the others, you can pretty well count on any cosmological distortion of space involving cosmological levels of energy, and since all energy use produces waste heat (there is a mechanical minimum amount of waste heat for even the most efficient possible machine), that is going to be more than enough to flash-burn the Earth to a cinder.

Someone more knowledgeable than I calculated that warping space (as the Star Trek ships do) would require two entire galaxies worth of antimatter to use once.
Which would be impressive if we knew enough about the subject for those calculations to be much more than WAGs. We don't have a good idea about such subjects, because we can't perform experiments on the scale necessary to get the necessary confirming or falsifying data. We can test some aspects of some models, but the uncertainties are high and the speculation factor is large.
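For what it's worth, two pieces of the quoted claim can at least be sanity-checked: the "mechanical minimum amount of waste heat" is a real result (Landauer's principle), and the scale of a "galaxies of antimatter" figure follows from E = mc². A back-of-envelope sketch, with the galaxy mass as an assumed round number rather than anything from the thread:

```python
import math

# Rough figures only; the galaxy mass is an assumed round number.
C = 2.998e8            # speed of light, m/s
K_B = 1.381e-23        # Boltzmann constant, J/K
SOLAR_MASS = 1.989e30  # kg

# Total annihilation of a Milky-Way-scale mass (~1.5e12 solar masses,
# a commonly quoted rough figure) would release E = m * c^2:
galaxy_mass = 1.5e12 * SOLAR_MASS   # kg (assumed figure)
energy = galaxy_mass * C**2         # joules, on the order of 1e59

# Landauer's principle: erasing one bit at temperature T dissipates at
# least k_B * T * ln(2) of heat - the "minimum waste heat" of computing.
landauer_300k = K_B * 300 * math.log(2)   # joules per bit, ~room temp

print(f"Galaxy-mass annihilation: ~{energy:.2e} J")
print(f"Landauer limit at 300 K:  ~{landauer_300k:.2e} J per bit")
```

None of this settles whether warp-drive energy estimates are WAGs, as the reply argues; it only shows which parts of the claim rest on settled physics and which on speculation.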
__________________
HMS Overflow-For conversations off topic here.
Johnny1A.2 is offline   Reply With Quote
Old 03-18-2018, 12:21 AM   #75
Johnny1A.2
 
Join Date: Feb 2007
Default Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by VonKatzen View Post



I was mainly going at computer AI. AI may be entirely realistic built on other principles, I just don't think spinning plates and gears are the way to do it. If I thought AI was realistic through simple improvements in computer technology, and I thought computer technology was heading in that direction, then I'd be inclined to include it in a hard science fiction game. But in either case I believe it would be a very alien intelligence, unless it was simply a brain in a jar.
Here we're in complete agreement.

The trouble with most uses of SAI in SF is that it tends to be 'human in a box' for story-telling purposes, which really doesn't make sense when examined closely. Nor does a society of equally-interacting, equal-status humans and AIs. It's one of those Cool tropes that falls apart when you really think about it.

I'm not sure even a 'human brain in a jar' would stay psychologically human-like on an open-ended basis. I suspect his thought processes would begin to diverge from 'baseline' humans fairly quickly.
__________________
HMS Overflow-For conversations off topic here.
Johnny1A.2 is offline   Reply With Quote
Old 03-18-2018, 09:01 AM   #76
Ji ji
 
 
Join Date: Feb 2012
Default Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by whswhs View Post
I don't know the source material; I haven't seen the film in question. So I can't really give an informed answer.

My basic view is that the Searle Chinese room argument, and the Turing test, are invalid as statements of what it is to be "human." They seem to focus entirely on language. I don't think that typed or written or even spoken inputs and outputs are sufficient for humanity or for sapience.

Consider the Turing machine. Can I say to it, "How did you like the chicken masala we had for dinner?" or "doesn't that singer have a great voice?" or "beautiful weather, isn't it?" Can it comment on the color of my eyes, or whether the room is too hot or too cold? A human being, if not disabled in specific ways, would be able to perceive and think about a vast range of things of that sort, and discuss them, without having been prepared to do so in advance; the discussion would be based on their awareness of the world and of their existence in it, their embodiment, and their language would be a way of expressing and focusing that awareness. For example, C can say to me, "He wants your attention," and I can look around and see that our cat has come up to where I'm sitting and flopped onto his back on the floor next to me—and a purely symbol manipulating engine, even if it could notionally pass the Turing test or do the Searle trick, could not do such things. (Setting aside the idea that its inputs come from a simulated human body in a simulated physical world.)

In other words, as a human being, and more broadly a sapient one, I have intentionality: I can direct my awareness to physical entities, and use language to refer to them. My language is not just a self-contained system of symbols: It contains words that refer to the world, such as "I," "you," "they," "here," "there," "now," "then," and "thus," and such words can guide another person's attention to a common feature of the environment. This is a big part of the primary use of language, which is face to face communication. Exchanging messages over the Internet, or by teletype, as Turing imagined, is a specialized secondary use of language.

And I can tell that another person, or a nonhuman animal without speech, is conscious by seeing them move their body in a way that directs their attention to stimuli that provide interesting information. Which also points at the kind of things I would take as providing evidence that a machine was conscious.

I hope this is some help.
Quote:
Originally Posted by whswhs View Post
Well, on one hand, the brain does things that cannot be reduced to computation, too. It's an important source of chemical substances that influence human physiology and behavior, and it has intimate relations with the pituitary, which is the central performer in the human endocrine system.

It also outputs electrical signals to the muscles and such that don't take a binary on/off form. In fact no neurons do binary on/off; they do pulses with different frequencies. Of course you can model this mathematically, but then you can model the stresses and flows in the Earth's crust mathematically, and that doesn't mean that the Earth is a huge computer; it means that mathematics is science's characteristic tool.

I also think that talking about the relationship between the mind and the brain is a misleading phrasing. It's like talking about the relationship between the legs and running, as if running were a separate entity that somehow entered into an interaction with the legs.
The assumption that synaptic firing is the basic unit of mental function is not needed for a full simulation. (By the way, we already know that it's not the basic unit.)
It's not a real issue, as we could simulate a brain at finer levels, for example simulating every atom or every subatomic particle. It becomes just a problem of increased data size and complexity.
However: every physical aspect of the brain can be measured, and the sequence of its physical states and transformations can be reduced to an algorithm. At this level of description there is no 'world' and no 'vision': there are physical alterations of specialised nerve cells struck by radiation, which in turn trigger a complex set of physical transformations in the brain. All of this can be simulated by a sequencer of physical states using a set of transformation rules (for example, a computer).
The input is raw data in an appropriate format. It can be a simulation from another computer, or a sensor interfacing with the environment. As living beings we use a sensor (the eye); hopefully we'll soon be able to build prosthetic eyes/sensors for people with damaged eyes.
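The "sequencer of physical states using a set of transformation rules" can be pictured with a deliberately tiny sketch. The two-cell system and its rule table here are made up for illustration; a brain simulation would differ only in (enormous) scale, which is the post's point:

```python
# A made-up two-cell system: the full physical state is a tuple, and one
# fixed rule table maps each state to its successor. Running the table
# forward is all the "sequencer" does.
RULES = {
    (0, 0): (0, 1),
    (0, 1): (1, 0),
    (1, 0): (1, 1),
    (1, 1): (0, 0),
}

def step(state: tuple) -> tuple:
    """Apply one deterministic transformation to the whole state."""
    return RULES[state]

state = (0, 0)
history = [state]
for _ in range(4):
    state = step(state)
    history.append(state)

print(history)  # [(0, 0), (0, 1), (1, 0), (1, 1), (0, 0)]
```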

On the other hand, there are phenomena that we cannot simulate with a formal system. They fall into two categories.
One is introspection, or sentience: as a qualitative phenomenon, it cannot be represented as a formal system.
The second is undecidable propositions: we can manage them, but a formal system cannot.

The first is not a problem for the simulation. We never directly perceive others' experience of self; we just perceive their behaviours and assume that they have self-awareness as we do*. In fact, if reality were just a simulation fed to our brain, we could not tell the difference.

The second is resolved the same way. Of course a computer simulating a mind would not have self-awareness or an understanding of undecidable propositions, but it could behave as if it had, and we couldn't tell the difference - as above.
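As a concrete illustration of the undecidability point (standard textbook material, nothing specific to minds): a program can *semi-decide* halting by simulating with a step budget, but no finite budget ever turns "hasn't halted yet" into "never halts".

```python
def runs_within(program, budget: int) -> bool:
    """True if `program` (a generator function) finishes within `budget` steps."""
    it = program()
    for _ in range(budget):
        try:
            next(it)
        except StopIteration:
            return True   # it halted within the budget
    return False          # inconclusive: might halt later, might never

def halts_quickly():
    yield  # one step, then done

def loops_forever():
    while True:
        yield

print(runs_within(halts_quickly, 10))   # True
print(runs_within(loops_forever, 10))   # False - but only "not yet"
```

The asymmetry is the whole problem: a True answer is definitive, while a False answer at any budget proves nothing about larger budgets.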

*Regardless of speech. When my dog behaves happy, I perceive only her behaviour, and yet I assume that she is behaving happy because she feels happy. I know that she is happy because I have first-hand experience of feeling happiness in myself, and I assume the same of her. Even more amazing, this implies that we know the other one knows our feelings because it is like us.



PS: Sometimes I think that this board has the most stimulating discussions and the most brilliant minds in the whole internet. Thanks guys!

Last edited by Ji ji; 03-18-2018 at 09:11 AM.
Ji ji is offline   Reply With Quote
Old 03-18-2018, 10:03 AM   #77
whswhs
 
Join Date: Jun 2005
Default Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by Ji ji View Post
On the other hand, there are phenomena that we cannot simulate with a formal system. They fall into two categories.
One is introspection, or sentience: as a qualitative phenomenon, it cannot be represented as a formal system.
The second is undecidable propositions: we can manage them, but a formal system cannot.

The first is not a problem for the simulation. We never directly perceive others' experience of self; we just perceive their behaviours and assume that they have self-awareness as we do. In fact, if reality were just a simulation fed to our brain, we could not tell the difference.

The second is resolved the same way. Of course a computer simulating a mind would not have self-awareness or an understanding of undecidable propositions, but it could behave as if it had, and we couldn't tell the difference - as above.
It seems arbitrary for you to suppose that introspection necessarily gets credibility as a source of knowledge of our own consciousness/self-awareness, but extrospection doesn't get credit as a source of knowledge of an actually existing physical world. In fact, I think the other way around makes more sense:

If nothing exists, there can be no consciousness: a consciousness with nothing to be conscious of is a contradiction in terms. A consciousness conscious of nothing but itself is a contradiction in terms: before it could identify itself as consciousness, it had to be conscious of something. (Ayn Rand, Atlas Shrugged.)

And the reliance on formal systems is bound up with the idea that consciousness is primary. It goes back historically to David Hume's distinction between "relations of ideas" and "matters of fact," which was part of a philosophy in which there were no material objects, no cause and effect, no continuing self, no reliable memory, no other minds; there was only an ongoing stream of sensations, which constituted both all that we could know of physical reality and all that we could know of our own minds. For Hume, "relations of ideas" were purely formal. For existence-first or "outside-in" philosophers like Aristotle, logic wasn't just "relations of ideas": it was a statement of how things were in reality, to which human thought had to conform if it was to be about reality.

I think that consciousness is consciousness OF something. And I also think that consciousness is a process taking place in a physical being, and manifested in the activity of such a being—in particular, in its orientation to the things it's conscious OF.

When somebody's lying on the ground after a car crash, one of the things the paramedics check is whether the person is conscious. And they don't do this by engaging in arcane philosophical discussions, or by applying some sort of advanced scientific instruments; they attempt to get the person's attention, talk to them, see if they can track what's going on around them, and so on. They check whether they're engaged with the external world. So when philosophers talk about consciousness as some sort of mysterious inner state, they're talking about something entirely different from the ordinary meaning of "consciousness"; in fact, I think, about a philosophical chimera.

I would add that discussion of whether we "directly perceive" others' consciousness seems to involve another double standard. What does it mean to "directly perceive" something? I can see my checkbook on the desk in front of me. But that isn't some sort of causeless, meansless event; I perceive it because it reflects light, because light stimulates the cells of my retinae, because they fire nerve impulses, and so on. That doesn't seem "direct." On the other hand, saying that I don't directly perceive the checkbook doesn't entail that there is something else that I DO directly perceive, such as an image of a checkbook in some hidden realm of consciousness (what Dennett calls the "Cartesian theater"), or that I logically infer the physical checkbook from the inner representation of the checkbook; it entails that this idea of "direct perception" doesn't mean anything. I perceive that the checkbook is on my desk, and I perceive (for example) that my cat is awake and has heard something, and both of those are results of my brain putting together information from my senses—and not doing so through a process of logical inference. Perception is a physical process, just as much as flight is, or burning; a computer model of perception no more results in anything perceiving the physical world than a computer model of fire results in flame, smoke, or ash or consumes fuel.
__________________
Bill Stoddard

A human being should know how to live fast, die young, and leave a beautiful corpse. Specialization is for insects.

Last edited by whswhs; 03-18-2018 at 10:15 AM.
whswhs is offline   Reply With Quote
Old 03-18-2018, 11:45 AM   #78
malloyd
 
Join Date: Jun 2006
Default Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by whswhs View Post
Perception is a physical process, just as much as flight is, or burning; a computer model of perception no more results in anything perceiving the physical world than a computer model of fire results in flame, smoke, or ash or consumes fuel.
I think that last is probably a step too far.

Having all of your senses be virtual doesn't seem to disqualify you from being conscious. And it is in fact still perception of something in the physical world - yes, your consciousness *interprets* a physical pattern of voltages on the input pins of your chip in a decidedly different way than a human would, but it's still external physical stimulus being turned into a mental model of the world. Possibly a very wrong model, but still.

Edit: And come to think of it, the Chinese room has at least one sense (the one your questions come in through) and an output mechanism that can show attention to some particular part of the data it obtains through it (it can, in a classic internet discussion tactic, come right out and *say* "I'm paying attention to this particular word in what you said and ignoring the rest"), so arguably it's just a difference of degree.

Not that a supposedly qualitative philosophical difference turning out to be quantitative would be exactly unprecedented.
__________________
--
MA Lloyd

Last edited by malloyd; 03-18-2018 at 11:53 AM.
malloyd is offline   Reply With Quote
Old 03-18-2018, 12:37 PM   #79
whswhs
 
Join Date: Jun 2005
Default Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by malloyd View Post
I think that last is probably a step too far.

Having all of your senses be virtual doesn't seem to disqualify you from being conscious. And it is in fact still perception of something in the physical world - yes, your consciousness *interprets* a physical pattern of voltages on the input pins of your chip in a decidedly different way than a human would, but it's still external physical stimulus being turned into a mental model of the world. Possibly a very wrong model, but still.
I think that's a problematic extension of the term "perception."

First, a terminological note: You say that consciousness "interprets" something. Consciousness is not an entity or a physical subsystem within an entity; it doesn't have agency. It's the brain, or the organism, or the computer, that interprets things.

Now, if you say that any process in which stimulation of a physical system results in its forming a model of a world is "perception," that seems to include, for example, the classic brain in a box with electrodes, or the person trapped in the Matrix, or the person victimized by Descartes' "evil genius." In other words, it includes a wide range of the classic skeptical scenarios for perception being completely misleading and untrustworthy, in effect no different from hallucination. And if those are the case, then skeptical conclusions seem to follow from them: You cannot claim to know anything, not even that there are external physical stimuli.

The key here is your comment "perhaps a very wrong model." A process that can just as well create a very wrong model as a right one isn't perception.

A subordinate point is that you describe the computer as "perceiving" the voltages on various input pins. That seems exactly like saying that I "perceive" the firing of my retinal neurons. And that's a misleading way to describe it. I perceive a monitor screen with a big yellow patch, a smaller white patch, and some black shapes spread across the white patch; I interpret those shapes as words; but my retinal impulses are not what I perceive, but (part of) how I perceive. I don't say "Oh, I'm getting a frequency of X on this neuron's firing, and a frequency of Y on this one's," and so on for some vast number of neurons, and then deduce that they form a certain image; rather, I say "that's a monitor screen showing such and such." Indeed, taking it the other way, supposing that what we "perceive" is the internal electrical states of our brains, is another path to skeptical conclusions.

For comparison, if I put a thermometer into a roast game hen that I've taken out of the oven, what the thermometer is measuring is the internal temperature of the game hen; it's not measuring a voltage in a wire, though there is such a voltage and it forms part of the process that results in the measurement. And "perceive" is like "measure": It describes an action toward the physical world.
__________________
Bill Stoddard

A human being should know how to live fast, die young, and leave a beautiful corpse. Specialization is for insects.
whswhs is offline   Reply With Quote
Old 03-18-2018, 12:58 PM   #80
Anthony
 
Join Date: Feb 2005
Location: Berkeley, CA
Default Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by whswhs View Post
The key here is your comment "perhaps a very wrong model." A process that can just as well create a very wrong model as a right one isn't perception.
It's not 'just as well'; it takes quite a bit of effort. But by your definition humans are incapable of perception: senses most certainly can be deceived, and we're working on synthesizing them (mostly to treat medical problems, mind you, but you can play music on a cochlear implant).
__________________
My GURPS site and Blog.
Anthony is offline   Reply With Quote